Performance Research Vol.21, No.1 'On Sleep' (February 2016)


http://www.tandfonline.com/doi/abs/10.1080/13528165.2016.1138762

On the Sleep of the Computer, or the Performance of Randomness:
A Close Reading of Ralf Baecker’s Mirage

Mi You



Ever since Philip K. Dick published his celebrated science fiction novel Do Androids Dream of Electric Sheep? in 1968, the bewildering question of the title has triggered much fascination. How, indeed, do
computers dream? If the question back then was driven by the fascination and fear of mirroring the
human and the computer, hence concerned with whether artificial intelligence is capable of
generating passion, feeling and affection, then the recent release of Google DeepDream has elevated
our relation to computational dreaming to a new level. Google DeepDream is a learning algorithm that generates ‘enhanced’ or ‘phantasized’ images from a given input image, drawing on patterns the network has previously learnt. Images at once resembling the original, yet hauntingly distorted and grotesquely textured, went viral in Internet communities. We have never come so close to, as it were, commanding the computer to ‘dream’ while witnessing its ‘dream’.
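To give a rough sense of the mechanics, the following minimal sketch shows the kind of gradient-ascent loop on which DeepDream-style ‘dreaming’ relies, written in Python against a pretrained network from the torchvision library; the choice of network, layer, step size and iteration count are illustrative assumptions of mine, not Google’s actual configuration.

import torch
from torchvision import models

# Convolutional part of a pretrained VGG16, truncated at an arbitrary mid-level layer.
features = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:16].eval()

def dream(image: torch.Tensor, steps: int = 20, lr: float = 0.05) -> torch.Tensor:
    """Gradient ascent on the input image so that the chosen layer's activations grow stronger."""
    image = image.clone().requires_grad_(True)
    for _ in range(steps):
        loss = features(image).norm()      # amplify whatever the layer already "sees"
        loss.backward()
        with torch.no_grad():
            image += lr * image.grad / (image.grad.abs().mean() + 1e-8)
            image.grad.zero_()
    return image.detach()

# e.g. dreamed = dream(torch.rand(1, 3, 224, 224))  # a random image "enhanced" into textures

The point to retain is simply that the ‘dream’ is produced by amplifying patterns the network has already learnt to recognize; the grotesque textures are exaggerations of its own prior ‘seeing’.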
At the heart of this Internet and media frenzy are two questions: on what grounds can we human beings assert ourselves if we are exposed to a computer’s dreams? And to whom, or even to what, do we attribute genuine creativity?
These are the questions that media artist Ralf Baecker attempts to address in his multimedia
installation Mirage (2014), featuring a sleeping computer. In what follows I will outline the work and
unveil its underlying computational process. Building on that, I will draw on Alfred North Whitehead
and Luciana Parisi to make a case for computation as imbued with speculative reason, yet not
captured under the anthropocentric regime of cognitive rationalization. It is based upon this analysis
that I see the work of Baecker as performance and claim its significance for performance research.
Specifically I see Mirage as anchored in the larger question of sleep as a creative act and as
emergence out of the virtual and the potential.

On the Installation, the Helmholtz Machine and the Problematic of the Computer’s ‘Sleep’

Ralf Baecker’s Mirage is a projection apparatus that generates a synthesized landscape approximating
how a computer ‘dreams’. The embedded computer registers and processes real-time geodynamo
data through a fluxgate magnetometer, illustrating in numbers the activity of the magnetic field of
Earth influenced by the Sun and solar wind. At the core of the computer component is an algorithm
inspired by the principle of ‘the Helmholtz Machine’, an unsupervised learning algorithm used in
artificial neural network research that adopts ‘wake’ and ‘sleep’ phases to consolidate its neural
network. Given real-time geodynamo data input, the algorithm ‘dreams’ or ‘hallucinates’ variations of the data. The output is translated into a two-dimensional matrix of forty-eight muscle wires which, upon receiving pulses, mechanically manipulate the surface of a horizontally placed acrylic mirror
sheet. Through laser projection, the changing shape of the mirror sheet is made visible in the form of
a wavy, undulating landscape on the wall.
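The signal path just described can be sketched schematically as follows; the function names, the assumed 6x8 arrangement of the forty-eight wires and the pulse scaling are placeholders of mine for the purpose of illustration, not Baecker’s actual implementation.

import numpy as np

N_WIRES = 48            # forty-eight muscle wires, arranged here as an assumed 6x8 grid
GRID = (6, 8)

def read_magnetometer() -> np.ndarray:
    """Stand-in for one real-time fluxgate magnetometer sample (geodynamo data)."""
    return np.random.randn(16)

def fantasize(sample: np.ndarray) -> np.ndarray:
    """Stand-in for the Helmholtz-Machine-inspired step that 'dreams' variations of the data."""
    return np.tanh(sample.mean() + np.random.randn(N_WIRES))

def to_pulses(fantasy: np.ndarray) -> np.ndarray:
    """Scale the fantasized vector to pulse strengths in [0, 1] on the wire matrix."""
    pulses = (fantasy - fantasy.min()) / (np.ptp(fantasy) + 1e-9)
    return pulses.reshape(GRID)

for _ in range(3):      # the installation loops continuously; three iterations suffice here
    pulses = to_pulses(fantasize(read_magnetometer()))
    print(pulses.round(2))   # in the installation these values would drive the wire actuators,
                             # deforming the mirror sheet that the laser projection traces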
While the artistic translation of computer signals into enchanting visual forms is meticulously
and masterfully carried out, the process that concerns us the most lies in the algorithmic articulation
of a ‘dreaming computer’.
In machine learning, the Helmholtz Machine is a kind of unsupervised learning algorithm
adopted to find hidden patterns in unlabelled data. It functions without pre-given training examples
and it is the underlying principle of Google DeepDream. The basic idea behind it is to use a layered
neural network with a dual set of algorithms as a model for generating and recognizing patterns in the
world. According to the view of the Helmholtz Machine, the world is composed of patterns of
‘flickering bits’, and for each bit pattern that is externally observable in the world there are probabilities, or inner representations, that inform and explain it. The Helmholtz Machine attempts to ‘explain’, or communicate in compressed form, the probabilities involved in each bit pattern, so that it does not exhaust the system. It does so by deploying a ‘wake-sleep’ algorithm, a dual network of recognition and generation.
In the wake phase, the recognition algorithm analyzes the input pattern so as to estimate the underlying generators for it; the generating algorithm in turn modifies itself in such a way that it becomes more likely to have given rise to the input pattern. In the sleep phase, the process is reversed: the generating algorithm stochastically chooses the generators, hence ‘fantasizes’ inputs, while the recognition algorithm in turn adapts itself so as to become more likely to have inferred these fantasized inputs.
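To make the dual structure concrete, here is a minimal numerical sketch of one wake-sleep update for a two-layer machine of stochastic binary units, in the spirit of the original Helmholtz Machine literature; the layer sizes, learning rate and random stand-in data are assumptions of mine, and the code is neither Baecker’s nor Google’s.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample(p):
    """Stochastic binary units: fire with probability p."""
    return (rng.random(p.shape) < p).astype(float)

n_vis, n_hid, lr = 8, 4, 0.05
R = np.zeros((n_hid, n_vis)); r_b = np.zeros(n_hid)   # recognition weights: visible -> hidden
G = np.zeros((n_vis, n_hid)); g_b = np.zeros(n_vis)   # generative weights: hidden -> visible
prior = np.zeros(n_hid)                               # generative bias over the hidden "causes"

data = sample(np.full((200, n_vis), 0.5))             # stand-in for observed bit patterns

for d in data:
    # Wake phase: recognize the input, then nudge the generative model towards it.
    h = sample(sigmoid(R @ d + r_b))                  # bottom-up "explanation" of the input
    prior += lr * (h - sigmoid(prior))                # the prior learns which causes tend to occur
    p = sigmoid(G @ h + g_b)
    G += lr * np.outer(d - p, h)                      # generation becomes more likely to yield d
    g_b += lr * (d - p)

    # Sleep phase: fantasize an input top-down, then nudge recognition towards the fantasy.
    h_dream = sample(sigmoid(prior))                  # stochastically chosen generators
    fantasy = sample(sigmoid(G @ h_dream + g_b))      # the "dreamed" input pattern
    q = sigmoid(R @ fantasy + r_b)
    R += lr * np.outer(h_dream - q, fantasy)          # recognition learns to infer the dreamed causes
    r_b += lr * (h_dream - q)

The two local update rules mirror one another: in the wake phase the generative weights move towards the observed pattern, while in the sleep phase the recognition weights move towards the fantasized one.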
While this effective process of machine learning is dubbed ‘Deep Learning’, and duly evokes
an anthropomorphic image of a ‘learning’ brain, it is important not to equate computational process
and biophysical process. Indeed, popular media have been misleading: an algorithm structurally designed with layers of artificial neurons is fondly likened to a newborn baby who sifts through and organizes information in the world. The danger lies in subjecting humans to the image of
the machine, resulting in claims that the human brain is nothing but a machine, which Katherine
Hayles pronounces as the essential view of the ‘posthuman’ (1999: 3). To be sure, science has not
come to the point of fully accounting for how the brain functions, let alone of drawing computational
designs modelled on the brain.
Inspired by the Helmholtz Machine, Baecker’s Mirage approximates the ‘sleep’ phase through an algorithm that captures the output of the computer’s ‘dream’ or ‘hallucination’. The liveness of the work is attested by a small display showing the real-time geodynamo data as they are fed into the
Helmholtz Machine for learning and ‘fantasizing’. The process through which the data become
patterns to be learnt and return in fantasized shapes in the ‘dream’ phase happens in the algorithm,
thus remaining opaque to the audience. Only at the end of the translation and mediation through
matrixed mechanical muscle wires and laser projection does the ‘dream’ manifest itself in visual form.
The correlation of the real-time data and the visual manifestation of the dream cannot be legibly read
(unlike in Google DeepDream), and that is precisely not the point. What is at stake is not to have
humans witness, as it were, the dream of ‘random’ patterns based on prior knowledge of what the
computer has ‘seen’. Much more crucial is to take the artwork along with the underlying algorithmic
process as a whole and ask what kind of intelligible and affective quality this process entails.
In this light, how do we deal with terms like the ‘wake’ and ‘sleep’ of the Helmholtz Machine
without being scientifically naïve and ethically anthropocentric? Especially, how do we conceptualize
the ‘sleep’ phase, which seems to suggest new avenues of understanding thought and reason, albeit,
or all the more interestingly, not within a human registry? We will need a conceptual as well as
terminological shift that describes algorithmic behaviours without overriding them with cognitive and rationalistic meaning infected by an anthropomorphic viewpoint. It is for this reason that we resort to Alfred North Whitehead’s notions of prehension and speculative reason. At the same time, it is an attempt to set an immanent ground for our participation as human beings in the world, alongside the activities of algorithms, without drawing simple, representative parallels but in a way that accounts for both in their differentiation. We will return to this immanent ground for discussions relevant to
performance research later.

Prehension, Speculative Reason

The concept ‘prehension’, which Whitehead defines as ‘uncognitive apprehension’ (1967: 69), renders
legible the relational connections between people, things and their surroundings by highlighting the
operation of uncognitive ‘grasping’ -- a pre-epistemic, not necessarily knowledge-based operation of
relating to things and environments. Prehension ‘does not require explanation but must enable the
exhibition of the common feature of all situations in which something makes a difference for
something else’ (Stengers 2011: 147). So the Earth orbiting around the Sun prehends the Sun, the
apple falling from the tree prehends the ground, and we prehend the subtle layers of the physical and
cultural environment. In light of this, no entity is ever fixed or static, for ‘[a] new entity comes into
being by prehending other entities; every event is the prehension of other events’ (Shaviro 2009: 28).
Here, we see that concrete things as well as events and affairs are on an equal footing
as prehension materials, leading to the affirmation that ‘thoughts in the concrete are made of the
same stuff as things are’ (James 1996: 37). And if we remember that, for Whitehead, perception is the
‘cognition of prehensive unification’ or, more simply, ‘cognition of prehension’ (Stengers 2011: 147),
it follows that the image of thought as an anthropocentric, representational operation has to be
dethroned. That is to say, the intelligible as a mental operation is to be untied from a defined human
subject, while remaining open to refashioning and revision. Indeed, Whitehead’s philosophy amounts
‘to free[ing] our notions from participation in an epistemological theory of sense-perception’
(Whitehead 1978: 73).
It is on this notion of perception unbound from sense organs that we base the following
analysis of Whitehead’s speculative reason in the context of computation. The field has been
rigorously explored by Luciana Parisi, whose pioneering work has positively inspired my reading of
works of media art.
If physical prehension can be derived from the actual entities of the world, a ‘conceptual
prehension’ bears ‘no reference to particular actualities, or to any particular world’ (Whitehead 1978:
33). For Parisi, conceptual prehension is ‘the abstract, non-cognitive, and non-physical capture of
infinities’ (Parisi 2014: 173). This leads her to make a case for computational processing of data, or
what she calls ‘computational thinking’, as a form of pure conceptual prehension.
For Whitehead, the view of speculative reason does not conform to conditions set by
antecedent circumstances (Whitehead 1929: 84) and hence offers a state of being not determined by
a linear chain of cause and effect. Parisi extrapolates this into the computation world, and highlights
the proliferation of what cannot be calculated in the face of the infinite amount of data introduced
into computation. For her, the incomputable conditions computation today. It means that ‘a notion of
speculative reason is not concerned with the prediction of the future through the data of the past,
but with incomputable quantities of data that rather transform initial conditions’ (Parisi 2013: 9).
This reading of computational automation (which lies at the foundation of learning algorithms and artificial intelligence) is radically different from the mechanical view of computation, according to which a machine is made up of a discrete set of components, each of which entails a set of step-by-step instructions that can be iterated ad infinitum. Thus to endow the computer with
speculative reason is to depart from the mechanical view of it as rule-obeying and subject only to
deterministic randomness.

Virtuality, Potentiality, Dream

Though on a limited scale, the set of algorithms at play in the Helmholtz Machine is imbued with
speculative reason, in the sense that its anticipatory function, or active and productive pre-response
function, shows exactly the conceptual prehension of input data. In the event of prehending input
data, the possible events where patterns correspond to the representation of data get reorganized. In
so far as the relation of the input data to the possible patterns that potentially inform them in the
background is not deterministic, this can be likened to emergence relative to the virtual (as in Deleuze) or the potential (as in Whitehead). The virtual or the potential is the immanent condition of all emergence. Importantly, it does not ‘prefigure or predetermine the actualities that emerge from
it. Rather, it is the impelling force, or the principle, that allows each actual entity to appear (to
manifest itself) as something new’ (Shaviro 2009: 34). Creativity is rooted in the virtual and potential field being transformed, articulated, elaborated, composed, contained and dissociated, over and over again. In this sense, the speculative reason implicated in the algorithm is creative.
Yet it is a deceitful jump to equate the way we treat dreams with virtuality, for if the state of
dreaming momentarily liberates virtuality and potentiality, the moment we wake up dissociates it
from actuality. We are once again on a plane of signs and interpretation; our attempt to make sense
and create meanings constitutes ‘a futile chase after [the dream’s] “shadows” and doubtful
appearances’ (Kerslake 2007: 178). This is why Guattari regards dreams as fundamentally reterritorializing activities (178).

Mirage as Performance and Experience

What is the whole discussion of the Helmholtz Machine’s algorithmic process as speculative reason
good for? For one thing, it should make clear that the performative dimension of Baecker’s work --
the computer ‘at sleep’ -- should not be approached in a representational way. That is to say, taking the visual rendering as showing how a computer’s dream looks does a disservice to the true creativity of the algorithm.
On an experiential and affective level, it is clear that if it were not for creative transformation
and translation we would not be able to performatively approximate the virtual/potential in
algorithmic processes. In this sense, it is performative to make visible the otherwise opaque process
of a computational ‘dream’, yet strictly without subsuming it under representational frameworks. In contrast, Google DeepDream functions more on the representational level, catering to our desire to
see the grotesqueness of distortion between ‘reality’ input and ‘dream’ output.
Baecker’s Mirage catches the ‘sleep phase’ of the computer in action without overlaying it
with meaning and interpretation, so that our relation to it is not discounted by the ‘shadows’ of
dream. It thus affords a speculative look into the field of virtuality and potentiality that allows the emergence of differentiated forms -- in this case, geodynamo data patterns.
Thinking with Whitehead, we have come to appreciate that prehension of data in
algorithmic processes ‘involves the capture of abstract ideas not yet actualized yet nevertheless real’ (Parisi 2014: 174). The real is at once conceptual and, given the particular functional structure in which the algorithm is embedded, capable of being rendered into experience. In her
profound research on algorithmic architecture, Luciana Parisi attempts to clarify how abstract
algorithms construct the actuality of spatio-temporal experience that ‘do[es] not stem from the
directly lived’ (Parisi 2013: 22). She does so by tracing how the algorithmic behaviours determining
the shape of a physical entity in architecture are subject to randomness and hence dynamism. In the
same vein, Baecker’s Mirage renders abstract ideas of randomness into experienceable actualities
and hence suggests a way for us to approach the virtual and potential field of algorithms
performatively.


Performance and Immanence, or Infinity on Infinity


On an ontological level and meta-level, the relevance of Mirage for performance research lies in its
articulation of immanence in and of performance, or, in other words, the ontological requisite for
thinking, being and performing, both human and otherwise.
This immanent perspective first asks us not to view performance as outside of ourselves, but
as an emergence differentiated in degree by our own participation in a situation. However, even when
the immanent perspective in performance is ‘inherently participatory’ (Cull 2013: 146) and suggests
that ‘participants are produced by processes of participation’ (147), it requires clear analysis of the locations, as well as the degrees, of acting and being acted upon. As Cull argues, ‘the extent to which we
act according to this immanence varies, the degree to which we acknowledge our “partness” or
partiality can and does differ’ (162). In live art or performance art, this may mean a heightened level
of awareness of the mutual becoming or affective enfolding in moments of participation, in tangible
or intangible manners. In the case of Baecker’s Mirage, taken as a performance, we have to consider
the virtual field bearing algorithmic behaviour (‘sleep’) as engendering and engendered by the
immanence of performance.
Now how do we draw a plane of immanence through Baecker’s Mirage, taken as a
performance with its visible and non-visible operations, to include the audience as well? It seems that
the conceptual bridge to construct, based on the immanent perspective of performance, is to view
the algorithmic process and the awareness of the audience both as articulated emergences of a given moment. This does not mean scientifically reconstructing, on a molecular level, whether and how the audience’s bodily presence interacts with the sensing magnetometer. More crucially,
it situates the ‘parts’, both as observing audience and as processes in the artwork, in a ‘whole’ that is
about emergence from randomness and virtuality.
Ultimately, being in the world that is made up of manifold prehensions affirms the principle
of generative immanence. ‘Our powers to act and be acted on, to change and be changed, are
themselves the result of our immanent participation’ (Cull 2013: 162). In much the same way, Parisi
sees the algorithmic operations in the computer imbued with randomness and potentials as
‘immanence’, ‘whereby incomputable quantities of thought and affect infect computable procedures’
(Parisi 2013: 32).
If the computational process could be seen as prehending infinity and in turn modifying its
condition in infinite variations, or if, as Parisi puts it, programming becomes ‘the calculation of
complexity by complexity, chaos by chaos: an immanent doubling infinity or the infinity of the infinite’
(19), then registering the artwork with its infinitely generating and varying algorithmic feature as
performance and relaying the prehending experience as a general underlying structure in life is an
immanent doubling infinity. At this point we have come to find symmetry in algorithmic performance
and our participation in the artwork as a performance -- infinity on infinity.


Conclusion

Confronted with a landscape generated by a ‘sleeping computer’, we have taken a close look at Baecker’s installation Mirage and examined at length the Helmholtz Machine, the algorithmic structure that inspires it.
In analyzing the ‘dreams’ of a computer, this article tries to avoid the anthropomorphic
pitfall of drawing an easy allegory between the image of human dreaming and random informational
output in computation. Instead, the process of computational ‘thinking’ is recast in light of the
Whiteheadian notions of prehension and speculative reason as basic operations in the world, within
and beyond the human cognitive registry. In so doing, it characterizes speculative reason and
emergence from virtuality/potentiality as a foundational process both in the algorithmic and in the
affectively registered human world; indeed, that is where they intersect.
Taking Mirage as a performance means acknowledging both the algorithmic ‘dream’ of the
computer and the state of the participating audience as part of a larger whole. The article draws a
plane of immanence that accounts for both on the level of emergence out of virtuality and
potentiality. Not only can the audience performatively approach the computer at ‘sleep’, but they
are also part of a performativity outside of the work -- the generative, creative intelligence that
constitutes the world.
In light of this, we could perhaps sincerely ask ourselves: what does it mean to live in the age
of the algorithm?


References

Cull, Laura (2013) Theatre of Immanence: Deleuze and the ethics of performance, New York: Palgrave
Macmillan.

Hayles, N. Katherine (1999) How We Became Posthuman: Virtual bodies in cybernetics, literature and
informatics, Chicago: The University of Chicago Press.

James, William (1996) Essays in Radical Empiricism, Lincoln: University of Nebraska Press.

Kerslake, Christian (2007) Deleuze and the Unconscious, New York: Continuum.

Parisi, Luciana (2013) Contagious Architecture: Computation, aesthetics, and space, Cambridge and
London: MIT Press.


Parisi, Luciana (2014) ‘Digital automation and affect’, in Marie-Luise Angerer, Bernd Bösel and
Michaela Ott (eds) Timing of Affect: Epistemologies, aesthetics, politics, Zurich and Berlin: Diaphanes,
pp. 161--77.

Shaviro, Steven (2009) Without Criteria: Kant, Whitehead, Deleuze, and aesthetics, Cambridge and London: MIT Press.

Stengers, Isabelle (2011) Thinking with Whitehead: A free and wild creation of concepts, trans.
Michael Chase, Cambridge and London: Harvard University Press.

Whitehead, Alfred North (1929) The Function of Reason, Boston: Beacon Press.

Whitehead, Alfred North (1967 [1925]) Science and the Modern World, New York: The Free Press.

Whitehead, Alfred North (1978 [1929]) Process and Reality, New York: The Free Press.
