
Educational Philosophy and Theory

ISSN: 0013-1857 (Print) 1469-5812 (Online) Journal homepage: https://www.tandfonline.com/loi/rept20

Platform ontologies, the AI crisis and the ability to hack humans
‘An algorithm knows me better than I know myself’

Michael A. Peters

To cite this article: Michael A. Peters (2020) Platform ontologies, the AI crisis and the ability to
hack humans ‘An algorithm knows me better than I know myself’, Educational Philosophy and
Theory, 52:6, 593-601, DOI: 10.1080/00131857.2019.1618227

To link to this article: https://doi.org/10.1080/00131857.2019.1618227

Published online: 27 May 2019.



EDITORIAL

Platform ontologies, the AI crisis and the ability to hack humans ‘An algorithm knows me better than I know myself’

[Image: Paul Klee, Angelus Novus. Source: https://en.wikipedia.org/wiki/Angelus_Novus]

A Klee painting named ‘Angelus Novus’ shows an angel looking as though he is about to move away from
something he is fixedly contemplating. His eyes are staring, his mouth is open, his wings are spread. This is
how one pictures the angel of history. His face is turned toward the past. Where we perceive a chain of
events, he sees one single catastrophe which keeps piling wreckage and hurls it in front of his feet. The
angel would like to stay, awaken the dead, and make whole what has been smashed. But a storm is
blowing in from Paradise; it has got caught in his wings with such a violence that the angel can no longer
close them. The storm irresistibly propels him into the future to which his back is turned, while the pile of
debris before him grows skyward. This storm is what we call progress.
–Walter Benjamin, On the Concept of History, https://www.sfu.ca/andrewf/CONCEPT2.html

Philosophy is declining as are the humanities in general. We are in a permanent state of crisis. A
storm is brewing. Catastrophes keep piling up. They form a bad ecology that roots and integrates
human beings into systems as part of the hive intelligence that harnesses big data and puts
its analysis to new commercial uses. The storm is an apt metaphor in the face of climate change
and species extinction. It links data, life and the planet. Meanwhile, philosophy is still busy
debating its age-old questions: freedom, agency, happiness, the good life. The philosophy of the
concept, as Deleuze once noted, has migrated to the market; and philosophy in the age of
information has gone off-shore to the AI engineer. The old categories are empty containers
formulated in the Enlightenment and form an ideal board game with no winners. They are not able to

© 2019 Philosophy of Education Society of Australasia



describe or analyse the current fifth generation cybernetic system rationality that engulfs us all,
changing the very conditions of existence.
Robert Rossney, an engineer at Google, reflects on the claim that ‘Google knows me better
than I know myself’: ‘There’s a huge collection of signals that I’ve given to Google over
the course of my relationship with it as a signed-in user. (That’s distinct from my relationships with
it as an anonymous user and as an employee.) Barring some kind of system failure, those signals
go back very far, and there are a huge number of them.’ He continues: ‘I think that over the years,
as more and more ML classifiers get trained on ever increasingly large and rich data sets, the algo-
rithms will be able to make more and better predictions about what I might find useful. But know-
ing what kind of music news I want to read about (say) is very far away from knowing who I am.
My behavior as a consumer is not my identity.’ Finally, he muses:
The thing that I think is most interesting about this big heap of nearly-useless maybe-preferences that’s
scattered across dozens of bigtables around the world (my signals travel much faster and much farther
than I do myself) is that it will continue to exert a faint influence on ML classifiers that Google trains long
after I’ve ceased to contribute to it. I don’t expect my heirs, whoever they are, to exert themselves in
deleting my Google account when I die. So the data will still be there. The only signal it will be
accumulating will be the fact that it’s not accumulating signals anymore. But I’m sure it will still be
helping Google’s algorithms make marginally-better-than-they-otherwise-would decisions, in a bizarro-
world version of the continuation of my spiritual existence beyond the grave. https://www.quora.com/
Does-Google-know-us-better-than-we-know-ourselves

Nishant Gajbhe, reflecting on the same Quora question, puts the case very simply:
If you use Gmail, they of course also have all your e-mail messages. If you use Google Calendar, they
know all your schedule. There’s a pattern here: For all Google products (Hangouts, Music, Drive, etc.),
you can expect the same level of tracking: that is, pretty much anything they can track, they will … .
Essentially, if you allow them to, they’ll track pretty close to, well, everything you do on the Internet. In
fact, even if you tell them to stop tracking you, Google has been known to not really listen.

These are mostly benign observations about tracking and the so-called ‘digital footprint’, the
data trail created whenever anyone uses the internet: websites visited, emails sent and
received, registrations for online services, journeys across the web, articles read, goods
bought, and the traces left behind unintentionally.
The power of tracking has been noted by numerous commentators from various
perspectives. Few philosophers attempt the analysis, as the digital somehow overflows traditional
ontological and epistemological categories. Many of these commentators are good at raising
philosophical questions. Carmichael (2014), writing for The Atlantic, suggests that ‘Google Knows
You Better Than You Know Yourself’. He makes the point that ‘Predictive analysis combs through
calendars and search histories—and gets in the way of routine self-deception.’
Anyone who’s ever cleared a browser history to maintain self-respect, or been appalled by a song that
some predictive streaming music service suggests (then … liked it), has faced technology’s ability to throw
us back at ourselves. And even with Now, most revelations feel small. (Carmichael 2014)

He is talking about Google Now, advertised as ‘The right information at just the right time’, a
Google app that provides ‘helpful cards with information that you need throughout your day,
before you even ask’ (https://www.google.co.uk/landing/now/).
Jon Evans writes on ‘When Facebook Knows You Better Than You Know Yourself’:
Every time you log in to Facebook, every time you click on your News Feed, every time you Like a photo,
every time you send anything via Messenger, you add another data point to the galaxy they already have
regarding you and your behavior. That, in turn, is a tiny, insignificant dot within their vast universe of
information about their billion-plus users.

It is probable that Facebook boasts the broadest, deepest, and most comprehensive dataset of human
information, interests, and activity ever collected. (Only the NSA knows for sure.) Google probably has more
raw data, between Android and searches–but the data they collect is (mostly) much less personal. Of all the

Stacks, I think it’s fair to say, Facebook almost certainly knows you best. https://techcrunch.com/2015/10/24/
when-facebook-knows-you-better-than-you-know-yourself/

He goes on to provide examples of how ‘your phone can tell whether you’re depressed.
Algorithms are already being used to judge our character, and can determine whether your rela-
tionship is in trouble based on your collective social graph.’
‘Know thyself’ (gnōthi seauton) was an ancient Greek aphorism, one of more than
five hundred Delphic maxims of ancient practical wisdom standing at the beginning of the Western tradition.
‘Know thyself’ was carved into the anticum of the portico at the Temple of Apollo, first described
by Pausanias, the Greek traveller and geographer of the second century AD, in his Hellados
Periegesis (Description of Greece), mostly based on his own first-hand visits and observations. The
Delphic aphorisms were attributed to Apollo himself; later, Joannes Stobaeus, a fifth-century
scholar who compiled extracts from Greek authors in two related volumes
entitled Extracts and Anthology, attributed the Delphic principles more plausibly to the Seven
Sages of Greece. The sages included Thales of Miletus, to whom the aphorism
‘Know Thyself’ is also attributed; they are first mentioned as a group in Plato’s Protagoras (342e–343b).
Socrates refers to the ancient wisdom of the Sages, inspired by Spartan education, written at
Apollo’s shrine at Delphi in a kind of ‘laconic brevity’:
[342d] … In those two states there are not only men but women also who pride themselves on their
education; and you can tell that what I say is true and that the Spartans have the best education in
philosophy and argument by this: if you choose to consort with the meanest of Spartans,[342e] at first you
will find him making a poor show in the conversation; but soon, at some point or other in the discussion,
he gets home with a notable remark, short and compressed—a deadly shot that makes his interlocutor
seem like a helpless child. Hence this very truth has been observed by certain persons both in our day and
in former times—that the Spartan cult is much more the pursuit of wisdom than of athletics; for they know
that a man’s ability [343a] to utter such remarks is to be ascribed to his perfect education. Such men were
Thales of Miletus, Pittacus of Mytilene, Bias of Priene, Solon of our city, Cleobulus of Lindus, Myson of Chen,
and, last of the traditional seven, Chilon of Sparta. All these were enthusiasts, lovers and disciples of the
Spartan culture; and you can recognize that character in their wisdom by the short, memorable sayings that
fell from each of them: they assembled together [343b] and dedicated these as the first-fruits of their lore to
Apollo in his Delphic temple, inscribing there those maxims which are on every tongue—‘Know thyself’ and
‘Nothing overmuch.’ To what intent do I say this? To show how the ancient philosophy had this style of
laconic brevity; and so it was that the saying of Pittacus was privately handed about with high approbation
among the sages—that it is hard to be good.
Plato, Plato in Twelve Volumes, Vol. 3, translated by W.R.M. Lamb. Cambridge, MA: Harvard University Press/
London: William Heinemann Ltd, 1967. http://www.perseus.tufts.edu/hopper/text?doc=Perseus%3Atext%
3A1999.01.0178%3Atext%3DProt.%3Asection%3D342e

Socrates refers to a time before the institutionalization of philosophy, when the Pythia
was the high priestess of the Temple of Apollo at Delphi, established in the 8th century BC,
and was known as the Oracle, although the oracle may have been present in some form
from 1400 BC. The Pythia (sometimes as many as three women), as the Delphic Oracle
inspired by Apollo, was the most powerful and authoritative oracle in classical Greece – an
influence lasting some four centuries into the 4th century BC, consulted and mentioned by
many classical sources.
‘Know thyself’, along with ‘care of the self’, is a phrase expressed in the form of an aphorism
(aphorismos), a Greek literary form adopted by Hippocrates and also associated with the wisdom
literature of the Greek, Christian, Islamic and Hindu traditions, later used by La Rochefoucauld,
Pascal, Nietzsche and Wittgenstein, among others. These two aphorisms were the starting point
for Foucault (2005) in The Hermeneutics of the Subject, a course of twelve lectures he gave at
the Collège de France in 1982, devoted to studying a set of practices in late Antiquity concerned
with what the Greeks called epimeleia heautou, based on the principle that one should ‘take care
of the self’, understood in relation to gnōthi seauton, ‘Know thyself’, and the theme of self-
knowledge. This was a set of principles, exercises and practices that became a form of life – the

care of the soul – that Socrates made his mission and encouraged others to undergo as an
ongoing work of transformation of the self, involving a duty, a fundamental obligation and a set
of techniques and spiritual exercises. Socrates was to intone that ‘the unexamined life is not
worth living’, and forms of self-examination that emphasized the self – conceived in terms of
a juridico-political model of being sovereign over oneself, of exercising control over oneself and
being fully independent – became the very basis for the tradition that we know as the philosophy
of the subject, which informs the Western tradition and Western institutions.
If we are to believe the commentators, this tradition is now moribund or passé. It has been
surpassed in the digital era, where human beings are constituted as a series of datapoints based
on our searches, our purchases and what we read. For some, such as Seth Stephens-Davidowitz
(2017), a former Google data scientist, the data may offer us as a society a better way to truly
understand who people really are. This is the subject of his book Everybody Lies: Big Data, New
Data, and What the Internet Can Tell Us About Who We Really Are. In a related interview, he says:
‘[Google Trends] is … probably the most important data set ever collected on the human
psyche, and definitely a really important tool for researchers to focus on’ and ‘We tend to make
horrible predictions about what we’re going to do in the future. Almost all of us are way too
over-optimistic. I think data can ground us much better.’1
Are we really to believe that we have catapulted out of the philosophical tradition? Are its
concepts, categories and language outmoded, unable to capture what is going on? Is ‘Know
Thyself’ just another philosophical happiness app, new from Oracle?
This question has been answered in the negative by Yuval Noah Harari (2018) in 21
Lessons for the 21st Century. He puts the case in a paper in Nature, ‘Reboot for the AI
revolution’2 (Harari, 2017a):
The automation revolution is emerging from the confluence of two scientific tidal waves. Computer
scientists are developing artificial intelligence (AI) algorithms that can learn, analyse massive amounts of
data and recognize patterns with superhuman efficiency. At the same time, biologists and social scientists
are deciphering human emotions, desires and intuitions. … In particular, as routine jobs are automated, opportunities for
new nonroutine jobs will mushroom. For example, general physicians who focus on diagnosing known
diseases and administering familiar treatments will probably be replaced by AI doctors. Precisely because of
that, there will be more money to pay human experts to do groundbreaking medical research, develop new
medications and pioneer innovative surgical techniques. This calls for economic entrepreneurship and legal
dexterity. Above all, it necessitates a revolution in education (p. 324–5).

He argues we must develop new systems and institutions and puts his money on lifelong
education and universal basic income (ho-hum). In a video posted on 25 April 2019, ‘Fei-Fei Li &
Yuval Noah Harari in Conversation – The Coming AI Upheaval’3 (Stanford, Ethics in Society), he is
more interesting in summing up his perspective: philosophy has given way to engineering, and
we can encapsulate the crisis in the equation B × C × D = AHH, which means ‘Biological
Knowledge times Computing Power times Data = the Ability to Hack Humans’. But, as he argues,
the link is not yet complete. When biological knowledge is linked to AI, then we will be able to create an
algorithm that understands me better than I understand myself. Harari says: ‘We are now facing
not just a technological crisis but a philosophical crisis’.4 And he continues:
Because we have built our society, certainly liberal democracy with elections and the free market and so
forth, on philosophical ideas from the 18th century which are simply incompatible not just with the
scientific findings of the 21st century but above all with the technology we now have at our disposal. Our
society is built on the ideas that the voter knows best, that the customer is always right, that ultimate
authority, as Tristan said, is with the feelings of human beings and this assumes that human feelings and
human choices are these sacred arena which cannot be hacked, which cannot be manipulated. Ultimately,
my choices, my desires reflect my free will and nobody can access that or touch that. And this was never
true. But we didn’t pay a very high cost for believing in this myth in the 19th and 20th century because
nobody had a technology to actually do it. Now, people— some people—corporations, governments are
gaming the technology to hack human beings. Maybe the most important fact about living in the 21st
century is that we are now hackable animals.

So, Harari explains: ‘To hack a human being is to understand what’s happening inside you on
the level of the body, of the brain, of the mind, so that you can predict what people will do.’ I’ll
leave you to read the rest of the interview.
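Harari's equation can be rendered as a toy multiplicative model. Everything below except the multiplicative form itself is an illustrative assumption: the function name, the factor scales and the example scores are invented, not anything Harari specifies.

```python
def ability_to_hack_humans(b: float, c: float, d: float) -> float:
    """Toy reading of B x C x D = AHH: biological knowledge (B) times
    computing power (C) times data (D) gives the ability to hack humans.
    Factor scales are hypothetical; only the multiplicative form is Harari's."""
    if min(b, c, d) < 0:
        raise ValueError("factor scores must be non-negative")
    return b * c * d

# While the biological link is 'not yet complete' (B = 0), the product is
# zero no matter how much computing power or data is amassed.
print(ability_to_hack_humans(0.0, 10.0, 10.0))  # 0.0
print(ability_to_hack_humans(1.0, 10.0, 10.0))  # 100.0
```

The point the toy makes is Harari's own: computing power and data alone do not suffice; only the biological term completes the link.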
Certainly, we can no longer be optimistic about the new age of data or ‘dataism’. It has long
proved its susceptibility to control, to manipulation, to closet data science and to nefarious use
in a cavalier fashion by CEOs and data specialists who should know better. They probably even
completed a course on ethics at some point in their lives. Now we live in the digital shadow
of the Facebook–Cambridge Analytica scandal, in which data mining and analysis were combined
to serve Ted Cruz’s and Donald Trump’s campaigns in 2015 and 2016, respectively.
The political consulting firm also helped to mastermind Leave.EU. Cambridge Analytica (CA)
acquired and used personal data from Facebook users – some 87 million – to provide psycho-
graphic data to micro-target voting audiences.5 CA was founded by Steve Bannon and Robert
Mercer in 2013, and its methods were based on the academic work of Michal Kosinski, who had
joined the Psychometrics Centre at Cambridge University in 2008.6 Carole Cadwalladr, a writer for
The Observer in Britain, spent over two years probing how the tech billionaires had broken
democracy and her Ted talk ‘Facebook’s role in Brexit – and the threat to democracy’ based
on these investigations went viral.7 Her assessment of Facebook’s intervention in the Brexit
vote is devastating, especially on its participation in ‘electoral fraud’ and Mark Zuckerberg’s refusal
to give evidence before the British parliament.
What’s missing from Harari is the political economy of digital capitalism. Zuboff’s (2019) new
book The Age of Surveillance Capitalism provides the missing element in Harari’s analysis. In
‘“The goal is to automate us”: welcome to the age of surveillance capitalism’, Naughton (2019) of
The Guardian reviews Zuboff’s book and asks her a series of questions.8 He begins by stating her
central thesis in her own words:
Surveillance capitalism … unilaterally claims human experience as free raw material for translation into
behavioural data. Although some of these data are applied to service improvement, the rest are declared as
a proprietary behavioural surplus, fed into advanced manufacturing processes known as ‘machine
intelligence’, and fabricated into prediction products that anticipate what you will do now, soon, and later.
Finally, these prediction products are traded in a new kind of marketplace that I call behavioural futures
markets. Surveillance capitalists have grown immensely wealthy from these trading operations, for many
companies are willing to lay bets on our future behaviour.

Naughton (2019) focuses on ‘the arrogant appropriation of users’ behavioural data – viewed
as a free resource’ in a lawless territory where Google could digitise whatever it wanted. Zuboff
is refreshingly non-technical in asserting:
Surveillance capitalism is a human creation. It lives in history, not in technological inevitability. It was
pioneered and elaborated through trial and error at Google in much the same way that the Ford Motor
Company discovered the new economics of mass production or General Motors discovered the logic of
managerial capitalism.
Surveillance capitalism was invented around 2001 as the solution to financial emergency in the teeth of the
dotcom bust when the fledgling company faced the loss of investor confidence. As investor pressure
mounted, Google’s leaders abandoned their declared antipathy toward advertising. Instead they decided to
boost ad revenue by using their exclusive access to user data logs (once known as “data exhaust”) in
combination with their already substantial analytical capabilities and computational power, to generate
predictions of user click-through rates, taken as a signal of an ad’s relevance.
Operationally this meant that Google would both repurpose its growing cache of behavioural data, now put
to work as a behavioural data surplus, and develop methods to aggressively seek new sources of
this surplus.
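The mechanism Zuboff describes – behavioural surplus fed into ‘machine intelligence’ to predict user click-through rates – can be sketched, very roughly, as a logistic scoring function. The feature names, weights and numbers below are invented for illustration; they are not Google’s actual model.

```python
import math

# Hypothetical behavioural signals and weights (illustrative only).
WEIGHTS = {"past_clicks_on_topic": 1.2,
           "searches_last_week": 0.4,
           "minutes_on_site": 0.05}
BIAS = -3.0  # baseline log-odds of a click

def predicted_ctr(signals: dict) -> float:
    """Behavioural surplus in, 'prediction product' out: a logistic
    estimate of the probability that this user clicks the ad."""
    z = BIAS + sum(w * signals.get(name, 0.0) for name, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))

heavy_user = {"past_clicks_on_topic": 3, "searches_last_week": 5, "minutes_on_site": 40}
new_user = {}  # no behavioural surplus yet

print(round(predicted_ctr(heavy_user), 3))  # near 1: a strong 'relevance' signal
print(round(predicted_ctr(new_user), 3))    # baseline: about 0.047
```

The sketch makes Zuboff’s structural point concrete: the more behavioural surplus accumulated about a user, the sharper the prediction product that can be sold.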

She adds further on in the interview:


Nearly every product or service that begins with the word “smart” or “personalised”, every internet-enabled
device, every “digital assistant”, is simply a supply-chain interface for the unobstructed flow of behavioural
data on its way to predicting our futures in a surveillance economy.

In response to Naughton’s question: ‘What are the implications for democracy?’ she expounds,
and I quote this in full:
During the past two decades surveillance capitalists have had a pretty free run, with hardly any interference
from laws and regulations. Democracy has slept while surveillance capitalists amassed unprecedented
concentrations of knowledge and power. [Ed. Come home Foucault, all is forgiven]. These dangerous
asymmetries are institutionalised in their monopolies of data science, their dominance of machine
intelligence, which is surveillance capitalism’s “means of production”, their ecosystems of suppliers and
customers, their lucrative prediction markets, their ability to shape the behaviour of individuals and
populations, their ownership and control of our channels for social participation, and their vast capital
reserves. We enter the 21st century marked by this stark inequality in the division of learning: they know
more about us than we know about ourselves or than we know about them. These new forms of social
inequality are inherently antidemocratic.

She forcefully describes the antidemocratic and anti-egalitarian tendencies of the juggernaut
of surveillance capitalism and the way it differs from industrial capitalism to ensure an
‘unobstructed flow of behavioural data to feed markets that are about us but not for us.’ It is a
powerful story and one that needs to be told as an antidote to those who extol ‘dataism’, an
ideology that, as Harari (2017b, p. 428) predicts, treats information flows as the supreme value
and aims at interpreting the human species as a single data-processing system.
This is how Harari sums up the philosophical issues in an article for the Financial Times in
2016 under the heading ‘Yuval Noah Harari on big data, Google and the end of free will’:
For thousands of years humans believed that authority came from the gods. Then, during the modern
era, humanism gradually shifted authority from deities to people. Jean-Jacques Rousseau summed up this
revolution in Emile, his 1762 treatise on education. When looking for the rules of conduct in life,
Rousseau found them “in the depths of my heart, traced by nature in characters which nothing can
efface. I need only consult myself with regard to what I wish to do; what I feel to be good is good, what I
feel to be bad is bad.” Humanist thinkers such as Rousseau convinced us that our own feelings and
desires were the ultimate source of meaning, and that our free will was, therefore, the highest authority
of all. Now, a fresh shift is taking place. Just as divine authority was legitimised by religious mythologies,
and human authority was legitimised by humanist ideologies, so high-tech gurus and Silicon Valley
prophets are creating a new universal narrative that legitimises the authority of algorithms and Big Data.
This novel creed may be called “Dataism”. In its extreme form, proponents of the Dataist worldview
perceive the entire universe as a flow of data, see organisms as little more than biochemical algorithms
and believe that humanity’s cosmic vocation is to create an all-encompassing data-processing system and
then to merge into it.9

As he argues, ‘Dataists believe in the invisible hand of the dataflow. As the global data-processing
system becomes all-knowing and all-powerful, so connecting to the system becomes the
source of all meaning.’ In the Dataist society, free will and humanism melt away as biochemical
algorithms and their manipulation assert themselves. The shift from humanism to dataism, as
Pernille Tranberg suggests in summarising the last chapter of Homo Deus, means that
the human body is an algorithm. There are two kinds of algorithms: the electronic and the biochemical (the
organism), and it is just a question of time before the electronic outcompetes the latter, as the human
brain has no capacity compared to the electronic.

The results are grim, as are the possibilities. Within this universal data semiotic we can note
the rise of what I call ‘platform ontologies’: apps, more than 1500 of them, to make
you happy, fit, slim, healthy, to boost your well-being, give you motivation, etc.:

 The 41 Best Health and Fitness Apps, https://greatist.com/fitness/best-health-fitness-apps


 8 Popular Health and Fitness Apps for 2018, https://www.canstar.com.au/health-insurance/
best-health-fitness-apps/
 11 of the best wellness apps to keep your New Year’s resolutions on track, https://
www.hellomagazine.com/healthandbeauty/health-and-fitness/2019010866354/wellness-apps-
to-download/

 18 Best Health and Fitness Apps of 2018, https://www.active.com/fitness/articles/18-best-


health-and-fitness-apps-of-2018
 The Best 11 Apps to Track Your Happiness in 2019, https://positiveroutines.com/track-your-
happiness-apps/

Here’s the pitch:


We bet you’ve heard how technology can be hazardous to your mental health, but there’s more to the
story than that. Your tech, especially your phone, can be used for the good, and these apps to track your
happiness, and all of your other moods, fall under that category. If you’re familiar with behavior change or
habit-building, you probably know that tracking is a top tip for making change stick. So these happiness
apps are applying that science to your moods, which means you can watch for trends, see what most
affects them, and make changes for the better. The trick is to find out what makes you happy and make
sure you get more of that.
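The core of the pitch – tracking ‘is a top tip for making change stick’, so ‘watch for trends’ – amounts to little more than a rolling average over self-reported scores. A minimal sketch, with invented data, shows how thin that layer of ‘science’ is:

```python
from statistics import mean

def rolling_mood(scores, window=3):
    """Rolling mean of daily mood scores (say, 1-10) over the given window."""
    return [round(mean(scores[max(0, i + 1 - window):i + 1]), 2)
            for i in range(len(scores))]

# A week of (invented) self-reported mood scores.
daily = [4, 5, 3, 6, 7, 7, 8]
print(rolling_mood(daily))  # [4.0, 4.5, 4.0, 4.67, 5.33, 6.67, 7.33]
```

The ‘trend’ the app shows back to you is just this smoothed curve; what the platform archives, of course, is the raw series.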

From the same advertisement:


Track Your Happiness is an app from Harvard University researchers that sends you questions throughout
the day about what you’re doing and feeling at that moment … Happify can help you practice … shifting
your mindset to a happier one … . My Gratitude Journal, an app that helps you track five things you’re
grateful for every day … Headspace guides you through a variety of meditations …

The health and fitness apps archive biometric data and use it as stored biological feedback.
They can give you a graphed history of your body on a daily basis. The mental health apps
can chart and predict your moods and tell you how to reach the state of happiness. The
algorithms know more about us than we know ourselves. No longer ‘Know Thyself’ but ‘Know Thy
Apps’. Let the digital apps regulate your self, your body, your motivation, your sleep, your
thoughts, etc.
Platform ontologies in the age of dataism. Alongside Facebook and Google, these onto-platform
apps provide the digital answer to Know Thyself: an accumulation of datapoints
arranged in an archive that records every minute variation and reconstructs it as an engine of
self-regulation based on the value of efficiency. Increasingly, learning apps are part of this set.
Vincent (2018) notes:
Big data is only just beginning to make inroads in education … Companies like BridgeU use algorithms to
help locate universities and courses based on student preferences … big data is behind standardised testing
programmes like those administered by CEM and the success of many new pedagogical applications that aim
to help to apply scientific findings of learning to course material like the textbooks developed by Kognity.

But he sees the possibility of liberal humanism and Dataism existing together for the benefit
of schools. He doesn’t really appreciate the implications of the argument, although there is
something to be said for liberating the flow of data and making everything publicly available. The
problem is that if we were to do this, the flood of information would be uncontrollable and unusable.
Certainly, the release of all journal research papers currently tied up behind paywalls would be a
major improvement, but while assisting public-good science, especially in the global
South, it would not of itself lead to the greatest scientific revolution in the history of humanity. We
already suffer from too much data, and from misinformation and disinformation. So we need the
organized release of data, information and knowledge, and we need systems of validation, fact-
checking and general evaluation. David Brooks’s (2013) ‘The Philosophy of Data’ made the case for
dataism: ‘it’s really good at exposing when our intuitive view of reality is wrong’; and ‘data can
illuminate patterns of behavior we haven’t yet noticed’. Allowing the free flow of data may result in
increases in quality of life and in the enhancement and protection of the body (through biosensors
and regulators). If all our biodata were freely aggregated, medical science could make huge progress.
But the free flow of data might also lead to manipulation and control as we have seen with the
example of Cambridge Analytica and Facebook. There are issues of rights to privacy and data own-
ership at stake and also problems with political manipulation and fake news in democratic states.

One wonders, given the profusion and development of education and learning apps, where
totalizing systems aggregate student assessment data without rights of appeal or independent
audit, whether students benefit from being known better, in terms of academic achievement, by an
algorithmic system than they know themselves. To what extent does this contradict students’
autonomy and independent judgement? Is autonomy even a possible value in
this system?
Mehul Rajput (2018) gives us some idea of how big the elearning market is:
According to Orbis research, the global eLearning market worldwide is set to surpass USD 275 billion value
by 2022. The market size was estimated over USD 165.21 billion in 2015 and is predicted to grow at over
7.5% CAGR during the 2015-22 period. The major factors promoting the eLearning market includes:

 Low cost
 Easy accessibility
 A shift towards flexible education solutions
 Increased effectiveness by animated learning
 Increased internet penetration: Statistics show that the number of internet users ranges around 3.2 bil-
lion, which makes 43 percent of the global population
 A surge in the number of smartphones: currently owned by 36 percent of the world’s population
— How Big Is The eLearning Market And The Role Of Mobile Apps?, https://elearningindustry.com/big-elearning-market-role-mobile-apps
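As a rough sanity check on the figures quoted above, the stated 7.5% compound annual growth rate, applied to the 2015 base of USD 165.21 billion over the seven years to 2022, does land close to the projected ‘over USD 275 billion’. A minimal sketch of the arithmetic:

```python
# Sanity check on the quoted Orbis figures: apply a 7.5% CAGR
# to the 2015 base of USD 165.21 billion for 7 compounding years.
base_2015 = 165.21       # USD billions, quoted 2015 estimate
cagr = 0.075             # quoted compound annual growth rate
years = 2022 - 2015      # compounding periods

projected_2022 = base_2015 * (1 + cagr) ** years
print(f"Projected 2022 market: USD {projected_2022:.1f} billion")
# → roughly USD 274 billion, broadly consistent with the quoted claim
```

The check does not validate the forecast itself, only that the quoted numbers are internally consistent.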
He makes the case for mobile learning apps:
The advent of mobile apps has made learning more engaging and interesting. Now, you can find a mobile
application for almost any work, from shopping to banking to education. With the help of mobile apps, you
can start eLearning literally anytime and anywhere. The fact that most of them can work in offline mode
has made them more retentive and appealing to the public. (ibid.)

See also https://www.mindinventory.com/blog/educational-app-development-features-cost-estimation/?utm_campaign=elearningindustry.com&utm_source=%2Fbig-elearning-market-role-mobile-apps&utm_medium=link
I’m not convinced. Rajput’s (2018) piece is more about the growth of the market and his company
Mindinventory than a critical discussion of any educational benefits. The elearning mobile app
revolution might have just begun, but I’m a worried man, for the reasons laid out by Harari,
Zuboff, and others. Will the storm of progress spell the demise of philosophy and the ancient
link between philosophy and pedagogy? Are we destined to evolve into bioinformational beings
that become more and more integrated into a single evolving data processing system? Once the
link between bioinformational technologies and the cognitive sciences is made at the nano-level,
Harari’s fears will be realised. Then, surely, corporations and governments will be able to hack
human beings. Goodbye humanism as an educational and pedagogical philosophy.

Notes
1. https://www.theguardian.com/technology/2017/jul/09/everybody-lies-how-google-reveals-darkest-secrets-seth-
stephens-davidowitz.
2. https://www.nature.com/polopoly_fs/1.22826!/menu/main/topColumns/topLeftColumn/pdf/550324a.pdf.
3. https://www.ynharari.com/fei-fei-li-yuval-noah-harari-coming-ai-upheaval/.
4. When Tech Knows You Better Than You Know Yourself, Thomson (2018) Interview with Yuval Noah Harari and
Tristan Harris, Video and transcript, Wired, https://www.wired.com/story/artificial-intelligence-yuval-noah-harari-
tristan-harris/.
5. See https://www.theguardian.com/news/series/cambridge-analytica-files.
6. See his website at https://www.michalkosinski.com/ and his research, which includes links between facial
features and psychological traits, ‘Private traits and attributes are predictable from digital records of human
behavior’ (2013), and ‘Psychological targeting as an effective approach to digital mass persuasion’ (2017).
7. https://www.ted.com/talks/carole_cadwalladr_facebook_s_role_in_brexit_and_the_threat_to_democracy?language=en.

8. https://www.theguardian.com/technology/2019/jan/20/shoshana-zuboff-age-of-surveillance-capitalism-google-facebook.
9. https://www.ft.com/content/50bb4830-6a4c-11e6-ae5b-a7cc5dd5a28c.

ORCID
Michael A. Peters http://orcid.org/0000-0002-1482-2975

References
Carmichael, J. (2014). Google knows you better than you know yourself. The Atlantic. Retrieved from https://www.
theatlantic.com/technology/archive/2014/08/google-knows-you-better-than-you-know-yourself/378608/
Foucault, M. (2005). The hermeneutics of the subject: Lectures at the Collège de France, 1981–82 (F. Gros, Ed.;
G. Burchell, Trans.). New York: Palgrave Macmillan.
Harari, Y. N. (2017a). Reboot for the AI revolution. Nature, 550(7676), 324–327. doi:10.1038/550324a
Harari, Y. N. (2017b). Homo Deus: A brief history of tomorrow. UK: Vintage Penguin Random House.
Harari, Y. N. (2018). 21 lessons for the 21st century. London: Jonathan Cape.
Naughton, J. (2019). ‘The goal is to automate us’: Welcome to the age of surveillance capitalism. The Guardian.
Retrieved from https://www.theguardian.com/technology/2019/jan/20/shoshana-zuboff-age-of-surveillance-capitalism-google-facebook
Plato. (1967). Plato in twelve volumes (Vol. 3, W. R. M. Lamb, Trans.). Cambridge, MA: Harvard University Press;
London: William Heinemann.
Rajput, M. (2018). How big is the eLearning market and the role of mobile apps? Retrieved from
https://elearningindustry.com/big-elearning-market-role-mobile-apps
Stephens-Davidowitz, S. (2017). Everybody lies: Big data, new data, and what the internet can tell us about who we
really are. New York, NY: Dey Street Books.
Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power.
New York, NY: Public Affairs.

Michael A. Peters
Beijing Normal University, Beijing, China
mpeters@bnu.edu.cn
