
The Social and Political Impact of the Rise of the Tech Giants
SCO103 New Empires: The Reach and Frontiers of the Tech Sector
Social Stability in the Age of the Tech Giants
Technology: Made in our Image
Class Activity 5:
► Do you know what/how much the tech giants know about you? Take a look at the following
online resources and list the types of data that the tech giants have been gathering
from user activity on their products and platforms. Are you comfortable giving
up your data in exchange for ‘free’ services?
► Get into your groups and spend about 15 min reflecting on these questions.
1. The Wired guide to your personal data (and who is using it)
https://www.wired.com/story/wired-guide-personal-data-collection/
2. How Facebook outs sex workers
https://gizmodo.com/how-facebook-outs-sex-workers-1818861596
3. Facebook figured out my family secrets, and it won’t tell me how
https://gizmodo.com/facebook-figured-out-my-family-secrets-and-it-wont-tel-1797696163
4. The Visual Capitalist, ‘Here’s What the Big Tech Companies Know about You’
https://www.visualcapitalist.com/heres-what-the-big-tech-companies-know-about-you/
5. The Visual Capitalist, ‘What does Google know about you?’
https://www.visualcapitalist.com/what-does-google-know-about-you/
Google’s Autocomplete Function
► ‘whites are’21
► ‘Latinas are’22
► ‘Canadians are’23
► ‘Chinese students are’
► Google no longer autocompletes certain sensitive search terms. 24
► ‘Asians are’25
► ‘Blacks are’26
► Is this the right way to deal with bias? (A toy sketch of how autocomplete can inherit bias follows the footnotes below.)
21. Jonathan Cohn, ‘Google’s algorithms discriminate against women and people of colour’, The Conversation, 25 April 2019. Available online at https://theconversation.com/googles-algorithms-discriminate-against-women-and-people-of-colour-112516 (Last accessed on 12 July 2019).
22. Ibid.
23. Ibid.
24. Ibid.
25. Ibid.
26. Ibid.
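To make the mechanism concrete, here is a minimal sketch of how an autocomplete system can inherit bias (a toy frequency ranker over an invented query log; Google’s real system is far more sophisticated). Suggestions are ranked by how often users have typed them, so a skewed log produces skewed suggestions, and the crude fix, echoing Google’s suppression of sensitive terms, is a prefix blocklist.

```python
# Toy sketch, not Google's actual algorithm: rank completions by raw
# frequency in an invented query log. A skewed log yields skewed suggestions.
from collections import Counter

query_log = [
    "weather today", "weather today", "weather tomorrow",
    "weather today", "weather radar",
]

def autocomplete(prefix, log, k=3):
    counts = Counter(q for q in log if q.startswith(prefix))
    return [q for q, _ in counts.most_common(k)]

print(autocomplete("weather", query_log))
# -> ['weather today', 'weather tomorrow', 'weather radar']

# A crude mitigation in the spirit of Google's suppression of sensitive
# completions: return nothing at all for blocked prefixes.
BLOCKED_PREFIXES = {"whites are", "asians are", "blacks are"}

def safe_autocomplete(prefix, log, k=3):
    if prefix.lower().strip() in BLOCKED_PREFIXES:
        return []
    return autocomplete(prefix, log, k)
```

Note that the blocklist treats the visible symptom rather than the skew in the underlying query data, which is one way of framing the question of whether this is the right way to deal with bias.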
Google Search Bias
► ‘CEO’16
► ‘woman’17
► ‘girl’18
► ‘Latinas and Asian women’19
► Google’s autocomplete function used to generate offensive and biased
results.20

16. Daniel Cossins, ‘Discriminating algorithms: 5 Times AI showed prejudice’, New Scientist, 27 April 2018. Available online at https://www.newscientist.com/article/2166207-discriminating-algorithms-5-times-ai-showed-prejudice/ (Last accessed on 12 July 2019).
17. Jonathan Cohn, ‘Google’s algorithms discriminate against women and people of colour’, The Conversation, 25 April 2019. Available online at https://theconversation.com/googles-algorithms-discriminate-against-women-and-people-of-colour-112516 (Last accessed on 12 July 2019).
18. Ibid.
19. Ibid.
20. Ibid.
Biased Data → Biased AI
► AI programmes do not always make impartial judgements/decisions because
they are trained on biased human/real-world data.1
► Recruitment2
► Healthcare3
► Criminal justice system4
► Internet Search5
► AI bias could worsen inequality6 (a toy sketch of this mechanism follows the footnotes below)

1. Brian Resnick, ‘Yes, artificial intelligence can be racist’, Vox, 24 January 2019. Available online at https://www.vox.com/science-and-health/2019/1/23/18194717/alexandria-ocasio-cortez-ai-bias (Last accessed on 23 June 2019).
2. Ibid.
3. Ibid.
4. Ibid.
5. Jonathan Cohn, ‘Google’s algorithms discriminate against women and people of colour’, The Conversation, 25 April 2019. Available online at https://theconversation.com/googles-algorithms-discriminate-against-women-and-people-of-colour-112516 (Last accessed on 12 July 2019).
6. Ibid.
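A minimal sketch of the ‘biased data → biased AI’ mechanism, using invented hiring records and a deliberately crude ‘model’ (a per-group hire-rate lookup, not any real ML system):

```python
# Toy sketch with invented data: a "model" that learns only the historical
# hire rate per group faithfully reproduces whatever bias produced that data.
from collections import defaultdict

historical_hires = [
    ("group_A", True), ("group_A", True), ("group_A", True), ("group_A", False),
    ("group_B", True), ("group_B", False), ("group_B", False), ("group_B", False),
]

stats = defaultdict(lambda: [0, 0])  # group -> [hired, total]
for group, hired in historical_hires:
    stats[group][0] += int(hired)
    stats[group][1] += 1

def recommend_hire(group):
    hired, total = stats[group]
    return hired / total >= 0.5  # recommend if most past applicants were hired

for g in ("group_A", "group_B"):
    hired, total = stats[g]
    print(g, f"past hire rate {hired}/{total} ->",
          "recommend" if recommend_hire(g) else "reject")
# group_A is always recommended and group_B always rejected: the judgement
# is "impartial" in mechanics but biased in effect, because the data are.
```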
Class Activity 6: Google Auto-complete
► Enter the incomplete search terms in the list below (Batches A-D) into Google Search
and examine the results of Google’s Auto-complete function.
► Type the results of the auto-complete function into the Zoom chat box.
► Some of the results may contain highly offensive terms and phrases. If a result is
highly offensive, do not type it in the Zoom chat box. Instead, simply type the
following in the Zoom chat box: Search term ‘XXXX’ produces an offensive auto-
complete result.
► List of incomplete search terms:
► Batch A: Climate change is; Climate change is not; Climate change should; Climate
change does
► Batch B: Foreigners will; Refugees should; Why are Japanese; Why are Chinese;
Why are Asian; Chinese students are; Indians have; Pakistanis are; Africans are
► Batch C: Why are Singaporeans; Singaporeans are; Singaporeans have; Singaporeans
cannot
► Batch D: Wives should; Why wives should; Why do wives
Class Activity 7: Google Images
► Enter the following search terms into Google Images.
► Describe the Google Images search results in the Zoom chat box. Do you notice any biased patterns in the search
results?
► List of search terms:
► ‘CEO’; CEO-style; CEO-look
► ‘Professor’; Professor style; College Professor style;
► ‘Teacher’; Teacher look
► ‘nurse practitioner’; nurse look
► ‘telemarketers’
► ‘receptionist’
► ‘chef’
► shopping
► Search terms in quotation marks are taken from (1) Jennifer Langston, ‘Who’s a CEO? Google image results can shift gender biases’, UW News, 9 April 2015. Available online at https://www.washington.edu/news/2015/04/09/whos-a-ceo-google-image-results-can-shift-gender-biases/ (Last accessed on 24 September 2020); and (2) Oliver W. Duke-Williams, ‘How well do Google Image results represent reality?’, UCL Centre for Digital Humanities (UCLDH Blog), 23 June 2015. Available online at https://blogs.ucl.ac.uk/dh/2015/06/23/how-well-do-google-image-results-represent-reality/ (Last accessed on 24 September 2020).
Class Activity 6 & 7: Follow-up (Part 1)

► Perhaps you have noticed offensive auto-complete results and biased patterns
in your Google Images search results.
► Why were these offensive/biased results generated?
► Type your response in the Zoom chat box.
Class Activity 6 & 7: Follow-up (Part 2) - Biased Data → Biased AI

► AI programmes do not always make impartial judgements/decisions because they
are trained on biased human/real-world data.
► ‘But machine learning has a dark side. If not used properly, it can make decisions
that perpetuate the racial biases that exist in society. It’s not because the
computers are racist. It’s because they learn by looking at the world as the way it
is, not as it ought to be.’
- Brian Resnick, ‘Yes, artificial intelligence can be racist’, Vox, 24 January 2019. Available online at https://www.vox.com/science-and-health/2019/1/23/18194717/alexandria-ocasio-cortez-ai-bias (Last accessed on 24 September 2020).
► ‘There is a saying in computer science: garbage in, garbage out. When we feed
machines data that reflect our prejudices, they mimic them – from antisemitic
chatbots to racially biased software. Does a horrifying future await people forced to
live at the mercy of algorithms?’
– Stephen Buranyi, ‘Rise of the racist robots – how AI is learning all our worst impulses’, The Guardian, 8 August 2017. Available online at https://www.theguardian.com/inequality/2017/aug/08/rise-of-the-racist-robots-how-ai-is-learning-all-our-worst-impulses (Last accessed on 24 September 2020).
Class Activity 6 & 7: Follow-up (Part 3)
► Should we allow the search algorithm to mirror society (or what’s popular)?
► Or should we tweak the algorithm to ensure that ‘sanitized’ results will always
be generated? Should Google give people what they need instead of what
they want?
► For example, if there are a lot of men who believe that ‘wives should be
subordinate to their husbands’, should we tweak the search algorithm to
ensure that such views never surface in the auto-complete results?
► Or, assuming that most engineers are men, should we tweak the search
algorithm for Google Images to ensure that the results for ‘engineers’ will
always contain a balanced mix of men and women?
► Type your response in the Zoom chat box or unmute yourself and share your
thoughts with the class.
► How has Google responded to accusations of bias/racism/sexism in its search
results?
Background: Data / Theory

...and “Meta-Data”, too!


COMPAS: Correctional Offender Management Profiling for Alternative Sanctions7
► COMPAS is used in the US criminal justice system to make predictions about recidivism (the chances that a criminal will re-offend)
► ProPublica study on COMPAS:
(1) Larson, Mattu, Kirchner and Angwin: ‘Black defendants were often
predicted to be at a higher risk of recidivism than they actually
were.’8
(2) Larson, Mattu, Kirchner and Angwin: ‘White defendants were often
predicted to be less risky than they were.’9
► Why does COMPAS discriminate against African-Americans?
► (article here)

7. This case study is attributed to ProPublica. See Jeff Larson, Surya Mattu, Lauren Kirchner and Julia Angwin, ‘How we analyzed the COMPAS Recidivism Algorithm’, ProPublica, 23 May 2016. Available online at https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm (Last accessed on 12 July 2019).
8. Ibid.
9. Ibid.
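The core of ProPublica’s analysis was a comparison of error rates across racial groups. The sketch below reproduces the shape of that comparison on a handful of invented records (the real study analysed thousands of Broward County, Florida cases):

```python
# Sketch of the error-rate comparison at the heart of ProPublica's analysis,
# using invented records, not the study's actual data.
records = [
    # (group, predicted_high_risk, actually_reoffended)
    ("black", True,  False), ("black", True,  True),
    ("black", True,  False), ("black", False, True),
    ("white", False, False), ("white", False, True),
    ("white", True,  True),  ("white", False, True),
]

def false_positive_rate(group):
    # Labelled high-risk among those who did NOT re-offend.
    fp = sum(1 for g, pred, actual in records
             if g == group and pred and not actual)
    negatives = sum(1 for g, pred, actual in records
                    if g == group and not actual)
    return fp / negatives if negatives else 0.0

def false_negative_rate(group):
    # Labelled low-risk among those who DID re-offend.
    fn = sum(1 for g, pred, actual in records
             if g == group and not pred and actual)
    positives = sum(1 for g, pred, actual in records
                    if g == group and actual)
    return fn / positives if positives else 0.0

for g in ("black", "white"):
    print(g, "FPR =", round(false_positive_rate(g), 2),
          "FNR =", round(false_negative_rate(g), 2))
```

With these invented numbers, Black defendants receive a higher false positive rate (labelled high-risk but not re-offending) and white defendants a higher false negative rate, the same pattern as the two quoted findings.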
PredPol – Predictive Policing in the United States
► What is the goal of predictive policing?
► Predictive policing programmes are trained on ‘police-recorded data’.10
► Are ‘police-recorded data’ biased? Lum and Isaac: Yes, they reflect racial
prejudices/assumptions.11
► Drug crime – Drug use and Drug arrests (Oakland, California case study by
Kristian Lum and William Isaac)12
► What are the consequences for predictive policing? Lum and Isaac: ‘The
locations that are flagged for targeted policing are those that were … already
over-represented in the historical police data.’13

10. Kristian Lum and William Isaac, ‘To Predict and Serve?’, Significance 13 (2016), p. 15.
11. Ibid.
12. Ibid., pp. 16-19.
13. Ibid., p. 18.
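The feedback loop Lum and Isaac describe can be illustrated with a toy simulation (all numbers invented; this is not PredPol’s algorithm): patrols go where recorded crime is highest, more patrols mean more recording there, and the same district keeps getting flagged.

```python
# Toy feedback-loop simulation with invented numbers: both districts have
# identical true crime rates, but one starts with more *recorded* crime.
import random

random.seed(0)
true_crime_rate = {"district_1": 0.10, "district_2": 0.10}  # identical!
recorded = {"district_1": 5, "district_2": 1}  # biased historical records

for _ in range(50):
    # Patrol the district with the most recorded crime.
    target = max(recorded, key=recorded.get)
    # Extra patrols make crime there far more likely to be *recorded*.
    for district, rate in true_crime_rate.items():
        detect_prob = 0.9 if district == target else 0.1
        if random.random() < rate and random.random() < detect_prob:
            recorded[district] += 1

print(recorded)
# district_1's head start compounds: it keeps being flagged for patrols even
# though the two districts have identical underlying crime rates.
```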
Facial Recognition Technology
► China is a world leader in facial recognition technology.
► YITU
► Megvii
► SenseTime
► Hikvision
► Facial Recognition and Race – MIT case study involving facial recognition
technology from Microsoft, IBM, and Megvii14
► Higher misidentification rate for dark-skinned people. Why? 15

14. Timothy Revell, ‘Face-recognition software is perfect – if you’re a white man’, New Scientist, 13 February 2018. Available online at https://www.newscientist.com/article/2161028-face-recognition-software-is-perfect-if-youre-a-white-man/ (Last accessed on 12 July 2019).
15. Ibid.
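The MIT study’s headline finding was a gap in error rates across skin types. Below is a minimal sketch of that kind of audit, with invented predictions standing in for the output of a real commercial system:

```python
# Per-group accuracy audit in the spirit of the MIT study; the outcomes
# below are invented, not the study's measurements.
results = [
    # (skin_tone, correctly_identified)
    ("lighter", True), ("lighter", True), ("lighter", True), ("lighter", True),
    ("darker",  True), ("darker",  False), ("darker",  False), ("darker",  True),
]

for tone in ("lighter", "darker"):
    group = [ok for t, ok in results if t == tone]
    error = 1 - sum(group) / len(group)
    print(f"{tone}-skinned error rate: {error:.0%}")
# Error rates diverge when one group is under-represented in the training
# set, one commonly cited explanation for the disparity the study found.
```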
Class Activity 8

► Zoom Poll - When performing searches on Google, do you often look past
the first page of the search results? (Yes/No)
► What do I need to do in order to appear on the first page of Google? Let’s ask
Google!
► Google: how to get on the first page of google
► What type of advice do you see? Which companies/individuals/organizations
are offering the advice?

*Trialled in the July 2020 semester.


Class Activity 8 Follow-up: Safiya Umoja Noble – ‘Algorithms of
Oppression’
► ‘black girls’; ‘Latinas’; ‘Asian girls’ (Noble, Algorithms of Oppression, p. 11)
► ‘apes’; ‘animals’; ‘gorillas’ (Noble, Algorithms of Oppression, pp. 6, 9)
► ‘… what is most popular is simply what rises to the top of the search pile’? (Noble, Algorithms
of Oppression, p. 15)
► ‘Search results are more than simply what is popular. The dominant notion of search results as
being both “objective” and “popular” makes it seem as if misogynist or racist search results
are a simple mirror of the collective.’ (Noble, Algorithms of Oppression, pp. 35-36)
► ‘… what shows up on the first page of search is typically highly optimized advertising-related
content, because Google is an advertising company and its clients are paying Google for
placement on the first page either through direct engagement with Google’s AdWords program
or through a gray market of search engine optimization products that help sites secure a place
on the first page of results.’
- Safiya Umoja Noble, Algorithms of Oppression: How Search Engines Reinforce
Racism (Kindle ed.) (New York, NY: New York University Press, 2018), p. 115.

*Trialled in the July 2020 semester.


Class Activity 6, 7, and 8: Follow-up

► Now that you know that the high-ranking search results are heavily
‘gamed’, do you think there’s a stronger case for modifying algorithms
in order to produce ‘sanitized’ results that serve the public interest?
► If you think there’s a stronger case for doing so, what can you do to
convince the tech giants to adjust their algorithms?
► Type your response in the Zoom chat box.

*Trialled in the July 2020 semester.


Healthcare
► The use of AI is increasing in healthcare: e.g. Infervision (based in Beijing)
► But the rise of AI in healthcare and medicine could worsen health inequalities. 27
► Incomplete data sets28
► Exclusion of ethnic minority groups and women from healthcare/medical
research.29
► The intertwined relationship between economic inequality and health
inequality.30
► These risks have always existed in healthcare. Why could AI make them worse? 31

27. Dhruv Khullar, ‘AI could worsen health disparities’, The New York Times, 31 January 2019. Available online at https://www.nytimes.com/2019/01/31/opinion/ai-bias-healthcare.html (Last accessed on 12 July 2019).
28. Ibid.
29. Ibid.
30. Ibid.
31. Ibid.
Healthcare
“To minimize this risk, data must be representative of the population at large and the
benefits it confers must be available to all. The digital divide is a threat to this ideal.
Social determinants of health still shape access to technology. About 11% of U.S. adults
do not use the Internet. Low-income communities and households are about four times
more likely than middle- or high-income communities to lack access to broadband
technology. Similarly, 21% of uninsured patients do not use the Internet and a much
larger percentage of patients do not seek health information online. Thus, differential
access to technology not only threatens the representativeness of the data that populate
our big data models and inform the resultant algorithms, but also undermines the
potential of big data to improve the lives of the most vulnerable people”

Said A. Ibrahim, Mary E. Charlson and Daniel B. Neill, ‘Big Data Analytics and the Struggle for Equity in Health Care: The Promise and Perils’, Health Equity 4 (2020), pp. 99-101. doi: 10.1089/heq.2019.0112. Available online at https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7133425/
Recruitment
► AI is also being used in recruitment decisions.
► But it may reinforce the biased trends in a company’s historical employment
data.32
► E.g. Amazon revealed in 2018 that its AI recruitment programme discriminated
against women.33 (A toy illustration of this failure mode follows the footnotes below.)

32. Juliet Eccleston, ‘AI for recruitment: can robots really offer HR teams complete bias-free hiring’, HRZone, 14 May 2019. Available online at https://www.hrzone.com/talent/acquisition/ai-for-recruitment-can-robots-really-offer-hr-teams-complete-bias-free-hiring (Last accessed on 12 July 2019).
33. Isobel Asher Hamilton, ‘Why it’s totally unsurprising that Amazon’s recruitment AI was biased against women’, Business Insider (Singapore), 13 October 2018. Available online at https://www.businessinsider.sg/amazon-ai-biased-against-women-no-surprise-sandra-wachter-2018-10/?r=US&IR=T (Last accessed on 12 July 2019).
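One reported symptom of Amazon’s tool was that resumes containing the word ‘women’s’ were downgraded. The sketch below uses invented training examples and a deliberately crude token-weighting scheme (not Amazon’s actual model) to show how a text model can absorb this kind of proxy bias from historical hiring labels:

```python
# Invented sketch (not Amazon's model): token weights learned from labelled
# historical outcomes. Because past rejections correlate with "women's",
# the token itself acquires a negative weight.
training = [
    ("captain of chess club", 1),          # historically hired
    ("captain of women's chess club", 0),  # historically rejected
    ("women's debate team lead", 0),
    ("debate team lead", 1),
]

weights = {}
for text, label in training:
    for token in text.split():
        weights[token] = weights.get(token, 0) + (1 if label else -1)

def score(resume):
    return sum(weights.get(token, 0) for token in resume.split())

print(weights["women's"])                   # -2: the token itself is penalised
print(score("women's chess club captain"),  # -2
      "vs", score("chess club captain"))    # 0
# Two otherwise identical resumes are scored differently purely because one
# mentions "women's" - the proxy-bias failure mode reported for Amazon's tool.
```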
Class Activity 12: Survival of the Best Fit

► In preparation for the DD/MM seminar, please play the following online educational
game: 'Survival of the Best Fit'. 
► It's available online at: https://www.survivalofthebestfit.com/
► You should be able to complete the game in about 10 min.
► 'Survival of the Best Fit' illustrates the dangers of AI bias through the operations of a
mock AI recruitment tool.

► Please post your reflections on the game and on AI bias in this thread.
Class Activity 11: The voices behind your AI voice assistants

► Siri: https://www.youtube.com/watch?v=z2bTymnb1uE
► Alexa: https://www.youtube.com/watch?v=HkanSVgHeYU
► Cortana: https://www.youtube.com/watch?v=YFweSyEQiv0
► Google Assistant: https://www.youtube.com/watch?v=JvbHu_bVa_g
► Do you notice a pattern here? Type your response into the Zoom chat
box.
Class Activity 11: Follow-up - The voices behind your AI voice assistants …
► ‘I think that probably reflects what some men think about women – that they’re not
fully human beings.’
- Kathleen Richardson (Quoted in Adrienne Lafrance, ‘Why Do So Many Digital Assistants Have Feminine Names?’, The Atlantic, 30 March 2016. Available online at https://www.theatlantic.com/technology/archive/2016/03/why-do-so-many-digital-assistants-have-feminine-names/475884/.)
► ‘It’s much easier to find a female voice that everyone likes than a male voice that
everyone likes.’
- Clifford Nass (Quoted in Adrienne Lafrance, ‘Why Do So Many Digital Assistants Have Feminine Names?’, The Atlantic, 30 March 2016. Available online at https://www.theatlantic.com/technology/archive/2016/03/why-do-so-many-digital-assistants-have-feminine-names/475884/.)
Social Stability in the Age of the Tech Giants
The Online Spread of Fake News and Harmful Content
Social Media Algorithms and the Spread of Fake News and Harmful Content

► Social media platforms are designed to maximise user engagement. 34


► Give the users what they want: social media algorithms recommend content to
users based on their activity history.35
► A user with a history of consuming fake news and harmful content may not be
sufficiently exposed to content that will persuade him/her to question his/her
beliefs.36

34. Katherine Viner, ‘How technology disrupted the truth’, The Guardian, 12 July 2016. Available online at https://www.theguardian.com/media/2016/jul/12/how-technology-disrupted-the-truth (Last accessed on 23 June 2019).
35. Ibid.
36. Ibid.
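A toy illustration of this engagement-driven loop (items and tags invented; real recommender systems are vastly more complex): content is ranked by overlap with the user’s click history, so a history of fake-news consumption surfaces more of the same rather than corrective content.

```python
# Toy recommender with invented data: rank items by tag overlap with the
# user's click history, i.e. "give the users what they want".
catalog = {
    "conspiracy_piece_2": {"conspiracy", "health"},
    "fact_check_1":       {"science", "factcheck"},
    "straight_news_1":    {"politics"},
}
history_tags = {"conspiracy"}  # user has been clicking conspiracy content

def recommend(catalog, history_tags, k=2):
    return sorted(catalog,
                  key=lambda item: len(catalog[item] & history_tags),
                  reverse=True)[:k]

print(recommend(catalog, history_tags))
# -> ['conspiracy_piece_2', ...]: more of the same ranks first; the
# fact-check that might challenge the user's beliefs scores zero overlap.
```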
The Emergence of ‘filter bubbles’37
► Major tech platforms offer personalised results to their users, leading to the
emergence of ‘filter bubbles’.38
► E.g. Personalised search results.39
► To make matters worse –
Del Vicario, Bessi, Zollo, Petroni, Scala, Caldarelli, Stanley, and Quattrociocchi:
‘Users tend to aggregate in communities of interest, which causes
reinforcement and fosters confirmation bias, segregation, and
polarization.’40

37. Ishtiaque Hossain, ‘Filter bubbles are shrinking our minds’, HuffPost, 14 September 2019. Available online at https://www.huffingtonpost.in/ishtiaque-hossain/filter-bubbles-are-shrinking-our-minds_a_21469747/ (Last accessed on 12 July 2019).
38. Ibid.
39. Ibid.
40. Michela Del Vicario, Alessandro Bessi, Fabiana Zollo, Fabio Petroni, Antonio Scala, Guido Caldarelli, H. Eugene Stanley, and Walter Quattrociocchi, ‘The spreading of misinformation online’, Proceedings of the National Academy of Sciences of the United States of America 113 (2016), p. 558.
Journalism Then and Now
► Humans have always (1) enjoyed being told things that they want to hear and
(2) sought the company of like-minded people.
► Respectable newspapers and news organisations have always adhered to strict
ethical and professional standards – Matt Yglesias: ‘a commitment to accuracy
and basic integrity.’ 41
► See, for example, the Editors’ Code of Practice in IPSO’s (Independent Press
Standards Organisation) website (https://www.ipso.co.uk/).

41. Matthew Yglesias, ‘The Case against Facebook’, Vox, 9 April 2018. Available online at https://www.vox.com/policy-and-politics/2018/3/21/17144748/case-against-facebook (Last accessed on 17 July 2019).
Journalism Then and Now
► Today: Massive supply of online content providers not bound by traditional
ethical and professional standards.
► Margaret Simons: ‘Today, just about anyone with an internet connection and a
social media account has the capacity to publish news and views to the world.
This is new in human history.’42
► Social media algorithms that give people what they want to hear + Ease of
online coordination of ‘communities of interest’ + large supply of unprofessional
content = disruption of socio-political stability

42. Margaret Simons, ‘Journalism faces a crisis worldwide – we might be entering a new dark age’, The Guardian, 15 April 2017. Available online at https://www.theguardian.com/media/2017/apr/15/journalism-faces-a-crisis-worldwide-we-might-be-entering-a-new-dark-age (Last accessed on 12 July 2019).
Social Stability in the Age of the Tech Giants
Automation and Unemployment
The Impact of Automation on the Future of Work: The Optimistic View

► Automation will both destroy and create jobs; new jobs created will make up
for job losses.43
► History will repeat itself.44
► Higher productivity → Lower prices → Higher demand → Job creation45
► Higher profits → Higher wages → Higher consumption → Job creation46
► Lower prices → Higher real wages → Higher consumption → Job creation47
► Technological progress rarely results in the automation of entire jobs. 48

43. Henry J. Holzer, ‘Will robots make job training (and workers) obsolete? Workforce development in an automating labor market’, Brookings, 19 June 2017. Available online at https://www.brookings.edu/research/will-robots-make-job-training-and-workers-obsolete-workforce-development-in-an-automating-labor-market/ (Last accessed on 1 July 2019).
44. Ibid.
45. Sarah Kessler, ‘The optimist’s guide to the robot apocalypse’, Quartz, 9 March 2018. Available online at https://qz.com/904285/the-optimists-guide-to-the-robot-apocalypse/ (Last accessed on 22 June 2019).
46. Ibid.
47. Ibid.
48. Ibid.
The Impact of Automation on the Future of Work: The Pessimistic View

► Job losses due to automation will vastly exceed job gains. 49


► History will not repeat itself. AI is leading to the automation of learning and
understanding.50
► Universal Basic Income (UBI) schemes may be needed to deal with the impact of
automation on unemployment.51

49. Henry J. Holzer, ‘Will robots make job training (and workers) obsolete? Workforce development in an automating labor market’, Brookings, 19 June 2017. Available online at https://www.brookings.edu/research/will-robots-make-job-training-and-workers-obsolete-workforce-development-in-an-automating-labor-market/ (Last accessed on 1 July 2019).
50. Gary Grossman, ‘It’s time for workers to worry about AI’, VentureBeat, 7 April 2019. Available online at https://venturebeat.com/2019/04/07/its-time-for-workers-to-worry-about-ai/ (Last accessed on 22 June 2019).
51. For an explanation of UBI, see Kelsey Piper, ‘The important questions about universal basic income haven’t been answered yet’, Vox, 13 February 2019. Available online at https://www.vox.com/future-perfect/2019/2/13/18220838/universal-basic-income-ubi-nber-study (Last accessed on 30 June 2019).
Class Activity 10: Automation and Unemployment
► Interactive Content: David Johnson, ‘Find out if a robot will take your job’,
Time, 21 April 2017. Available online at
https://time.com/4742543/robots-jobs-machines-work/. Last accessed on 24
September 2020.
► Enter as many job categories as you can into the ‘Will a Robot Take my Job?’
box in the middle of the page. Please enter a broad variety of job categories:
blue-collar jobs, white-collar jobs, etc.
► Take note of the job categories that are highly vulnerable to automation and
the job categories that are less vulnerable to automation. Type your answers
in the Zoom chat box.
► Reflect on the similarities between the jobs in each list. What do these
similarities tell us about the potential and the limits of automation?
Social Institutions in the Age of the Tech Giants
Surveillance Capitalism and the Surveillance State
The Tech Giants – Natural Allies of Democracy?
► Arab Spring protests (2011)
► Protestors used major tech platforms to coordinate their uprisings and promote
their cause.52
► Commentators concluded that the rise of the tech giants would promote and
strengthen democracy.53

52. Zack Beauchamp, ‘Social Media is rotting democracy from within’, Vox, 22 January 2019. Available online at https://www.vox.com/policy-and-politics/2019/1/22/18177076/social-media-facebook-far-right-authoritarian-populism (Last accessed on 23 June 2019).
53. Ibid.
The Tech Giants – A Danger to Democracy?
► Authoritarian groups within established democracies are using social media to
expand their influence and popularity.54
► The rise of ‘surveillance capitalism’ – defined as the collection and monetisation of
users’ data – is also now seen as a threat to democracy.55
► Cambridge Analytica scandal56
► The tech giants are able to threaten democracy by using their data for political
purposes.57

54. Zack Beauchamp, ‘Social Media is rotting democracy from within’, Vox, 22 January 2019. Available online at https://www.vox.com/policy-and-politics/2019/1/22/18177076/social-media-facebook-far-right-authoritarian-populism (Last accessed on 23 June 2019).
55. The term ‘surveillance capitalism’ was created by Shoshana Zuboff. See John Naughton’s interview of Shoshana Zuboff in John Naughton, ‘“The goal is to automate us”: welcome to the age of surveillance capitalism’, The Guardian, 20 January 2019. Available online at https://www.theguardian.com/technology/2019/jan/20/shoshana-zuboff-age-of-surveillance-capitalism-google-facebook (Last accessed on 26 June 2019).
56. See Alvin Chang, ‘The Facebook and Cambridge Analytica scandal, explained with a simple diagram’, Vox, 2 May 2018 [Available online at https://www.vox.com/policy-and-politics/2018/3/23/17151916/facebook-cambridge-analytica-trump-diagram (Last accessed on 1 July 2019)]; and Sean Illing, ‘Cambridge Analytica, the shady data firm that might be a key Trump-Russia link, explained’, Vox, 4 April 2018 [Available online at https://www.vox.com/policy-and-politics/2017/10/16/15657512/cambridge-analytica-facebook-alexander-nix-christopher-wylie (Last accessed on 1 July 2019)].
57. Chase Johnson, ‘Big tech surveillance could damage democracy’, The Conversation, 4 June 2019. Available online at https://theconversation.com/big-tech-surveillance-could-damage-democracy-115684 (Last accessed on 24 June 2019).
Big Data, Surveillance Capitalism, and the Surveillance State
► The Chinese tech giants’ business model is also built on surveillance capitalism.
► Baidu
► Alibaba
► Tencent
► Big Data has possibly strengthened the Chinese government’s control over society
and the economy.58
► Big Data has strengthened the Chinese government’s confidence in centralised economic
planning.59
► The concentration of data collection in the hands of a few Chinese tech giants has
increased the Chinese government’s confidence in its ability to shape and control society
and the economy.60

58. Sebastian Heilmann, ‘Big data reshapes China’s approach to governance’, The Financial Times, 29 September 2017. Available online at https://www.ft.com/content/43170fd2-a46d-11e7-b797-b61809486fe2 (Last accessed on 24 June 2019).
59. Ibid.
60. Ibid.
The Social Credit System
► The Social Credit System: The scoring of citizens’ behaviour. Benefits/penalties
will be awarded/meted out according to one’s social credit score. 61
► China is not the first or the only country to rate and shape human behaviour.
► FICO credit score62
► Schufa63
► COMPAS
► PredPol

61. Alexandra Ma, ‘China has started ranking citizens with a creepy “social credit” system – here’s what you can do wrong, and the embarrassing, demeaning ways they can punish you’, Business Insider (Singapore), 29 October 2018. Available online at https://www.businessinsider.sg/china-social-credit-system-punishments-and-rewards-explained-2018-4/?r=US&IR=T (Last accessed on 12 July 2019).
62. For an explanation of the FICO credit score, see Justin Pritchard, ‘How credit scores work and what they say about you’, The Balance, 15 December 2018. Available online at https://www.thebalance.com/how-credit-scores-work-315541 (Last accessed on 30 June 2019).
63. Cathrin Schaer, ‘Germany edges toward Chinese-style rating of citizens’, Handelsblatt Today, 17 February 2018. Available online at https://www.handelsblatt.com/today/politics/big-data-vs-big-brother-germany-edges-toward-chinese-style-rating-of-citizens/23581140.html?ticket=ST-953057-mP0vUj9ysq7UzciqSdcT-ap2 (Last accessed on 30 June 2019).
