
Ethics of Big Data

Puneet Arora
Management Development Institute Gurgaon

1 / 25
Change in the Digital Age

Buckminster Fuller asserted in the early 1980s that, per the Knowledge
Doubling Curve, human knowledge doubled every century prior to 1900,
speeding up to every 25 years by the mid-1940s.
By 1982, the rate of doubling, attributable to technological advances,
was estimated at every 13 months.
Experts estimated that by 2020, human knowledge would double
every 12 hours.
This rapid pace of change brings opportunities and advantages,
but also anxiety and concerns that AI, made possible through big
data, will make humans obsolete.
Ethical guidelines exist but often lag behind and become
obsolete before implementation.
Ref: Jurkiewicz, Carole L. "Big Data, Big Concerns: Ethics in the Digital
Age." Public Integrity 20, no. sup1 (2018): S46-S59.

2 / 25
Impact of Change on Humans

Change disrupts homeostasis, induces cognitive dissonance, and


amplifies social influence and emotional reactions.
Technology introduces efficiencies but demands constant learning
and adjustment.
Technology also facilitates bullying and anonymity, making targeted
aggression more acceptable.
Big data drives market dominance but also fuels hacking, computer
viruses, and advocacy of extreme causes.
Such concerns leave individuals feeling helpless, gullible, and
manipulated, affecting psychological well-being.

3 / 25
What is Big Data

Big Data: A term coined by NASA researchers in 1997.


Evolving definition: Massive volumes of data collected through
technology, requiring innovative processing.
Led to systems that improve the quality of life
Led to proprietary and government uses that many view as
secretive and manipulative
Led to ethical concerns that have yet to be fully articulated
or addressed

4 / 25
Concerns with Big Data

Massive and Rapid Data with Diverse Applications


Whether it is a website, social media forum, text, application,
software, or anything else accessed through technological devices,
individuals, by virtue of use, implicitly agree to allow
those companies to bundle and use or sell that data without
hindrance.
Everything one says or does using technology effectively creates
a currency from which others profit.
Coders, about whom little is known, are subcontracted by
technological entities to create the algorithms.
What we see, have access to, or are aware of is limited by
these algorithms, and their ethical impacts are unknown.
Black box: we do not know why an AI tool makes the predictions
it does (a concern amplified by quantum computing).

5 / 25
Examples of Big Data gone wrong

Self-driving cars deciding that sacrificing the occupant by collision
is preferable to impacting another vehicle
Crashing into trucks whose color the system cannot
differentiate from the skyline
The usurping of votes in elections
Governments spying on citizens
Profiling by criminal-justice entities
Creating technological dependencies in order to exercise influence

6 / 25
Data Collected Under the Guise of Social Betterment

Data Collection for Profit


Ethical Concern: Framing as Social Betterment
Eg: iPhone X Facial Recognition
Eg: Genomic Database
Eg: Turnitin

7 / 25
Facilitating Unequal Wealth Distribution

Concerns About AI and Dehumanization


Job Replacement by AI, with Disproportionate Effects on Unskilled
Labor
Economic Impacts on Different Social Groups (e.g., a discriminatory
résumé-screening AI that engineers tried but failed to fix)
To determine parole, the U.S. criminal justice system has used
prediction algorithms that replicated historical biases against Black
defendants.
Expected Increase in Social Strife and Concentration of Power

8 / 25
Increasing Social Hostilities for Political Gain

Use of Fake News Bots and "Alternative Fact" Websites


Exposure to Falsehoods in the 2016 U.S. Presidential Election
Algorithmic Feeding of Misinformation
Threat to Democracy with Polarizing Leaders and Initiatives
Eg: Russian Influence in the 2016 U.S. Election; Voter Profiling by
Cambridge Analytica

9 / 25
Increasing Social Hostilities for Political Gain

Targeting Demographics
Encouragement of Discrimination and Hate Speech
Manipulation of Public Sentiment

10 / 25
Concealed Data Collection for Profit

Ubiquitous Camera Systems


One's sexual orientation can reportedly be determined with 91% accuracy
by analyzing facial features, information that has been used to
discriminate against, disadvantage, and violate the privacy of individuals.
These visual databases are used by governments for criminal
apprehension, security-based identification, and monitoring of political
and social activism, as well as for lip-reading with instant language
translation, furthering concerns about invasions of privacy.

11 / 25
Concealed Data Collection for Profit

Apple's Siri, Amazon Echo, Microsoft's Cortana, and Google Home
promise to streamline human activities and improve efficiency, yet
they collect and record all sounds, even when not in use.

12 / 25
Facilitating Human Dependencies on Technology

Deepening Dependencies on Technology


Decreasing Complex, Logical Thinking
Reduced Crisis Handling Abilities
Dependence on Algorithmic Decision-Making
Shaping Opinions and Behaviors
Manipulation of Decision-Making
Dependence on Social Media
Rise in Depression and Suicide
Social and Psychological Dysfunction

13 / 25
Undermining Individual Integrity

The Bystander Effect: technology facilitates the filming of acts of
violence, deaths, discrimination, and life-threatening circumstances
without the filmer offering assistance.
The most negative, hate-filled, and discriminatory messages are the most
contagious on social media and the most influential in establishing
social and ethical norms.
Axon's Data Collection from Police Cameras
Selling Data to Various Organizations
Axon’s Data Collection from Police Cameras
Selling Data to Various Organizations

14 / 25
Summary

Big Data, Privacy Violations and Influence on Choices


Profit-Driven Capitalism
Failures of Self-Policing and Ethical Codes of Big Data Firms
Calls for Regulation and Data Protection
Urgency of Addressing Ethical Issues in Big Data
Enforcement Challenges

15 / 25
HBR article: Ethics in AI

Facebook grew to 100 million users in 4.5 years, before anyone
could understand the problems it could cause.
In 2015, its role in political manipulation was exposed in the
Cambridge Analytica scandal.
In Myanmar, Facebook amplified disinformation against the Rohingya,
which culminated in a genocide in 2016.
In 2021, the WSJ reported that Instagram (acquired by Facebook in
2012) had conducted internal research showing that the app was toxic
to the mental health of teenage girls.
Ref: HBR article, ”How to Avoid the Ethical Nightmares of Emerging
Technology”

16 / 25
Problem and New Technologies

Critics say that social media companies should have proactively
avoided these ethical catastrophes.
But now we are seeing another tech revolution: generative AI.
Within 2 months, ChatGPT had 100 million users.
Within months came many others: Bing, Bard, Gemini.
Other technologies, such as quantum computing for rapid big-data
crunching, blockchain, virtual reality, robotics, and gene editing,
have the potential to reshape the world for good or ill.

17 / 25
AI

The vast majority of AI is machine learning (ML).
If you give your software lots of interview-worthy résumés, it will
learn to label new résumés as interview-worthy or not.
The software is just recognizing and replicating patterns.
Those patterns may replicate historical or contemporary biases.
Amazon's résumé-screening AI was found to be biased (its coders
couldn't fix it).
AI used in the U.S. criminal justice system replicated biases against
Black defendants.
ChatGPT may manipulate people and has high environmental costs
due to the high level of computing power it uses.
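The pattern-replication point above can be made concrete with a toy sketch. The data, tokens, and scoring rule here are entirely hypothetical (this is not Amazon's actual system): a naive frequency-based screener trained on historically biased labels learns to penalize an irrelevant proxy token.

```python
# Hypothetical sketch: a frequency-based résumé screener that
# "learns" from historically biased hiring decisions.
from collections import defaultdict

# Invented training data: résumés containing "womens_club"
# were historically never marked interview-worthy.
history = [
    ({"python", "ml"}, 1),
    ({"python", "womens_club"}, 0),
    ({"java", "ml"}, 1),
    ({"java", "womens_club"}, 0),
]

def train(data):
    """Score each token by the fraction of past résumés
    containing it that were marked interview-worthy."""
    counts, hits = defaultdict(int), defaultdict(int)
    for tokens, label in data:
        for t in tokens:
            counts[t] += 1
            hits[t] += label
    return {t: hits[t] / counts[t] for t in counts}

def score(weights, tokens):
    known = [weights[t] for t in tokens if t in weights]
    return sum(known) / len(known) if known else 0.5

weights = train(history)
# Two identical résumés, except one carries the irrelevant proxy token:
print(score(weights, {"python", "ml"}))                 # 0.75
print(score(weights, {"python", "ml", "womens_club"}))  # 0.5
```

The model never "decides" to discriminate; it simply replicates the correlation baked into its training labels, which is exactly why the bias survives attempts to patch it after the fact.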

18 / 25
Quantum Computing

Can perform in minutes or seconds calculations that today's
supercomputers would take thousands of years to perform.
IBM and Google are investing heavily in it.
Our computers will increasingly integrate with this technology.
Major problem: an unexplainable black box.
Because quantum computers process trillions of data points,
determining whether their suggestions are ethical or reasonable
becomes increasingly difficult.
Questions: Under what conditions can we trust such black-box
models? What do we do if we find the system is broken or has issues?
Do we just accept these outputs, given our limited human
intelligence?

19 / 25
Blockchain

Suppose several of your friends have a magical notebook: whatever
one writes appears in everyone's notebook, stays written,
and can never be deleted.
You may write your assets in it; when you transfer your assets
to someone, that transaction also appears in everyone's notebook.
This is how blockchain works.
It follows a set of rules written in its code; changes to the rules
are decided by whoever runs the blockchain.
The quality of a blockchain depends on: What data belongs on the
blockchain and what doesn't? Who decides what goes on? What are the
criteria for inclusion? Who monitors it? What is the protocol if an
error is found in the code? How are voting rights and power
distributed?
Bad governance can lead to people losing their savings or having
their information revealed against their will.
Since there isn't just one type of blockchain, and the basic rules of
a blockchain are hard to change, early development decisions have
lasting consequences.
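The "magical notebook" property comes from hash-chaining. A minimal sketch (the record strings and function names are illustrative, not any real blockchain's API): each block stores the hash of its predecessor, so altering any past entry invalidates everything after it.

```python
# Minimal append-only hash chain: the core idea behind the
# "can never be deleted" property of a blockchain.
import hashlib
import json

def block_hash(block):
    # Hash the block's canonical JSON form.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append(chain, record):
    # Each new block commits to the hash of the previous one.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"record": record, "prev": prev})

def is_valid(chain):
    # Every block must point at the true hash of its predecessor.
    return all(
        chain[i]["prev"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
append(chain, "Alice owns 5 coins")
append(chain, "Alice sends 2 coins to Bob")
print(is_valid(chain))   # True

chain[0]["record"] = "Alice owns 500 coins"  # tampering with history
print(is_valid(chain))   # False: later blocks no longer match
```

Note what this sketch does not answer: who gets to call `append`, who decides the validation rules, and what happens when the code has a bug, which are exactly the governance questions listed above.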
20 / 25
Corporate approach while developing/adopting

Against the typical approach of starting with "let's see how it
goes" and then taking decades to undo the harm, a new approach
is required.
We need to develop, apply, and monitor these technologies in ways
that avoid the worst-case scenarios.
Companies that procure and adapt these technologies (for
instance, ChatGPT) need to figure out how to design and deploy them
in ways that keep their people and products safe.

21 / 25
Major problem with corporates

Replacing ethics with less precise terms (e.g., sustainability, ESG)
Terms like "value-driven," "mission-driven," and "purpose-driven"
have little to do with ethics.
"Customer-obsessed" and "innovative" aren't ethical values.
When the need arose for AI ethics, firms responded by saying,
"We too are for responsible AI."
We need to name our problems accurately if we are to address
them effectively.

22 / 25
Major problem with corporates

Rebranding "AI ethics" as "responsible AI":
First, it shifts focus to a broad set of issues, such as cybersecurity,
regulation, and technical risks, which leads experts to concentrate on
their own areas of expertise, everything except the ethics.
Second, leaders get stuck at abstract principles and values (e.g.,
fairness, autonomy).
Third, the objective of responsible AI gives companies a vague
goal with vague milestones (e.g., "We are for transparency and
equity").
But there is no articulation of what ethical failure looks like.
Yet when ethical nightmares happen, the results are specific:
discrimination by Amazon's recruitment AI, violations of people's
privacy, etc.
23 / 25
Solution: Business (Leader) and Not Tech Problems

Read the article for further discussion on the role of the leader.

24 / 25
Amazon Case

Does the Amazon Shopper Panel represent an opportunity or a threat
to customers?
Should customers line up for the waitlist or sit this one out?
What is the larger impact this could have on the market,
given Amazon’s huge presence?

25 / 25