
Ethics of AI Technology

1
Topics
• Artificial Intelligence (AI): promise and peril
• Surveillance Capitalism
• Ethical guidelines

2
Artificial Intelligence

3
Value of intelligence
• Surveillance
• Traditionally, government intelligence services have been tasked with, and given the sole
prerogative of, gathering intelligence on both internal and external ‘enemies of the State’
• The value of intelligence is appreciated by governments which have devoted huge budgets,
and developed both the technology and capability to achieve data gathering and surveillance
at scale
• Big Data to intelligence
• Big Data: the digital revolution (smartphones, video cameras, digital broadband, massive
storage and processing capability) has dramatically increased the volume and range of data
that can be surveilled
• AI has emerged as the key to turning Big Data into intelligence (actionable data for users)
• Governance & oversight
• Actionable data confers great power over the lives of individuals
• Countries have always placed intelligence gathering agencies and capabilities under strict
oversight.

4
Transforming cities with technology

https://www.youtube.com/embed/hRY-ZUlJXY0?start=470
https://www.youtube.com/embed/hRY-ZUlJXY0?start=723

5
Smart City initiatives
• Smart City
• concept to improve government services, especially at the municipal level
• Transacting with public agencies
• Increasingly going digital, improving convenience and reducing cost for both the
government and citizens
• Applications, approvals, payments, notifications increasingly accessible online
• Real time operations
• Data on environmental factors and operating conditions of physical infrastructure is
used to monitor and improve real time operation
• Parking spaces, travel time to destination, traffic signals, road pricing
• City planning becomes data driven
• Land-use, transportation demand and traffic planning models become more
accurate.

6
Value of AI to business & public
• Social media and e-commerce
• Increasingly, B2B, B2C transactions are going online
• Businesses and consumers are using social media to make contact and initiate
transactions with each other
• The data generated is not just transaction data but data about the
connections and interactions on social media
• AI and Big Data have the power to make sense of data
• provide greater data insights on customers & suppliers
• better threat protection, especially in cyber-security
• more efficient and effective automation
• more effective management of physical assets using Digital Twins

7
AI and Big Data
• AI thrives on data and information
• The more data, the better the outcome
• Increasingly seamless connection of pools of data collected for different
purposes
• Ubiquitous and continuous data collection
• Seamless integration of physical worlds with virtual worlds
• Convergence of factors
makes it possible to exploit the unique capabilities of AI, and makes AI even
more effective.

8
What AI can do
1. Scan, search, pick-out at scale based on features and patterns
2. Connect the dots
• between features in previously isolated pools of data thereby creating rich profiles
3. Amplify weak signals in the data to detect trends
4. Create deep fakes
• mimic human capabilities, traits and foibles to such an extent that it is difficult to
distinguish works created by a bot or a human being
5. Predict based on individualized models
6. Classify automatically, thereby discriminating against certain groups
• Often, there is no means or recourse for addressing mistakes.
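Point 6 above can be made concrete with a toy sketch. All data and names here are hypothetical: a classifier that never sees a protected attribute can still discriminate through a proxy feature (here, a postcode correlated with group membership), with no recourse for the individuals it rejects.

```python
# Hedged sketch (hypothetical data): an automatic classifier that never sees
# the protected attribute can still discriminate through a proxy feature.

history = [
    # (postcode, repaid_loan) -- postcode correlates with a protected group
    ("north", True), ("north", True), ("north", False),
    ("south", False), ("south", False), ("south", True),
]

def train(data):
    """Approve a postcode if most past borrowers there repaid."""
    rates = {}
    for postcode, repaid in data:
        ok, n = rates.get(postcode, (0, 0))
        rates[postcode] = (ok + repaid, n + 1)
    return {p: ok / n >= 0.5 for p, (ok, n) in rates.items()}

model = train(history)
print(model)  # {'north': True, 'south': False} -- everyone in 'south' is rejected wholesale
```

The model is "accurate" on its skewed history, which is exactly why the discrimination is hard to contest.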

9
Terminator 2/ Sylvester Stallone

https://www.youtube.com/embed/AQvCmQFScMA?start=12
10
Deepfake Videos Are Getting Terrifyingly Real

https://www.youtube.com/watch?v=T76bK2t2r8g

11
Deep fakes
• Artificial creations/ facsimiles that are indistinguishable from the real thing
• Derived by information manipulation
• Meant to obfuscate, and deceive
• Different forms of information manipulation
• Direct altering of actual text, images and video to present something false
• Selected cropping, editing or filtering of information to present a biased view
• Creating synthesized composites using machine learning so that it is impossible to
distinguish fake from real.
• Deep fakes are a growing concern
• More commonly deployed in social media
• Becoming easier to create, even by the layman with publicly available tools
• Becoming harder to detect without sophisticated tools and techniques.

12
What AI can’t do (yet)
• Not very good at planning, especially for general and novel problems;
• Doesn’t have much common sense understanding of causation in the real world
• Not good at answering or posing "what-if" questions
• Cannot generalize what it has learnt
• Has difficulty transferring knowledge learnt on one task to another task even if both tasks share
common features
• Cannot recognize its own biases in decision making
• Cannot be creative, or create things outside the given rules and parameters
• Cannot feel the emotion called ‘empathy’, or demonstrate empathy in its speech and behavior
• Does not have free will and cannot exercise free choice – always doing the will of its creator
• Cannot use moral judgement, and reason ethically
• Cannot explain its decisions to humans
• Cannot reason about its own existence
• who are we? what makes us different? what is our purpose in life? how do we know we are alive?

13
Addressing the concerns about deep fakes
• Using technology alone to combat deep fakes
• could make things better for some and worse for others
• A focus on technological features alone
• results in an 'arms race' between the good and bad guys

14
…addressing concerns
• Laws can make it illegal to use such technology for specific purposes
• Laws need to be carefully scoped otherwise they will stifle progress and be
regarded as oppressive
• Laws are always one step behind technology and human ingenuity
• Laws to make public certain information
• Deep fakes must be identified as such
• Firms must register their use of deep fakes for specific purposes

15
… addressing concerns
• Redress
• Laws can make it easier for victims of deep fakes to seek redress
• Public education about deep fakes and increase awareness
• Duty of individuals
• Develop a sense of healthy skepticism – if it’s too [ X ] to be true, it probably
isn’t
• Verify information against trusted sources before acting
• Do not spread fake news
• Be less judgmental
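One technical angle on "verify information against trusted sources" is checksum comparison; this is a toy sketch (the data and hash workflow are illustrative, not a complete provenance system):

```python
# Toy sketch: compare a file's cryptographic hash with the
# publisher's known-good hash to detect tampering.

import hashlib

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

original = b"official press release"
published_hash = fingerprint(original)       # provided by the trusted source

tampered = b"official press release (edited)"
print(fingerprint(original) == published_hash)   # True  -- matches
print(fingerprint(tampered) == published_hash)   # False -- content was altered
```

Any one-byte change flips the hash, so a mismatch is a reliable signal that the copy differs from what the source published.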

16
AI and cybersecurity
• Uses of AI in cybersecurity
• Can be used to detect malicious activity
• Enhance tracing and understanding threats even as they develop
• Provide forensic capability to detect deep fakes
• Predict occurrence of threats based on models.
• Abuses include
• Selecting victims for attacks
• Generating deep fakes and phishing at scale
• Adversarial AI learning techniques can be used to defeat AI learning models
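A minimal sketch of how AI-style detection of malicious activity can work: learn a baseline of normal behavior, then flag large deviations. The data and threshold here are synthetic assumptions for illustration.

```python
# Hedged sketch: anomaly detection via a learned baseline (synthetic data).

from statistics import mean, stdev

baseline = [9.0, 9.2, 8.8, 9.1, 9.3, 8.9]   # a user's usual login hours
mu, sigma = mean(baseline), stdev(baseline)

def is_anomalous(hour, k=3.0):
    """Simple z-score rule: logins far from the user's norm are suspicious."""
    return abs(hour - mu) > k * sigma

print(is_anomalous(9.0))   # False -- normal working hours
print(is_anomalous(3.0))   # True  -- 3 a.m. login flagged for review
```

Real systems use far richer features, but the cat-and-mouse dynamic is visible even here: an attacker who learns the baseline can time attacks to stay inside it.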

17
A game of cat and mouse
• One of the challenges of using AI to
improve cybersecurity is that it’s a two-way
street, a game of cat and mouse.
• If security researchers use AI to catch
hackers or prevent cyberattacks, the
attackers can also use AI to hide or come
up with more effective automated attacks.
https://www.youtube.com/embed/U1nFtnV-U04?start=30&end=106

18
Promise or peril
• AI combines the aspects of human intelligence with the brute force
and tenacity of machines
• Used wisely, AI can fulfill its promises
• Misused, AI can be a detriment and peril to individuals, organizations,
and society overall.

19
Surveillance Capitalism

20
The age of Surveillance Capitalism

21
Shoshana Zuboff
The age of surveillance capitalism
• Our personal and private experiences have been hijacked by Silicon
Valley
• and used as the raw material for extremely profitable digital products.
• Surveillance
• operation is undetectable, indecipherable, cloaked in rhetoric
• aims to misdirect, obfuscate, and bamboozle all of us all the time.
• Surveillance capitalism is not limited to
• when we are online, or limited to the pushing of ads.

22
Trade-offs
• Pro
• Services which use personal data are useful
• Services are free
• I have nothing to hide, and therefore nothing to fear
• Con
• Information collected is not limited to what has been given by consent
• We have a false sense of control over actions of firm after consent is given
• Our data gets passed on to third parties beyond the first
• Companies collect more data than what is strictly necessary to improve
service.

23
Justification to collect data
• Companies give the reason of 'service improvement' to collect data
• One has the choice of not using the services
• If you don't give the data, we can't provide the service, which sometimes
requires involving third parties
• At least a thousand other companies can potentially use or collect data once
agreement is given to the first party
• We have no responsibility for how third parties use your data.

24
Cloak of secrecy
‘The operations that gather and analyse data to make models and
predictions have been designed to be undetectable, indecipherable,
cloaked in rhetoric that aims to misdirect, obfuscate, and bamboozle all of
us all the time.’ – S. Zuboff
• Why the deception?
• Subjects of surveillance might object if they knew about the pervasiveness
and comprehensiveness of the operation
• the quality of the behavioral data is compromised if subjects know they are
being watched.

25
Residual data
• The least valuable information
• Surprisingly, the information that we consented to give is the least valuable
• Other data is collected
• current and past physical locations
• employment, preferences, social networks, affiliations, hobbies
• thoughts, fears about health & finances
• vices
• how fast you drive or walk
• how often you visit a site (physical or otherwise)
• your daily routine
• We leave ‘digital traces’ behind when we visit sites in search of information
• Originally treated as useless or waste, it was eventually realized that this data could be
used to harvest a wealth of contextual detail about individuals
• Businesses would pay a lot of money to know more about us.

26
Personalized predictive models
• Deep learning
• Residual data is analysed and used to train 'models' of patterns of human
behavior
• These models differ from previous models
• They are richly detailed in features
• Capture patterns at the atomic (individual) level
• Previous models could only capture aggregated descriptive statistics
• incorporate not only personal characteristics and location but also time.
• The models are used to predict what people will do.
https://www.youtube.com/embed/hIXhnWUmMvw?start=237&end=494
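The contrast between aggregate statistics and atomic, per-individual models can be sketched with a toy first-order model trained on one user's synthetic clickstream (all trace data here is hypothetical):

```python
# Minimal sketch (synthetic residual data): a per-individual predictive model,
# rather than aggregate descriptive statistics.

from collections import Counter, defaultdict

def fit_individual_model(trace):
    """First-order model: count which site tends to follow which for this user."""
    follows = defaultdict(Counter)
    for here, nxt in zip(trace, trace[1:]):
        follows[here][nxt] += 1
    return follows

def predict_next(model, here):
    return model[here].most_common(1)[0][0]

# One user's clickstream, left behind as 'digital traces'.
trace = ["news", "maps", "shop", "news", "maps", "news", "maps", "shop"]
model = fit_individual_model(trace)
print(predict_next(model, "news"))  # "maps" -- this user's own habit, not a group average
```

The point of the sketch is the granularity: the prediction comes from this individual's pattern, which is what makes such models valuable to buyers and invasive to subjects.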

27
Behavioral surplus
• You can predict the behavior of groups of people
• Residual data becomes the raw material to feed the predictive model
factory
• Behavioral surplus
• the monetary value derived from selling the models even after taking into
account the cost of providing the ‘free’ services.

28
… Behavioral surplus
• Predictions are sold to the highest bidder
• Public has no idea who they are sold to, or how these predictions will be used
• Intention of organizations that buy these predictions
• Improve the effectiveness and efficiency of whatever it is that they do
• Lack of transparency about what the models predict
• or what data has been used for the models
• We only learn of these models when somebody blows the whistle, or
companies are forced to reveal their practices when under investigation.

29
Surveillance and contact tracing
• During the pandemic, tech companies stepped into a new field:
public health.
• Google and Apple have announced an initiative to support contact tracing through
Bluetooth technology, which tracks when people come in contact with another
person
• Should someone get infected, it becomes easier to trace who he has been in contact
with
• Less than half of people surveyed agreed to share their contact tracing
data, whereas we have been told that it takes at least 60% of us to agree
for it to be effective
• However, that attitude might change if people’s sense of security becomes
threatened.
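The decentralised idea behind such Bluetooth contact tracing can be sketched as a token exchange. This is toy code under simplifying assumptions, not the actual Google/Apple protocol: phones broadcast rotating random tokens, confirmed cases publish only their own tokens, and matching happens on-device.

```python
# Simplified sketch of decentralised contact tracing (not the real protocol).

import secrets

def new_token():
    return secrets.token_hex(8)  # a short-lived random identifier

# Each phone remembers the tokens it broadcast and the tokens it heard nearby.
alice_sent = [new_token() for _ in range(3)]
bob_heard = list(alice_sent[:2])          # Bob was near Alice for two intervals
bob_heard.append(new_token())             # plus a stranger's token

# Alice tests positive and uploads only her own random tokens.
published = set(alice_sent)

# Bob checks locally -- no location or identity ever leaves his phone.
exposures = [t for t in bob_heard if t in published]
print(len(exposures))  # 2 overlapping intervals -> possible exposure
```

Because the tokens are random and rotate, neither the server nor other users can map them back to identities or locations, which is the privacy argument for this design.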

30
Increasing surveillance opportunities
• Home assistants
• the presence of a hidden microphone that captures voice and sounds in your
home. How would you like to host a bugging device in your home?
• Smartphones
• became the ultimate tool of choice in the game of surveillance
• Google made the Android operating system of the smartphone almost free, as
well as lots of software apps.
• Car cameras
• How about turning your car into a surveillance vehicle?
https://www.youtube.com/embed/a-CbKbLPRI0?start=104

31
Gamify behavior
• Technology convergence on the smartphone
• Augmented reality, location tracking, internet connectivity everywhere, and
ubiquitous services have converged on our smartphones and raised the game for
surveillance capitalism
• Gamify - to turn an activity or task into a game
• Games on smartphones offer entertainment, convenience and instant membership
of a social tribe
• Games reward/ punish behavior
• Developers can control the system of rewards to serve whatever purpose they desire
• From clickthrough to footfall
• If before the goal was to attract web page visits (eyeballs) and convert them into
clicks on links (clickthrough), games like Pokémon Go, which blend the virtual
and real worlds, now promise to deliver customers to the doorstep of businesses
(footfall).
https://www.youtube.com/embed/hIXhnWUmMvw?start=700&end=1209

32
Persuasion through subliminal cues
• Facebook discovered that they
can manipulate subliminal cues
online to change real world
behavior or emotion, and they
can do so without the subject
being aware of it.
• Human populations are made
the subject of social experiments
that they did not agree to.
https://www.youtube.com/embed/hIXhnWUmMvw?start=700&end=840

33
Overton’s window
Imagine, if you will, a yardstick standing on end. On either end are the
extreme policy actions for any political issue. Between the ends lie all
gradations of policy from one extreme to the other. The yardstick
represents the full political spectrum for a particular issue.
The essence of the Overton window is that only a portion of this policy
spectrum is within the realm of the politically possible at any time.
Regardless of how vigorously a think tank or other group may
campaign, only policy initiatives within this window of the politically
possible will meet with success.

https://www.mackinac.org/7504

34
… Overton’s window
A politician’s success or failure stems from how well they understand
and amplify the ideas and ideals held by those who elected them.
They will almost always constrain themselves to taking actions within
the "window" of ideas approved of by the electorate. Actions outside
of this window, while theoretically possible, and maybe more optimal
in terms of sound policy, are politically unsuccessful. Even if a few
legislators were willing to stick out their necks for an action outside the
window, most would not risk the disfavor of their constituents.

35
Shift the window
• Since commonly held ideas, attitudes and presumptions frame what
is politically possible and create the "window," a change in the
opinions held by politicians and the people in general will shift it.
Move the window of what is politically possible and those policies
previously impractical can become the next great popular and
legislative rage.
• The true influence of a think tank is shaping the political climate of
future legislative and legal debates by researching, educating,
involving and inspiring.

36
© Ben Birchall/ZUMA/Newscom
37
Cambridge Analytica scandal
• CA showed that it was possible to influence people’s behavior
through Big Data analytics, behavioral science, and hyper-targeted
ads
https://www.youtube.com/watch?v=n8Dd5aVXLCc
• Christopher Wylie, who worked for data firm CA, revealed how
personal information was taken without authorisation in early 2014
to build a system that could profile individual US voters in order to
target them with personalised political advertisements
https://www.youtube.com/watch?v=Q91nvbJSmS4
• Why Facebook's Data Scandal is a Big Deal
https://youtube.com/embed/a3W1I2_B6GA?start=0&end=396

38
Ethical guidelines

39
Markkula Center for Applied Ethics
• Technical safety: Does the technology work as intended? If it fails, what will
be the consequences?
• Transparency and privacy: Do we understand how the technology works
and what it is doing? The more someone or something is powerful, the
more transparent it should be.
• Malicious use & capacity for evil: If a technology has dual use, are we
confident in controlling its use for evil? If not, should we not ban it?
• Beneficial use & capacity for good.
• Bias in data, policy and models: Biases have concrete negative
consequences for the people they discriminate against.
• Unemployment: There is concern that AI will exacerbate the loss of jobs
caused by automation, especially IT automation.
https://www.scu.edu/ethics/all-about-ethics/artificial-intelligence-and-ethics/
40
… Applied ethics
• Growing socio-economic inequality: Those who control AI will also likely rake in
much of the money that would otherwise have gone into the wages of the
now-unemployed, and therefore economic inequality will increase. It will hollow out
the middle class, which has traditionally formed the core of the economy.
• Moral deskilling and debility: If we surrender our decision-making capacities to
machines, we will become less experienced at making decisions. If AI starts to
make ethical and political decisions for us, we will become worse at ethics and
politics.
• AI Personhood / “Robot Rights”: eventually artificial humans will become self-
conscious, attain their own volition. Do they deserve recognition as persons like
ourselves?
• Effects on the Human Spirit: Will AI affect how humans perceive themselves,
relate to each other, and live their lives? Will human beings become second-class
beings, if AIs become more intelligent than us? Or will some of us who are
deemed less intelligent be considered second-class?

41
SG Model AI Governance Framework
• Launched at the World Economic Forum in Davos in Jan 2019.
• Translates ethical principles into practical recommendations that
organisations could readily adopt to deploy AI responsibly
• Implementation and Self-Assessment Guide for Organisations
(“ISAGO”).
• The ISAGO complements the Model Framework by allowing organisations to
assess the alignment of their AI governance practices with the Model
Framework, while providing useful industry examples and practices

https://www.pdpc.gov.sg/-/media/files/pdpc/pdf-files/resource-for-organisation/ai/sgmodelaigovframework2.pdf
42
SG mAIGovF guiding principles
• Organisations using AI in decision-making should ensure that the
decision-making process is explainable, transparent and fair.
• AI solutions should be human-centric
• the protection of the interests of human beings, including their well-being
and safety, should be the primary considerations in the design, development
and deployment of AI.

43
Ethics Guidelines for trustworthy AI
• On 8 April 2019, the EC High-Level Expert Group on AI presented
Ethics Guidelines for Trustworthy Artificial Intelligence
• Trustworthy AI should be:
• lawful - respecting all applicable laws and regulations
• ethical - respecting ethical principles and values
• robust - both from a technical perspective while taking into account its social
environment

44
Key requirements of trustworthy AI
• Human agency and oversight: proper oversight mechanisms need to
be ensured, which can be achieved through human-in-the-loop,
human-on-the-loop, and human-in-command approaches
• Technical Robustness and safety: AI systems need to be resilient and
secure; They need to be safe, accurate, reliable and reproducible.
Goal is to minimize/ prevent unintentional harm
• Privacy and data governance: full respect for privacy and data
protection; adequate data governance mechanisms; ensure quality
and integrity of the data; legitimate access to data
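The human-in-the-loop oversight named in the first requirement is often implemented as a confidence gate. The thresholds and labels below are hypothetical, chosen only to illustrate the pattern:

```python
# Illustrative sketch: "human-in-the-loop" oversight as a confidence gate --
# the system decides routine cases, people decide the rest.

def decide(score, threshold=0.9):
    """Auto-decide only when the model is confident; otherwise escalate."""
    if score >= threshold:
        return "auto-approve"
    if score <= 1 - threshold:
        return "auto-reject"
    return "escalate-to-human"

print(decide(0.97))  # auto-approve
print(decide(0.05))  # auto-reject
print(decide(0.60))  # escalate-to-human
```

Tightening the threshold sends more cases to humans; the threshold itself thus becomes an oversight policy choice, not just a tuning parameter.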

45
… requirements of trustworthy AI
• Transparency: systems must be transparent; decisions & actions
should be traceable; AI systems and their decisions should be
explainable to the stakeholder concerned. Disclosure of system’s
capabilities and limitations. Humans need to be aware that they are
interacting with an AI system.
• Diversity, non-discrimination and fairness: Unfair bias must be
avoided; marginalization of vulnerable groups and exacerbation
of prejudice and discrimination must be prevented. Foster diversity. AI systems
should be accessible to all, regardless of any disability. Relevant
stakeholders should be involved throughout the system’s entire life
cycle.

46
… requirements of trustworthy AI
• Societal and environmental well-being: AI systems should benefit all
human beings, including future generations. Systems must be
sustainable and environmentally friendly. Should take into account
the environment, including other living beings, and their social and
societal impact should be carefully considered.
• Accountability: Mechanisms should ensure responsibility and accountability for
AI systems and their outcomes. Algorithms, data and design processes must be
auditable. Ensure adequate and accessible redress.

47
Rome Call for Ethical AI
• On Feb. 28 2020, representatives from the Pontifical Academy for Life,
Microsoft, IBM, FAO (UN Food and Agriculture Organization) and the Italian
government signed the "Rome Call for AI Ethics," a document developed to
support an ethical approach to artificial intelligence
• Three basic principles based on ethics, education and rights
• Ethics – All human beings are born free and equal in dignity and rights.
• Education – Transforming the world through the innovation of AI means undertaking
to build a future for and with younger generations.
• Rights – The development of AI in the service of humankind and the planet must be
reflected in regulations and principles that protect people – particularly the weak
and the underprivileged – and natural environments.
• Some people say that without implementation and enforcement, it won’t
make any difference.
https://www.cmswire.com/information-management/ibm-and-microsoft-sign-rome-call-for-ai-ethics-what-happens-next/
https://romecall.org/
48
Algor-ethics
Ethical development of AI algorithms
• Algor-ethics approach
• Transparency: In principle, AI systems must be explainable.
• Inclusion: The needs of all human beings must be taken into consideration.
• Responsibility: Those who design and deploy the use of AI must proceed
with responsibility and transparency.
• Impartiality: To avoid creating or acting according to a bias.
• Reliability: AI systems must be able to work reliably.
• Security and privacy: AI systems must work securely and respect the privacy
of users.

49
Parting words

50

Narrow AI vs AGI
• Narrow AI
• what we see today
• Focused on narrow and specific tasks
• Very useful even now, and getting better all the time!
• AGI: Artificial General Intelligence
• Next generation – AI with ‘common-sense’
• Ability to move freely in the world outside controlled conditions
and manipulate all manner of objects
• Ability to generalize and apply what it knows to novel situations
• Ability to self-learn and question its own knowledge-base
• Ability to develop empathy for ‘others’
• Ability to develop self-consciousness.
• Not any time soon!
• Technology is neutral(?)
• Even narrow AI will exacerbate the ills, divisions and injustice present today in the world
• Can we learn to use this technology for good rather than evil?
• What have we learnt from our experiences with other technology?
• Who should take the lead? The creators/ engineers? Users? Citizens?

Lessons from the AI mirror


https://youtube.com/embed/40UbpSoYN4k?start=578&end=1043
51
Edward Snowden
How we take back the internet

https://www.youtube.com/embed/yVwAodrjZMY?start=1565&end=1610

52
Lighter moments
• Robot fails
https://www.youtube.com/watch?v=k3GKGDng7k0
• Alec Baldwin / Donald Trump impression
https://www.youtube.com/watch?v=Omjy1LUNfIk
• SNL goes after Donald Trump & Mark Zuckerberg
https://www.youtube.com/watch?v=DwnAnzeH_rU

53
Media
• The rise of AI
https://www.youtube.com/watch?v=Dk7h22mRYHQ
• AI: pros and cons
https://www.youtube.com/watch?v=s0dMTAQM4cw
• Neon: the artificial human
https://www.youtube.com/watch?v=kWTwV8AUFtg
https://www.youtube.com/watch?v=iAY2LxaFtaM
• Lyrebird can clone any voice
https://www.youtube.com/watch?v=VnFC-s2nOtI
• First look at AI camera systems
https://www.youtube.com/watch?v=a-CbKbLPRI0
• How Deep Fakes Will Make Fake News Worse
https://youtu.be/ZLYRb6VECbo
• Shannon Vallor: Lessons from the AI Mirror
https://www.youtube.com/watch?v=40UbpSoYN4k
• The sad little fact
https://youtu.be/R1eZ3Wyn-ZI
• Life in a quantified society
https://www.opensocietyfoundations.org/explainers/life-quantified-society
54
… media
• Privacy by design: An Engineering Ethics Perspective
https://youtu.be/gC0USgAawpg
• Elon Musk on AI
https://www.youtube.com/watch?v=H15uuDMqDK0
• Shoshana Zuboff: Surveillance Capitalism
https://www.youtube.com/watch?v=hIXhnWUmMvw
• Shoshana Zuboff: The age of surveillance capitalism
https://www.youtube.com/watch?v=QL4bz3QXWEo
• Cambridge Analytica Whistleblower, Brittany Kaiser
https://www.youtube.com/watch?v=AgBHfmf2JhQ
• Cambridge Analytica: Chris Wylie reveals grab of 50 mio
Facebook profiles
https://www.youtube.com/watch?v=zb6-xz-geH4
55
… media (optional)
• Enemy of the State
https://www.youtube.com/watch?v=ZjogdKObxrI
• Minority Report - The future can be seen
https://www.youtube.com/watch?v=lG7DGMgfOb8&start=8
• Westworld
https://www.youtube.com/watch?v=9BqKiZhEFFw
• Blade Runner 2049: Joi, K's virtual girlfriend
https://youtu.be/VqB-gGP6G9I
• Blade runner: Tears in the rain
https://www.youtube.com/watch?v=HU7Ga7qTLDU
• I Robot - We all have a purpose
https://www.youtube.com/watch?v=HBX2N3XPpNs
• I Robot - Do robots have self-consciousness
https://www.youtube.com/watch?v=05bGPiyM4jg
• I Robot – The Right Question
https://www.youtube.com/watch?v=ZKxr0wyIic4

56
Articles
• Fleddermann, C.B., Engineering Ethics, 4e, Ch. 7, Prentice Hall, 2012.
• Martin, M.W. and Schinzinger, R., Introduction to Engineering Ethics, 2e, Ch.8, McGraw-
Hill, 2010.
• Top 9 ethical issues of AI
https://www.weforum.org/agenda/2016/10/top-10-ethical-issues-in-artificial-intelligence/
• SG Model AI Governance Framework
https://www.pdpc.gov.sg/-/media/files/pdpc/pdf-files/resource-for-organisation/ai/sgmodelaigovframework2.pdf
• European Commission: Ethics Guidelines for Trustworthy AI
https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai
• Rome Call for AI Ethics
http://www.academyforlife.va/content/dam/pav/documenti%20pdf/2020/CALL%2028%20febbraio/AI%20Rome%20Call%20x%20firma_DEF_DEF_.pdf
• EU’s New Privacy Rules
https://www.opensocietyfoundations.org/voices/eu-s-new-privacy-rules-are-only-first-step
• Privacy risks of the NHSX tracing app
https://tech.newstatesman.com/security/nhsx-contact-tracing-app-privacy-risks
• Watching the watchers
https://www.opensocietyfoundations.org/voices/q-and-a-watching-the-watchers-during-a-pandemic

57
Questions
1. If you looked into a mirror and saw a dirty face, what would you do?
2. Are AI engineers professionals? If so, what code of ethics should they follow?
3. Should the CEO’s/CTO’s of AI tech firms be held accountable for the
consequences of their products?
4. It takes two hands to clap. Besides asking AI firms to be ethical and respect our
rights, what duties should we impose on ourselves regarding the use of AI
technology?
5. What is meant to be human? To be humane?
6. Should we accept the argument that what I do (good or bad) doesn’t make a
difference or change the outcome?
7. Covid-19 has forced a halt and reset to our way of doing things, and shown us a
new perspective of life in our cities, our neighbors and society. What happens
after the reset? What do we choose to be the new normal?

58
Questions (focus on artificial humans)
• Should the knowledge of robots be varied and limited, so that no single
robot or machine possesses all knowledge?
• Should robots have a limited life-span?
• Should the intelligence of robots be less than that of humans?
• Should robots be autonomous?
• Should robots develop free will?
• Should robots develop into sentient beings?
• Should artificial humans be kept distinct from humans?
• Should humans (or a mythical human) be promoted to artificial humans as
a supreme being?

59
