A Comprehensive Review of AI Myths and Misconceptions
Abstract
A realistic understanding of artificial intelligence (AI) technology benefits everyone because
most of us interact with it on a daily basis. Promoting AI literacy allows more people to
participate in the discussion about the benefits and costs of AI technology, how and when
it should be used, and what we want our future with AI to look like in general. Myths and
misconceptions about AI can impede these debates and, at worst, lead to poor decisions and actions. To counteract this, this document clarifies common myths and
misconceptions about AI through simple explanations (additional remarks can be found in
the extended version [31]). I hope that a broad audience will find this resource useful.
Keywords: artificial intelligence, myths and misconceptions, education, public discourse,
AI literacy
1 Introduction
Driven by new advances, artificial intelligence (AI) has become one of the most talked about
topics in recent years. The hype is amplified by the media and popular culture, which often
portray AI as either a savior or a villain. As a result, many myths and misconceptions
about AI have emerged, making AI seem magical, unapproachable, and inscrutable to many.
However, it is important to develop realistic expectations for the ongoing AI transformation
in both industry and society. Buzz and mystery only make mismanagement and misuse more
likely. The best prevention is to increase general AI literacy, that is, individual knowledge
about AI technology. This document helps by debunking myths and providing a clear
understanding of what AI is, what it can do, and what it cannot (yet) do.
Many misconceptions about AI are due to the inherent vagueness of the term AI, which
leaves much room for guesswork and wishful thinking [14; 23]. As a result, the term AI is
used ambiguously in different contexts. To disentangle these contexts, it is helpful to first
define intelligence, which is a complex and multifaceted trait. Here, we adopt a maximally
broad definition of intelligence as goal-directed adaptive behavior, that is, the ability to
achieve complex goals [36; 38]. This definition abstracts from human intelligence. Thus, it
extends to the contexts in which the term AI is used:
- Field of research. AI is the name of the field dedicated to the study and creation of intelligent machines.
- Technology. The entirety of AI techniques, algorithms, products, etc. is often referred to as AI, usually to discuss AI technology in general or to ascribe characteristics to it (most myths in this document fall into this category).
- Machine capability. AI is used to describe the capability of a machine to demonstrate intelligent behavior. Expanding on the definition of intelligence above, AI can in this context be defined as non-biological intelligence [38].
- Particular system. The term AI is also used to refer to a particular system or agent that has some kind of non-biological intelligence.
When discussing complex topics, a common understanding of terms is important to avoid
misunderstandings [32]. For our discussion of AI myths, this implies that we use additional
contextual qualifiers for the term AI whenever suitable. For example, we refer to a system
that uses AI technology as an AI system or an AI. Other technical terms cannot be fully avoided; a glossary is provided for assistance (see Section 8).
Debunking myths is not always straightforward. They may be inherently controversial,
they may be true in some cases but misrepresent AI technology as a whole, or they may
be sensitive to AI progress (the trajectory of which no one can predict with certainty). To make it easier to assess the truth content of the myths in this collection, they are tagged with appropriate attributes. In addition, they are organized into thematic sections.
Each myth is briefly discussed and clarified. There is also an extended version available [31],
in which an additional selection of short narrative elements makes the complex realities of
AI tangible through examples, bridges them to common knowledge through analogies, and
condenses them into concise remarks. Overall, my goal is to dispel AI myths on an intuitive level as well, which I hope will be especially useful to non-technical readers.
The bottom line is that this document aims to enable different people and actors to have
more realistic expectations about AI technology. I believe this is much needed for a healthy
public discourse about a technology that has the potential to change our lives in unprecedented ways. I also hope that the comprehensive nature of this document will stimulate curiosity and open up opportunities to engage more deeply with this important topic.
Disclaimer: I tried to make this review as comprehensive as possible, also drawing inspiration from some great previous work on AI myths [2; 3; 4; 11; 14; 23; 29; 35]. Nevertheless,
this collection is unlikely to be exhaustive. Also, while I have done my best to reflect the
current perception of AI, I do expect the importance, prevalence, and possibly even the
validity of myths to change. For these reasons, I expect this document to evolve over time.
Feedback is always welcome.
Review: AI myths and misconceptions (Version: November 8, 2023)
The field of artificial intelligence (AI) was formally founded during a workshop at Dartmouth College in 1956. However, there was substantial disagreement regarding the name of the
field. Proponents of the term artificial intelligence emphasized its marketing appeal, which
competing suggestions like complex information processing could not offer. This is one of the reasons why the term AI ultimately prevailed. It is also the cause of much confusion about
the definition of AI. The confusion also manifests itself in various myths and misconceptions,
which I discuss in this section.
Myth 2.1:
Artificial intelligence, machine learning, and deep learning are the same.
Myth 2.2:
AI is whatever has not been done yet (the AI effect).
rather than what it has achieved. The AI effect results from the ever-changing landscape
of AI and a constant redefinition of what constitutes true artificial intelligence.
Myth 2.3:
AI is whatever is labeled AI.
Myth 2.4:
AI = shiny humanoid robots.
Myth 2.5:
AI is magic.
Classification: misleading.
Discussion/Reality: It is natural for people to be fascinated by new technologies like AI.
When someone refers to AI as magic, it may reflect a perceived inscrutability of AI. While
the statement is probably not meant literally in most cases, it can be misleading. People
might get the impression that AI can solve all problems effortlessly (see Myth 3.3) or without
human involvement or input (see Myths 4.3 and 4.5). This can lead to disappointment when
the reality is that AI still requires significant human efforts to function properly. Indeed,
AI systems are the result of a purely technological engineering process that usually involves
many humans.
Myth 2.6:
AI is all algorithms.
Myth 2.7:
AI is just superficial statistics.
However, much of the controversy embodied in this myth may stem from a different source:
The complex and philosophical question of whether an AI system actually understands what
it is processing. An AI system may mimic understanding based on its ability to efficiently
compress statistics about its training data. If, when, and how AI systems can come close
to human-like understanding is a matter of debate. It might even require some kind of
conscious awareness of meaning (see Myth 6.5). Ultimately, however, the practical value
of an AI system should depend on the outcomes it achieves, not on whether or not it has
human-like understanding.
3 Myths - AI capabilities and characteristics
The myths and misconceptions in this section result from the fact that AI capabilities are
both over- and underestimated. The considerations that follow will therefore lead to a more realistic picture of AI capabilities.
Myth 3.1:
AI works perfectly (it is always accurate, fair, and unbiased).
Myth 3.2:
AI can predict the future.
Myth 3.3:
AI can be used anywhere and can solve any problem.
Myth 3.4:
AI automatically accounts for pre-established facts.
Myth 3.5:
AI lacks creativity.
Myth 3.6:
AI lacks empathy and emotional intelligence.
Myth 3.7:
AI is inherently good (or bad).
Myth 3.8:
AI algorithms work like the human brain.
Myth 3.9:
AI makes computers think.
associated with cognitive activities such as reasoning, problem solving, and decision mak-
ing. Other common associations with human thinking are that it is driven by intrinsic
motivations and intentions. In addition, consciousness is often considered essential to it.
Current AI systems are far from resembling human thought processes [14]. For example,
human thinking processes human-specific information from incoming stimuli (perception
and emotional responses) and internal mental representations (memory and acquired ex-
periences). At the same time, today’s most advanced AI systems fall short of human-like
understanding (see also Myth 2.7). They also most likely lack consciousness (see Myth 6.5).
This is not necessarily bad because the goal is usually not to make computers think like
humans. AI is not about creating human replicas. It is about leveraging technology to solve
complex challenges and enhance human potential.
Myth 3.10:
AI systems have agency.
Myth 4.1:
AI systems are easy to build and anyone can do it.
has not changed even though AI tools and frameworks have become more accessible and
user-friendly: Some AI development tools now enable easy drag-and-drop style development.
They offer predefined workflows, models, and algorithms [16]. However, it is still necessary
to understand the underlying principles: Otherwise, it is hard to make informed decisions
and troubleshoot problems as they arise. Insufficient awareness and careless use may lead
to untrustworthy applications and even to ethical problems.
Challenges in building AI systems.
(1) Gathering and preprocessing vast amounts of data (quality refinement, labeling, etc.).
(2) Integrating AI systems with existing infrastructure and workflows.
(3) Balancing model accuracy with time and computational resource constraints.
(4) Ensuring robustness and reliability in real-world scenarios.
(5) Overcoming the black-box problem: interpretable and explainable outcomes.
(6) Adapting to rapidly evolving research and technology.
(7) Addressing ethical considerations (for example, biases, data privacy, societal impact).
(8) Navigating legal and regulatory frameworks surrounding AI.
(9) Acquiring and retaining specialized talent in the field.
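To make this concrete: even a working classifier fits in a few lines of code, yet the choices behind those lines (how to split the data, what to evaluate on) are exactly where the understanding discussed above is needed. A minimal sketch in pure Python, with entirely illustrative toy data and numbers:

```python
# A tiny classifier is easy to write; the understanding lies in choices
# such as evaluating on held-out data rather than on the training set.
# The 1-D toy data and all numbers below are illustrative assumptions.
import random

random.seed(0)
# Two classes drawn from overlapping Gaussians.
data = [(random.gauss(0.0, 1.0), 0) for _ in range(100)]
data += [(random.gauss(2.0, 1.0), 1) for _ in range(100)]
random.shuffle(data)
train, test = data[:150], data[150:]

def centroid(samples, label):
    xs = [x for x, y in samples if y == label]
    return sum(xs) / len(xs)

# "Training" amounts to estimating one centroid per class.
c0, c1 = centroid(train, 0), centroid(train, 1)

def predict(x):
    return 0 if abs(x - c0) < abs(x - c1) else 1

# Evaluating on unseen data is what makes the accuracy estimate honest.
test_acc = sum(predict(x) == y for x, y in test) / len(test)
```

The point is not the specific model: a careless user who scored the model on its own training data would get an equally short but misleading program, which no drag-and-drop tool prevents automatically.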
Myth 4.2:
To improve an AI system, just 'throw' more data at it.
Myth 4.3:
AI systems learn autonomously and without human programming.
Myth 4.4:
AI systems automatically improve over time.
Myth 4.5:
AI systems operate without human intervention.
5 Myths - more imminent impact of AI technology
Myth 5.1:
AI will only affect routine and manual jobs.
Myth 5.2:
AI will take away our jobs and replace humans.
Myth 5.3:
AI technology makes us stupid.
Classification: controversial.
Discussion/Reality: This myth portrays a dystopian scenario in which machines take over our daily mental tasks. Our brains are left with little to do, and inherent knowledge is rendered obsolete by instant access to information through touch-screen interfaces. The essence of human wisdom fades away, and society gradually deteriorates in all aspects: physically, mentally, and spiritually. It becomes an idiocracy [28]. In a contrasting utopian scenario, AI frees up
our time and mental energy by automating simple tasks. We keep the routines that we like.
Beyond that, we are free to choose how much of our time we spend on creative and intellectual
endeavors. Thanks to AI technology, we can efficiently interact with existing information
and stimulate our imagination with AI-generated content. In this scenario, AI effectively
enhances our intelligence.
Our future trajectory is likely to lie somewhere between these two scenarios. We can
influence it by recognizing that we, as individuals, have choices regarding our use of AI
and other modern technologies. Whether we are in danger of becoming too dependent on technology and losing our capacity for critical thinking is a question of mindset. Education has an
important role to play in this regard [8; 28; 41]. After all, the impact of AI technology
depends on how we integrate it into our lives.
Myth 5.4:
AI destroys our privacy.
notion. They are fueled by the ability of AI to mine the vast amounts of user-generated data
from digital services. Specialized AI algorithms can make increasingly accurate predictions
about individuals. These predictions are monetizable because they allow, for example,
targeted advertising and product recommendations. More and better data can improve
the underlying AI algorithms. Therefore, data is now driving entire business models for
companies like Google and Facebook. These companies offer their services seemingly for
free, but behind the scenes they collect data. Users pay for "free" digital services with their
data, often with questionable consent. The user essentially becomes the product [27].
The use of personal data concerns informational privacy. However, by manipulating choices
and behavior, AI algorithms can also cause violations of decisional privacy (the right to make
free choices) and behavioral privacy (the right to act as one wishes). Overall, AI technology
clearly has the potential to interfere with individual privacy rights. Therefore, we should
establish reasonable boundaries (see Myth 5.5) [2]. Meanwhile, AI approaches are already
used to improve network security, where systems adapt to attacks and malware. This shows
that ultimately, the impact of AI on privacy is not inherently bad or good - it depends on
where and how AI is used (see Myth 3.7).
Myth 5.5:
AI cannot or should not be regulated.
(2) Conducting risk assessments and regularly updating them as technology evolves.
(3) Coordinating and forming agreement among a large base of international stakeholders on common AI safety standards (ensuring a uniform impact on the competitive landscape).
(4) Establishing ethical guidelines and codes of conduct (these alone will not be enough [23]).
(5) Developing AI systems with built-in safety measures [17].
(6) Establishing mechanisms for continuous monitoring of safety-relevant AI systems (for
example, audits and feedback loops).
This collection of myths would not be complete without a discussion of the highly speculative
and controversial claims about the future of AI. Expert opinions on these matters differ
widely. Clearly, no one knows what the future of AI will actually look like. Therefore,
long-term predictions should always be taken with some skepticism. This also means that, for now, we need to keep an open mind and consider all possibilities. The purpose of this
section is therefore to introduce some interesting points and perspectives that may serve as
starting points for further exploration.
To prepare the discussions, let us define some terms. First, general intelligence describes
the ability to achieve virtually any goal (including learning). Artificial general intelligence
(AGI) refers to the ability of a non-biological system to accomplish any cognitive task at
least as well as humans (non-biological general intelligence). We use the term artificial superintelligence (ASI) to refer to general intelligence that is far beyond human level.
Myth 6.1:
Artificial general intelligence/superintelligence is coming soon.
that allows it to tackle complicated challenges in various domains such as mathematics,
coding, vision, medicine, law, and even psychology. Given the breadth and depth of these
capabilities, some have concluded that we already have an early AGI version, albeit an
incomplete one [7].
This brings us to the question of the time scale. Once AGI is reached, it may take only a brief moment to also reach ASI, especially if an intelligence explosion should take place
(see Myth 6.2). AI researchers vary widely in their estimates of when we will have the
first AGI/ASI systems. Some say that it will take only a few decades or even years, others
estimate centuries, and still others believe it will never happen. Time estimates always need
to be treated with caution. The problem with them is that there may be several peaks to
climb, but from where we are, we can only see the next one clearly.
Myth 6.2:
An intelligence explosion will cause a technological singularity.
First, optimization power will likely keep increasing. This is supported by several trends, including improving hardware, the increasing availability of data that AI systems can absorb, and a growing number of talented people who seek to improve AI. Applied optimization power will also likely remain high during a possible transition to ASI. Initially, this could be because humans try harder to improve a promising AI system. Later, if an AI system should eventually become capable of designing further improvements itself, effort and progress might accelerate to digital speeds [5].
The counterforce to optimization power is recalcitrance, that is, resistance and barriers to
progress. Some arguments point to a reduction in such barriers: For example, efforts to
improve general-purpose systems may be streamlined as people work less on task-specific
systems [5]. But there are also strong counterarguments, such as the need for testing [3].
Testing is necessary whenever a change is made to a system, otherwise one cannot be sure
that previously existing capabilities have not been lost: Without proper testing, a system
cannot be trusted. Next, it is not clear whether future dynamics will encourage project
teams to build AI systems with extensive self-improvement capabilities at all. In any case,
the possibility of an intelligence explosion will remain a live topic of debate.
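The interplay of optimization power and recalcitrance can be made tangible with a toy numerical model in the spirit of [5], where the rate of intelligence growth is taken to be optimization power divided by recalcitrance. This is only a sketch: the functional forms and all constants below are illustrative assumptions, not estimates.

```python
# Toy model: dI/dt = optimization_power / recalcitrance (cf. [5]).
# All functional forms and constants are illustrative assumptions.
def simulate(steps=100, dt=0.1, recalcitrance=5.0):
    intelligence = 1.0
    trajectory = [intelligence]
    for _ in range(steps):
        # Assumption: once a system contributes to its own design,
        # applied optimization power grows with its own intelligence.
        optimization_power = 1.0 + intelligence
        intelligence += dt * optimization_power / recalcitrance
        trajectory.append(intelligence)
    return trajectory

traj = simulate()
```

With constant recalcitrance and optimization power that grows with the system's own intelligence, the trajectory grows exponentially (an "explosion"); letting recalcitrance rise with intelligence instead (for example, to model growing testing burdens) makes the same sketch level off, mirroring the two sides of the debate above.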
Myth 6.3:
Artificial superintelligence will form a singleton (world government).
Myth 6.4:
Artificial superintelligence will cause human extinction.
The arrival of an AI powerful enough to make the previous considerations real may be a
long way off (see Myth 6.1). However, if there is even a small chance that there will be
such powerful AI, we would do well to prepare for it and influence the outcome in a positive
direction [5; 17; 29; 38]. On the other hand, while we cannot neglect the existential threats
posed by ASI, we should not exaggerate them either. Much effort should also be devoted
to current challenges that do not presuppose ASI. This includes the prevention of malicious
AI use and AI races, as well as the safe design of autonomous AI agents (see Myth 5.5).
Myth 6.5:
AI will become conscious.
Myth 6.6:
AI will make us immortal.
that perform precise tasks at the cellular or molecular level [13]. In addition, a better under-
standing of the aging process could lead to treatments that slow down or reverse age-related
decline. Finally, advances in biotechnology and genetic engineering could lead to biological
enhancements that significantly improve human health and longevity.
The second possibility is digital immortality. It could be achieved by transforming someone’s
mind (consciousness, memories, etc.) into a digital form. This process is called mind
uploading. It would create a digital replica, thus granting theoretical immortality in a digital
state. Mind uploading should be feasible because minds are essentially specific arrangements
of atoms and neurons that can be computed and therefore simulated. However, there
are some essential technologies that have yet to be developed [5]: First, brains must be
reliably scanned, then the brain structure has to be reconstructed from the scans, and
finally, the mind has to be implemented as a simulation on a sufficiently powerful computer
(brain emulation). Taken together, mind uploading appears to be a long way off.
Powerful AI systems could help solve some of the remaining challenges.
The prospect of immortality is a complex philosophical and ethical challenge. In addition
to overcoming biological limitations, it raises existential questions that must be carefully
addressed. These include ethical concerns regarding the impact of immortality on overpop-
ulation, resource scarcity, and societal dynamics.
7 Conclusion
According to the historian Arthur Schlesinger, science and technology revolutionize our
lives, but memory, tradition, and myth frame our response. Indeed, it is quite clear that
AI technology will be increasingly integrated into our lives. The how of this integration
depends largely on our perception of AI technology, that is, on individual and cultural
backgrounds, myths, and popular conceptions and misconceptions. All these elements act
as anchoring points in our space of ideas [29]. Given the significant influence of these factors
on our future with AI, it seems reasonable to try to put them on a realistic footing.
I, like many others [4; 25; 28; 41], think that education has an important role to play in
this regard. Education can help to promote AI literacy and awareness, reduce anxiety,
and address misconceptions. This collection and discussion of AI myths is written with
these goals in mind. I was strongly motivated by the prospect of contributing to a more
constructive and inclusive dialogue between different stakeholders, including researchers,
policy makers, industry experts, ethicists, and community representatives.
Despite all the promises that AI technology holds for society, we must remain aware of its
subtle but significant costs. These include resource consumption and the fact that some
people may feel left behind (see Myth 5.2). More generally, it means that we need to devote
resources to solving current challenges, such as preventing AI misuse (see Myth 5.5). It
also means that we need to take a long-term view and look for global solutions. The race
is on for better AI technology and artificial general intelligence. At this point, no one can
be sure where it will take us. Nevertheless, we should work together to figure out what we
want the AI transformation to look like. Doing so will give us more control over the outcome. So let's keep talking and shape the AI transformation for the best.
References
[1] Andrea Agostinelli, Timo I Denk, Zalán Borsos, Jesse Engel, Mauro Verzetti, Antoine
Caillon, Qingqing Huang, Aren Jansen, Adam Roberts, Marco Tagliasacchi, Matt Sharifi, Neil Zeghidour, and Christian Frank. MusicLM: Generating music from text. arXiv
preprint arXiv:2301.11325, 2023. (Cited on pages 9 and 14.)
[2] Robert D Atkinson. 'It's going to kill us!' and other myths about the future of
artificial intelligence. Information Technology & Innovation Foundation, 2016. (Cited
on pages 2, 8, 15, and 16.)
[3] Peter Bentley. The three laws of artificial intelligence: Dispelling common myths.
Should we fear artificial intelligence, pages 6–12, 2018. (Cited on pages 2 and 18.)
[4] Arne Bewersdorff, Xiaoming Zhai, Jessica Roberts, and Claudia Nerdel. Myths, mis-
and preconceptions of artificial intelligence: A review of the literature. Computers and
Education: Artificial Intelligence, page 100143, 2023. (Cited on pages 2 and 21.)
[5] Nick Bostrom. Superintelligence. Dunod, 2017. (Cited on pages 18, 19, 20, and 21.)
[7] Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric
Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al.
Sparks of artificial general intelligence: Early experiments with GPT-4. arXiv preprint
arXiv:2303.12712, 2023. (Cited on pages 9, 14, and 18.)
[8] Nicholas Carr. Is Google making us stupid? Teachers College Record, 110(14):89–94,
2008. (Cited on page 15.)
[10] Florinel-Alin Croitoru, Vlad Hondru, Radu Tudor Ionescu, and Mubarak Shah. Diffu-
sion models in vision: A survey. IEEE Transactions on Pattern Analysis and Machine
Intelligence, 2023. (Cited on pages 9 and 14.)
[11] Constance de Saint Laurent. In defence of machine learning: debunking the myths
of artificial intelligence. Europe’s journal of psychology, 14(4):734, 2018. (Cited on
page 2.)
[12] Ian J Deary. Intelligence: A very short introduction, volume 39. Oxford University
Press, USA, 2020. (Cited on page 27.)
[13] Eric Drexler. Engines of creation: the coming era of nanotechnology. Anchor, 1987.
(Cited on page 21.)
[14] Frank Emmert-Streib, Olli Yli-Harja, and Matthias Dehmer. Artificial intelligence:
A clarification of misconceptions, myths and desired status. Frontiers in artificial
intelligence, 3:524339, 2020. (Cited on pages 1, 2, 10, and 11.)
[15] Irving John Good. Speculations concerning the first ultraintelligent machine. In Ad-
vances in computers, volume 6, pages 31–88. Elsevier, 1966. (Cited on page 18.)
[16] Xin He, Kaiyong Zhao, and Xiaowen Chu. AutoML: A survey of the state-of-the-art.
Knowledge-Based Systems, 212:106622, 2021. (Cited on pages 12 and 13.)
[17] Dan Hendrycks, Mantas Mazeika, and Thomas Woodside. An Overview of Catastrophic
AI Risks. arXiv preprint arXiv:2306.12001, 2023. (Cited on pages 10, 16, 17, and 20.)
[18] Steven CH Hoi, Doyen Sahoo, Jing Lu, and Peilin Zhao. Online learning: A compre-
hensive survey. Neurocomputing, 459:249–289, 2021. (Cited on page 13.)
[19] Deborah G Johnson and Mario Verdicchio. Reframing AI discourse. Minds and Ma-
chines, 27:575–590, 2017. (Cited on page 11.)
[20] Douwe Kiela, Max Bartolo, Yixin Nie, Divyansh Kaushik, Atticus Geiger, Zhengxuan
Wu, Bertie Vidgen, Grusha Prasad, Amanpreet Singh, Pratik Ringshia, et al. Dynabench: Rethinking benchmarking in NLP. arXiv preprint arXiv:2104.14337, 2021.
(Cited on page 17.)
[21] Seo Young Kim, Bernd H Schmitt, and Nadia M Thalmann. Eliza in the uncanny
valley: Anthropomorphizing consumer robots increases their perceived warmth but
decreases liking. Marketing letters, 30:1–12, 2019. (Cited on page 10.)
[22] Shane Legg and Marcus Hutter. Universal intelligence: A definition of machine intel-
ligence. Minds and machines, 17:391–444, 2007. (Cited on page 27.)
[23] Daniel Leufer. Why we need to bust some myths about AI. Patterns, 1(7):100124,
2020. (Cited on pages 1, 2, 11, and 17.)
[24] Daniel Leufer, Alexandra Steinbrück, Zuzana Liptakova, Kathryn Mueller, and Rachel
Jang. AI Myths. https://www.aimyths.org/, 2020. Accessed: 2023-07-01. (Cited on
page 16.)
[25] Duri Long and Brian Magerko. What is AI literacy? Competencies and design consid-
erations. In Proceedings of the 2020 CHI conference on human factors in computing
systems, pages 1–16, 2020. (Cited on pages 16 and 21.)
[26] Carl Macrae. Learning from the failure of autonomous and intelligent systems: Acci-
dents, safety, and sociotechnical sources of risk. Risk analysis, 42(9):1999–2025, 2022.
(Cited on page 10.)
[27] Karl Manheim and Lyric Kaplan. Artificial intelligence: Risks to privacy and democ-
racy. Yale JL & Tech., 21:106, 2019. (Cited on page 16.)
[28] Janusz Morbitzer. Into Idiocracy–pedagogical reflection on the epidemic of stupidity
in the generation of the internet era. Zeszyty Naukowe Wyższej Szkoly Humanitas.
Pedagogika, (17):125–137, 2018. (Cited on pages 15 and 21.)
[29] Roberto Musa Giuliano. Echoes of myth and magic in the language of artificial intel-
ligence. AI & society, 35(4):1009–1024, 2020. (Cited on pages 2, 20, and 21.)
[30] Ulric Neisser, Gwyneth Boodoo, Thomas J Bouchard Jr, A Wade Boykin, Nathan
Brody, Stephen J Ceci, Diane F Halpern, John C Loehlin, Robert Perloff, Robert J
Sternberg, and Susana Urbina. Intelligence: knowns and unknowns. American psy-
chologist, 51(2):77, 1996. (Cited on page 27.)
[33] Inioluwa Deborah Raji, Timnit Gebru, Margaret Mitchell, Joy Buolamwini, Joonseok
Lee, and Emily Denton. Saving face: Investigating the ethical concerns of facial recog-
nition auditing. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and
Society, pages 145–151, 2020. (Cited on page 10.)
[34] Ulrike Reisach. The responsibility of social media in times of societal and political
manipulation. European Journal of Operational Research, 291(3):906–917, 2021. (Cited
on page 10.)
[35] Jonathan Roberge, Marius Senneville, and Kevin Morin. How to translate artificial
intelligence? Myths and justifications in public discourse. Big Data & Society, 7(1):
2053951720919968, 2020. (Cited on pages 2 and 16.)
[37] Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction. MIT
press, 2018. (Cited on pages 10 and 13.)
[38] Max Tegmark. Life 3.0: Being human in the age of artificial intelligence. Vintage,
2018. (Cited on pages 1, 2, 17, 19, 20, and 26.)
[39] Giulio Tononi and Christof Koch. Consciousness: here, there and everywhere? Philo-
sophical Transactions of the Royal Society B: Biological Sciences, 370(1668):20140167,
2015. (Cited on page 20.)
[40] Anthony Zador, Blake Richards, Bence Ölveczky, Sean Escola, Yoshua Bengio,
Kwabena Boahen, Matthew Botvinick, Dmitri Chklovskii, Anne Churchland, Claudia
Clopath, et al. Toward next-generation artificial intelligence: catalyzing the NeuroAI
revolution. arXiv preprint arXiv:2210.08340, 2022. (Cited on page 10.)
[41] Brahim Zarouali, Natali Helberger, and Claes H De Vreese. Investigating algorith-
mic misconceptions in a media context: Source of a new digital divide? Media and
Communication, 9(4):134–144, 2021. (Cited on pages 15 and 21.)
8 Glossary
AI agent. An AI system that pursues goals more or less autonomously by determining and performing subtasks and actions on its own.
AI effect. The phenomenon that once a task is successfully performed using AI technology,
it is often no longer referred to as AI (see Myth 2.2).
AI model. A specific mathematical function that computes predictions/outputs from given
data/inputs. Different AI models can have the same architecture and differ just in the values
of their model parameters.
Algorithm. A step-by-step procedure for solving a problem or performing a computation (see Myth 2.6).
Anthropomorphism. The attribution of human-like characteristics and behaviors to
machines/AI (more generally, to all kinds of non-human objects and entities).
(Model) architecture. The structure of the mathematical function that underlies an AI
model.
Artificial intelligence (AI). Used in different contexts (see the Introduction): refers to
a field of research, but also to a machine capability (non-biological intelligence).
Artificial general intelligence (AGI). The ability to accomplish any cognitive task at
least as well as humans (see Myth 6.1).
Artificial neural network. A model architecture consisting of artificial neurons that
use their inputs (from incoming connections) to compute outputs (for outgoing connections).
Artificial superintelligence (ASI). General intelligence far beyond human level (see
Myth 6.1).
Black-box problem. Refers to the inscrutability of AI models, which can cause trust
issues and outcomes that are not interpretable (see also explainable AI).
Complex information processing. An alternative name for the field of AI.
Consciousness. Having subjective experience (see Myth 6.5).
Deep learning. A subset of machine learning that encompasses algorithms that implement
large artificial neural networks and adapt them by learning from vast amounts of data
(see Myth 2.1).
Evaluation. The process of testing how well an AI model works. Synonyms: testing.
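As an illustrative sketch of evaluation in a machine-learning setting (toy model and data invented for illustration): the model is scored on held-out test examples that it did not learn from.

```python
def accuracy(model, test_data):
    """Fraction of held-out (input, label) pairs the model gets right."""
    correct = sum(1 for x, y in test_data if model(x) == y)
    return correct / len(test_data)

def is_positive(x):  # a trivial stand-in for a trained model
    return x > 0

test_set = [(-2, False), (-1, False), (1, True), (3, True), (0, True)]
acc = accuracy(is_positive, test_set)  # 0.8: the model errs on x = 0
```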
Explainable AI. A property of an AI system that allows humans to intellectually verify
outcomes/predictions (also refers to the methods to achieve this).
Foundation model. A general-purpose AI model that can perform much more than
just a single specific task.
General intelligence. The ability to achieve virtually any goal, including the ability to
learn [38].
Generative AI. A class of AI techniques aimed at creating novel content, for example,
images, videos, and text.
Intelligence. Goal-directed adaptive behavior (but alternative definitions exist, see Sec-
tion 9).
Intelligence explosion. Recursive self-improvement of an AI system that rapidly leads
to artificial superintelligence. Implies a technological singularity (see Myth 6.2).
Labeling. A step that is sometimes necessary during the preparation of training data.
Synonyms: annotating.
Large language models. A relatively new class of foundation models that are trained
on vast amounts of text to process and generate natural language. They power many
modern AI chatbots.
Learning. See training.
Machine learning. A subset of AI technology that encompasses algorithms that learn
from data/experience, enabling predictions and data-driven decisions (see Myth 2.1).
Mind uploading. The process of transforming someone’s mind (consciousness, memories,
etc.) into a digital form (see Myth 6.6).
Moore’s law. The observation that computing power (more specifically, the number of
transistors in an integrated circuit) doubles about every two years.
Narrow AI. A task-specialized AI system with limited capabilities.
(AI model) parameters. The free variables of an AI model whose values are adapted to
the training data during the learning phase.
Reinforcement learning. A learning paradigm in which AI agents adjust their behavior
based on reward signals which they receive for their actions.
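A minimal sketch of this idea in a simple multi-armed bandit setting (names, parameters, and reward model are all illustrative assumptions): the agent improves its estimate of each action's value from the reward signals it receives, and increasingly picks the action it estimates to be best.

```python
import random

def run_bandit(true_means, steps=1000, eps=0.1, seed=0):
    """Epsilon-greedy agent: it adjusts its reward estimates based on
    the reward signals received for its actions."""
    rng = random.Random(seed)
    n = len(true_means)
    estimates = [0.0] * n  # the agent's learned value of each action
    counts = [0] * n
    for _ in range(steps):
        if rng.random() < eps:
            action = rng.randrange(n)  # explore: try a random action
        else:                          # exploit: pick the best-looking action
            action = max(range(n), key=lambda i: estimates[i])
        reward = true_means[action] + rng.gauss(0.0, 0.1)  # noisy reward signal
        counts[action] += 1
        # incremental running mean of the rewards observed for this action
        estimates[action] += (reward - estimates[action]) / counts[action]
    return estimates

est = run_bandit([0.2, 0.8, 0.5])  # the agent should learn that action 1 is best
```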
Representation. A fundamental concept in AI that involves creating models and struc-
tures to represent information and knowledge such that intelligent systems can use it.
(Technological) Singularity. A hypothetical future point in time when technological
growth becomes uncontrollable and irreversible, leading to unpredictable changes in human
civilization (see Myth 6.2).
(Classic) statistics. The branch of mathematics that deals with the collection, analysis,
and interpretation of data.
(Human) thinking. A term to summarize human cognitive activities such as reasoning,
problem solving, and decision making (see Myth 3.9).
Training. The process of adapting a model (for example, the parameters of a neural
network architecture) to the training data. Synonyms: learning, teaching, optimization.
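For example, training a 1-D linear model with gradient descent (one common adaptation procedure; shown here only as an illustrative sketch with invented data):

```python
# Training data: pairs (x, y) sampled from the line y = 2x + 1.
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]

w, b = 0.0, 0.0  # initial parameter values of the model y = w*x + b
lr = 0.05        # learning rate (step size)

for _ in range(2000):  # the training loop adapts w and b to the data
    grad_w = grad_b = 0.0
    for x, y in data:
        err = (w * x + b) - y             # prediction error on one example
        grad_w += 2 * err * x / len(data)
        grad_b += 2 * err / len(data)
    w -= lr * grad_w  # nudge the parameters to reduce the squared error
    b -= lr * grad_b
# w and b end up close to 2.0 and 1.0
```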
Training data. The data that an AI system learns from (in a machine-learning setting).
Transfer learning. A method to reduce the training effort for a new task. It reuses the
model parameters from an existing AI model for a different (but similar) task.
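A minimal, self-contained sketch of the idea with a 1-D linear model (illustrative only, not a real transfer-learning pipeline): start from an existing model's parameter values and briefly continue training on the new task's data instead of starting from scratch.

```python
def fine_tune(params, new_data, lr=0.05, steps=500):
    """Continue gradient-descent training of a 1-D linear model
    from existing parameter values instead of from scratch."""
    w, b = params
    for _ in range(steps):
        grad_w = grad_b = 0.0
        for x, y in new_data:
            err = (w * x + b) - y  # prediction error on the new task
            grad_w += 2 * err * x / len(new_data)
            grad_b += 2 * err / len(new_data)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

source_params = (2.0, 1.0)           # reused from a model trained on a similar task
new_task = [(0.0, 1.5), (1.0, 3.5)]  # shifted task: y = 2x + 1.5
w, b = fine_tune(source_params, new_task)
```

Because the starting parameters already fit a similar task, far fewer updates are needed than when training from random initial values.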
Myth 9.1:
Human intelligence only resides in the brain.
Classification: controversial.
Discussion/Reality: Human intelligence is often associated with the brain and cognitive
abilities, such as learning, reasoning, and problem solving. However, human intelligence is
also influenced by genetics and environmental factors (see Myth 9.2). Intelligence can also
be understood as a joint function of the mind (brain), sensorimotor modalities (perceptual
and motor abilities), and environment (constraints on the space of possible actions) [9].
Myth 9.2:
Human intelligence is solely determined by genetics and thus immutable.
Myth 9.3:
A high intelligence quotient (IQ) guarantees success in life.