
Review: AI myths and misconceptions (Version: November 8, 2023)

A Comprehensive Review of AI Myths and Misconceptions


(Short Version)

Frank G Nussbaum frank@fgnussbaum.com

Abstract
A realistic understanding of artificial intelligence (AI) technology benefits everyone because
most of us interact with it on a daily basis. Promoting AI literacy allows more people to
participate in the discussion about the benefits and costs of AI technology, how and when
it should be used, and what we want our future with AI to look like in general. Myths and
misconceptions about AI can impede these debates and, at worst, lead to bad actions or
decisions. This should be avoided. To that end, this document clarifies common myths and
misconceptions about AI through simple explanations (additional remarks can be found in
the extended version [31]). I hope that a broad audience will find this resource useful.
Keywords: artificial intelligence, myths and misconceptions, education, public discourse,
AI literacy

1 Introduction
Driven by new advances, artificial intelligence (AI) has become one of the most talked about
topics in recent years. The hype is amplified by the media and popular culture, which often
portray AI as either a savior or a villain. As a result, many myths and misconceptions
about AI have emerged, making AI seem magical, unapproachable, and inscrutable to many.
However, it is important to develop realistic expectations for the ongoing AI transformation
in both industry and society. Buzz and mystery only make mismanagement and misuse more
likely. The best prevention is to increase general AI literacy, that is, individual knowledge
about AI technology. This document helps by debunking myths and providing a clear
understanding of what AI is, what it can do, and what it cannot (yet) do.
Many misconceptions about AI are due to the inherent vagueness of the term AI, which
leaves much room for guesswork and wishful thinking [14; 23]. As a result, the term AI is
used ambiguously in different contexts. To disentangle these contexts, it is helpful to first
define intelligence, which is a complex and multifaceted trait. Here, we adopt a maximally
broad definition of intelligence as goal-directed adaptive behavior, that is, the ability to
achieve complex goals [36; 38]. This definition abstracts from human intelligence. Thus, it
extends to the contexts in which the term AI is used:
• field of research. AI is the name of the field dedicated to the study and creation
of intelligent machines.
• technology. The entirety of AI techniques, algorithms, products, etc. is often referred to as AI, usually to discuss AI technology in general or to ascribe characteristics
to it (most myths in this document fall into this category).
• machine capability. AI is used to describe the capability of a machine to demonstrate intelligent behavior. Expanding on the definition of intelligence above, AI can
be defined as non-biological intelligence in this context [38].
• particular system. The term AI is also used to refer to a particular system or agent
that has some kind of non-biological intelligence.
When discussing complex topics, a common understanding of terms is important to avoid
misunderstandings [32]. For our discussion of AI myths, this implies that we use additional
contextual qualifiers for the term AI whenever suitable. For example, we refer to a system
that uses AI technology as an AI system or an AI. There are other technical terms that
cannot be fully avoided. A glossary is provided for assistance (see Section 8).
Debunking myths is not always straightforward. They may be inherently controversial,
they may be true in some cases but misrepresent AI technology as a whole, or they may
be sensitive to AI progress (the trajectory of which no one can predict with certainty). To
make it easier to assess the truth content of the myths in this collection, they are tagged with
appropriate attributes. In addition, they are organized into the following categories:

Myths related to...
• the scope and definition of AI (Section 2),
• AI capabilities and characteristics (Section 3),
• the design and operation of AI systems (Section 4),
• the more imminent impact of AI technology (Section 5), and
• speculations about the future of AI (Section 6).

Each myth is briefly discussed and clarified. There is also an extended version available [31],
in which an additional selection of short narrative elements makes the complex realities of
AI tangible through examples, bridges them to common knowledge through analogies, and
condenses them into concise remarks. Overall, my goal is to dispel AI myths also on an
intuitive level, which I hope will be especially useful to non-technical readers.
The bottom line is that this document aims to enable different people and actors to have
more realistic expectations about AI technology. I believe this is much needed for a healthy
public discourse about a technology that has the potential to change our lives in unprece-
dented ways. In order to participate in this discourse, I hope that the comprehensive nature
of this document will generally stimulate curiosity and open up opportunities to better en-
gage with this important topic.
Disclaimer: I tried to make this review as comprehensive as possible, also drawing inspiration from some great previous work on AI myths [2; 3; 4; 11; 14; 23; 29; 35]. Nevertheless,
this collection is unlikely to be exhaustive. Also, while I have done my best to reflect the
current perception of AI, I do expect the importance, prevalence, and possibly even the
validity of myths to change. For these reasons, I expect this document to evolve over time.
Feedback is always welcome.


Contents

1 Introduction
2 Myths - scope and definition of artificial intelligence
2.1 Artificial intelligence, machine learning, and deep learning are the same.
2.2 AI is whatever has not been done yet (the AI effect).
2.3 AI is whatever is labeled AI.
2.4 AI = shiny humanoid robots.
2.5 AI is magic.
2.6 AI is all algorithms.
2.7 AI is just superficial statistics.
3 Myths - AI capabilities and characteristics
3.1 AI works perfectly (it is always accurate, fair, and unbiased).
3.2 AI can predict the future.
3.3 AI can be used anywhere and can solve any problem.
3.4 AI automatically accounts for pre-established facts.
3.5 AI lacks creativity.
3.6 AI lacks empathy and emotional intelligence.
3.7 AI is inherently good (or bad).
3.8 AI algorithms work like the human brain.
3.9 AI makes computers think.
3.10 AI systems have agency.
4 Myths - AI system design and operation
4.1 AI systems are easy to build and anyone can do it.
4.2 To improve an AI system, just 'throw' more data at it.
4.3 AI systems learn autonomously and without human programming.
4.4 AI systems automatically improve over time.
4.5 AI systems operate without human intervention.
5 Myths - more imminent impact of AI technology
5.1 AI will only affect routine and manual jobs.
5.2 AI will take away our jobs and replace humans.
5.3 AI technology makes us stupid.
5.4 AI destroys our privacy.
5.5 AI cannot or should not be regulated.
6 Outlook - speculations about the future of AI
6.1 Artificial general intelligence/superintelligence is coming soon.
6.2 An intelligence explosion will cause a technological singularity.
6.3 Artificial superintelligence will form a singleton (world government).
6.4 Artificial superintelligence will cause human extinction.
6.5 AI will become conscious.
6.6 AI will make us immortal.
7 Conclusion
8 Glossary of terms and how they are used
9 Appendix: Myths about human intelligence
9.1 Human intelligence only resides in the brain.
9.2 Human intelligence is solely determined by genetics and thus immutable.
9.3 A high intelligence quotient (IQ) guarantees success in life.


2 Myths - scope and definition of artificial intelligence

The field of artificial intelligence (AI) was formally founded during a workshop in Dart-
mouth in 1956. However, there was substantial disagreement regarding the name of the
field. Proponents of the term artificial intelligence emphasized its marketing appeal, which
competing suggestions like complex information processing could not offer. This is one of
the reasons why the term AI ultimately prevailed. It is also the cause of much confusion about
the definition of AI. The confusion also manifests itself in various myths and misconceptions,
which I discuss in this section.

Myth 2.1:
Artificial intelligence, machine learning, and deep learning are the same.

Classification: factually incorrect, misleading.


Discussion/Reality: This myth arises because the terms are often used interchangeably in
practice. However, while all three concepts are related to the idea of machines performing
tasks that require intelligence, they involve different techniques:
• Artificial intelligence (AI) is the entire technological field aimed at simulating intelligent behavior in machines.
• Machine learning (ML) is a subset of AI in which machines learn from data/experience.
In a nutshell, machine learning is always based on a mathematical function that
computes some prediction from given input data. This mathematical function is
flexible in the sense that it has parameters that are adapted to the data during the
learning phase.
• Deep learning (DL) in turn is a subset of machine learning in which artificial neural
networks are used as mathematical functions. They consist of artificial neurons that
have many connections. The strengths of these connections are usually subject to
learning. Because there are many connections, deep learning often requires large
amounts of data to learn the connection strengths and other parameters reliably.
Deep learning is at the core of most of today’s most powerful AI systems. However, for some
problems, insufficient data limits the use of deep learning. Therefore, smaller models and
techniques from classical machine learning are still important.
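The core idea above, a flexible mathematical function whose parameters are adapted to the data during learning, can be made concrete with a minimal sketch. This is an illustrative toy example (a one-dimensional linear model fitted by gradient descent), not a description of any particular AI system:

```python
# A minimal illustration of machine learning as a parameterized function:
# predict(x) = w * x + b, where w and b are adapted to the data.

def predict(x, w, b):
    """The flexible mathematical function with parameters w and b."""
    return w * x + b

def train(data, lr=0.01, epochs=1000):
    """Adapt the parameters to the data by gradient descent on squared error."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            error = predict(x, w, b) - y
            w -= lr * error * x   # nudge each parameter to reduce the error
            b -= lr * error
    return w, b

# Toy data following y = 2x + 1; learning approximately recovers the pattern.
data = [(0, 1), (1, 3), (2, 5), (3, 7)]
w, b = train(data)   # w ends up close to 2, b close to 1
```

A deep learning model follows the same pattern, except that the function is an artificial neural network with many more parameters, which is why it typically needs much more data.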

Myth 2.2:
AI is whatever has not been done yet (the AI effect).

Classification: controversial, progress-sensitive.


Discussion/Reality: The AI effect is the phenomenon that once a task is successfully per-
formed by AI technology and the corresponding capability moves into general applications,
people no longer perceive it as AI. The task becomes less impressive, and we tend to take
the new capability for granted. In other words, AI is often defined by what it cannot do,
rather than what it has achieved. The AI effect results from the ever-changing landscape
of AI and a constant redefinition of what constitutes true artificial intelligence.

Myth 2.3:
AI is whatever is labeled AI.

Classification: factually incorrect.


Discussion/Reality: This myth acts as a counterpart to the AI effect. In reality, not
everything that is called AI actually is AI. This may be due to a lack of understanding on
the part of those who use the term. However, because of its marketing appeal, AI is also
used as a label to sell a wide range of applications and products. For example, processes that
operate on fixed decision rules are not very intelligent and yet are sometimes labeled as AI.
Task automation often falls into this category because it involves the execution of predefined
workflows, such as robots performing repetitive actions or the automatic generation of email
notifications. Another example is classic statistics. It deals with the collection, analysis,
and interpretation of data. However, it does not involve learning or adaptation, so it cannot
be considered AI (compare Myth 2.7). Mislabeled products may still serve their purpose,
but the problem is that they can create unrealistic expectations.
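The difference between a fixed-rule process and a system that learns can be illustrated with a toy sketch (the spam-filter rule below is entirely hypothetical):

```python
# A process built on fixed decision rules (a hypothetical spam filter):
# every decision follows a hard-coded if/else, so nothing is learned
# or adapted from data.
def rule_based_filter(subject):
    if "free money" in subject.lower():
        return "spam"
    return "not spam"

# Its behavior never changes, no matter how many examples it processes.
# Calling such a system "AI" is a labeling choice, not a property of it.
print(rule_based_filter("FREE MONEY inside!"))  # spam
print(rule_based_filter("Meeting at noon"))     # not spam
```

Contrast this with a machine-learning system (compare Myth 2.1), whose behavior is shaped by the data it is trained on rather than by rules fixed in advance.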

Myth 2.4:
AI = shiny humanoid robots.

Classification: factually incorrect, misleading.


Discussion/Reality: This myth suggests that AI is synonymous with human-like robots
that can walk, talk, and think like humans. This is a misconception that has been popularized by the media and science fiction movies (for example, Terminator, I, Robot, Ex
Machina, or the TV series Westworld). We are a species that tends to see human traits
in everything. This is a phenomenon known as anthropomorphism. In reality, many AI
systems work unseen in the background. While humanoid robots are one application of AI,
they are not the most common form of AI technology. Further, some robots only automate
simple physical processes following a set of sequential instructions. Such robots do not use
AI technology at all.

Myth 2.5:
AI is magic.

Classification: misleading.
Discussion/Reality: It is natural for people to be fascinated by new technologies like AI.
When someone refers to AI as magic, it may reflect a perceived inscrutability of AI. While
the statement is probably not meant literally in most cases, it can be misleading. People
might get the impression that AI can solve all problems effortlessly (see Myth 3.3) or without
human involvement or input (see Myths 4.3 and 4.5). This can lead to disappointment when
the reality is that AI still requires significant human efforts to function properly. Indeed,
AI systems are the result of a purely technological engineering process that usually involves
many humans.

Myth 2.6:
AI is all algorithms.

Classification: factually incorrect, misleading.

Discussion/Reality: Algorithms are step-by-step procedures for solving problems or performing computations. They are fundamental components of AI systems. For example,
specific algorithms are used to prepare data, to adapt the parameters of an AI model, or
to monitor an AI system. After all, AI models are algorithms themselves: They compute
certain outputs from given input data (compare Myth 2.1). Nevertheless, reducing AI to
just algorithms is overly simplistic. AI is a multidisciplinary field in which the quality and
quantity of data are at least equally important. Many products that involve AI also require
the design of appropriate user interfaces. Finally, the field of AI research intersects with
many other fields and their techniques (for example, neuroscience).

Myth 2.7:
AI is just superficial statistics.

Classification: controversial, misrepresentative, misleading.

Discussion/Reality: Statistics is the mathematical discipline concerned with the collection, organization, analysis, interpretation, and presentation of data. It is one of the early
building blocks of the field of AI. However, AI encompasses a much broader set of tech-
niques, including learning and adaptation. This allows AI to be useful in domains where
simple classic statistics may fall short (for example, computer vision, natural language pro-
cessing, and reasoning). Thus, reducing AI to classic statistics does not do justice to the
complexity of AI technology.

However, much of the controversy embodied in this myth may stem from a different source:
The complex and philosophical question of whether an AI system actually understands what
it is processing. An AI system may mimic understanding based on its ability to efficiently
compress statistics about its training data. If, when, and how AI systems can come close
to human-like understanding is a matter of debate. It might even require some kind of
conscious awareness of meaning (see Myth 6.5). Ultimately, however, the practical value
of an AI system should depend on the outcomes it achieves, not on whether or not it has
human-like understanding.

3 Myths - AI capabilities and characteristics
The myths and misconceptions in this section result from the fact that AI capabilities are
both over- and underestimated. The considerations in this section will therefore lead to a
more realistic picture of AI capabilities.

Myth 3.1:
AI works perfectly (it is always accurate, fair, and unbiased).

Classification: factually incorrect, misleading.


Discussion/Reality: AI systems can be very accurate, but they are not infallible. For ex-
ample, they can be affected by errors and biases in their training data and algorithms. As a
consequence, AI systems must be rigorously tested to ensure that they perform morally and
as intended. When they do, they are often more accurate and less biased than humans [2].
Nevertheless, users should not be overconfident about AI predictions in practice.

Myth 3.2:
AI can predict the future.

Classification: misrepresentative, misleading.


Discussion/Reality: In certain scenarios, AI algorithms can accurately predict future
events based on patterns in historical data. For example, they may identify trends or
dependencies among variables. However, any AI algorithm that attempts predictions of the
future has several key limitations: (1) the data it has been trained on during development,
(2) implicit and explicit assumptions made by developers, and (3) general uncertainty due
to unpredictable factors and variables. Consequently, no AI system can perfectly foresee
the future (see also Myth 3.1). Nevertheless, AI predictions can provide valuable foresight.
They can aid in decision making when interpreted with a cautious and critical mindset.
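The third limitation, unpredictable factors, can be illustrated with a deliberately naive toy forecaster (all numbers below are made up for illustration):

```python
# A naive trend model: assume the last observed change simply continues.
def predict_next(series):
    return series[-1] + (series[-1] - series[-2])

history = [10, 12, 14, 16, 18]      # a steady +2 trend in the historical data
forecast = predict_next(history)    # 20, plausible only if the trend holds

# An unforeseen disruption is invisible to any pattern-based model:
actual = 5                          # hypothetical sudden break in the trend
surprise = abs(forecast - actual)   # large error despite a "reasonable" model
```

Real forecasting models are far more sophisticated, but they share the same fundamental constraint: they can only extrapolate patterns present in their data and assumptions.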

Myth 3.3:
AI can be used anywhere and can solve any problem.

Classification: factually incorrect, misleading, progress-sensitive.


Discussion/Reality: There are attempts to solve almost every problem with AI today.
Not all of them are successful. Therefore, it is important to understand the limitations of AI
technology [6]. AI algorithms can be susceptible to small changes in the task or input data,
they can amplify biases from the training data, and they can consume enormous resources
(energy, computing hardware, and so on). In addition, AI cannot always adequately replace
human intelligence and judgment.
Given these limitations, the development of AI solutions should follow a holistic assessment
of limitations and potential impacts on individuals, society, and the environment. There
are usually alternatives if AI approaches do not seem appropriate. However, if AI is not
suitable for a problem today, it may be tomorrow. The boundaries of AI technology are
constantly changing. For example, we are already seeing progress toward general-purpose
AI [7]. Perhaps, this myth may eventually become true (see Myth 6.1).

Myth 3.4:
AI automatically accounts for pre-established facts.

Classification: factually incorrect, misrepresentative, misleading.


Discussion/Reality: AI systems do not automatically comprehend things that seem obvious to humans. For example, they may have trouble with contextual and causal understanding, common sense, and the laws of physics. Generally, they may have difficulty
recognizing or inferring "obvious" information that goes beyond the inherent knowledge of
their training data. To mitigate this limitation, AI systems need to be trained with suitable
algorithms and relevant data.

Myth 3.5:
AI lacks creativity.

Classification: controversial, misrepresentative.


Discussion/Reality: Creativity has traditionally been considered an exclusively human
trait. The ability of AI algorithms to process and analyze massive amounts of information
has changed that. Advanced AI systems no longer simply replicate their training data.
They can recombine learned patterns and information. This often results in unique outputs
that transcend conventional thinking. For example, today’s generative AI systems can
compose high-quality music [1], poetry [7], and visual artworks [10]. AI is now pushing the
boundaries of creative work, inspiring us to redefine our understanding of what it means to
be original and inventive.

Myth 3.6:
AI lacks empathy and emotional intelligence.

Classification: controversial, misrepresentative, progress-sensitive.


Discussion/Reality: AI systems do not experience emotions like humans. However, they
can be designed to recognize and respond to emotional cues much as a human might.
This can be considered a form of empathy or emotional intelligence. Still, emotional
intelligence is also about navigating the complex intricacies of social interactions, which
remains a challenge for most AI systems. Nevertheless, AI systems can already assist, and
they are getting better as progress is made.

Myth 3.7:
AI is inherently good (or bad).

Classification: factually incorrect, misleading.


Discussion/Reality: AI technology comprises a landscape of tools. As such, it is neutral
and has no inherent ethical or moral value. It is the way in which AI is used by humans
that can be considered good or bad. Therefore, any developer or provider of an AI system
should consider the ethical implications of its development and use.
Examples of the good-bad dichotomy of AI.
(1) facial recognition. Facial recognition technology can be used for security and law
enforcement purposes, but also enables mass surveillance in undesirable contexts [33].
(2) autonomous driving. Autonomous cars can save lives by reducing human error, but
they can also cause accidents due to programming errors or inability to deal with
unforeseen circumstances [26].
(3) information feed. AI algorithms in social media can quickly disseminate relevant
information and identify hate speech. But, they can also create information silos that
reinforce human confirmation bias. More generally, they can be misused to enable
propaganda and widespread misinformation [17; 34].

Myth 3.8:
AI algorithms work like the human brain.

Classification: factually incorrect, misleading.


Discussion/Reality: The human brain relies on learning and reasoning processes that
simple mathematical algorithms cannot replicate (see also Myth 3.9). The perception that
AI algorithms work like the human brain is an example of anthropomorphism, particularly
the ELIZA effect [21]: the tendency to unconsciously assume that computers behave anal-
ogously to humans. Nevertheless, AI technology has drawn inspiration from the human
brain. Examples include the neurons in artificial neural networks, hierarchical network
structures, and reinforcement learning [37]. Even now, there may be much more to learn
from biological brains [40].

Myth 3.9:
AI makes computers think.

Classification: controversial, misleading.


Discussion/Reality: This myth is closely related to Myth 3.8: Most would consider thinking to be a fundamental process in the human brain. Unfortunately, there is currently
no universally accepted definition of human thinking [14]. However, it is generally
associated with cognitive activities such as reasoning, problem solving, and decision making. Other common associations with human thinking are that it is driven by intrinsic
motivations and intentions. In addition, consciousness is often considered essential to it.
Current AI systems are far from resembling human thought processes [14]. For example,
human thinking processes human-specific information from incoming stimuli (perception
and emotional responses) and internal mental representations (memory and acquired ex-
periences). At the same time, today’s most advanced AI systems fall short of human-like
understanding (see also Myth 2.7). They also most likely lack consciousness (see Myth 6.5).
This is not necessarily bad because the goal is usually not to make computers think like
humans. AI is not about creating human replicas. It is about leveraging technology to solve
complex challenges and enhance human potential.

Myth 3.10:
AI systems have agency.

Classification: controversial, misleading, progress-sensitive.


Discussion/Reality: Agency is the ability to control one’s actions and resulting conse-
quences. Media headlines often suggest that AI systems have this property, for example,
A robot wrote this entire article. Are you scared yet, human? [23]. In addition to being
sensationalist, such headlines hide the underlying human agency, in this case, that humans
prompted the text-generating model with the key elements for the article. In reality, no
matter how complex an AI system is, there is always some human agency: The owner and
associated data scientists are responsible for the use of AI in the first place, the decisions
made during development, and the deployment of the AI system. There tends to be a
blindness to such human contributions and decisions [19].

4 Myths - AI system design and operation


For businesses and organizations, unrealistic expectations about AI application develop-
ment are a major barrier. They can lead to misunderstandings in project teams, especially
between business managers and technical staff. However, technical and non-technical stake-
holders should be able to communicate effectively [32]. Therefore, this section clarifies
specific misconceptions that can affect AI project teams.

Myth 4.1:
AI systems are easy to build and anyone can do it.

Classification: factually incorrect, misleading.


Discussion/Reality: Building sophisticated AI systems requires a significant investment
of time, skill, and resources. It is not a solitary task. Rather, it requires a collaborative
effort of various experts, including data scientists, engineers, and domain specialists. This
has not changed even though AI tools and frameworks have become more accessible and
user-friendly: Some AI development tools now enable easy drag-and-drop style development.
They offer predefined workflows, models, and algorithms [16]. However, it is still necessary
to understand the underlying principles: Otherwise, it is hard to make informed decisions
and troubleshoot problems as they arise. Insufficient awareness and careless use may lead
to untrustworthy applications and even to ethical problems.
Challenges in building AI systems.
(1) Gathering and preprocessing vast amounts of data (quality refinement, labeling, etc.).
(2) Integrating AI systems with existing infrastructure and workflows.
(3) Balancing model accuracy with time and computational resource constraints.
(4) Ensuring robustness and reliability in real-world scenarios.
(5) Overcoming the black-box problem: interpretable and explainable outcomes.
(6) Adapting to rapidly evolving research and technology.
(7) Addressing ethical considerations (for example, biases, data privacy, societal impact).
(8) Navigating legal and regulatory frameworks surrounding AI.
(9) Acquiring and retaining specialized talent in the field.

Myth 4.2:
To improve an AI system, just ’throw’ more data at it.

Classification: misrepresentative, misleading.


Discussion/Reality: Data quantity is fundamental for training AI models. However, im-
proving overall performance requires a holistic approach that goes beyond simply increasing
the data volume. There are several other generic improvement strategies:
• ensuring high data quality, for example, by making corrections or removing biases,
• modifying the AI model architecture, for example, by choosing a different class of
artificial neural networks,
• adjusting the methods used for fitting the model parameters to the data (training).
The first strategy avoids unreliable and untrustworthy results caused by noise and confusion
from low-quality training data. The last two strategies correspond to algorithm refinement.
The appropriate strategy or combination of strategies depends on the desired effect. The
following overview shows that not all improvements have to aim at better system accuracy
or efficiency.
Potential avenues for improving AI systems.
(1) Enabling human-AI collaboration to leverage the strengths of both.
(2) Generalizing AI models via transfer learning (that is, building on prior systems).
(3) Implementing real-time adaptation to new situations.
(4) Improving resilience against malicious manipulation from the outside.
(5) Increasing security by addressing potential system vulnerabilities.
(6) Enhancing user experience and interface design.
(7) Improving transparency (for example, by using explainability techniques).
(8) Leveraging more or better domain knowledge.

Myth 4.3:
AI systems learn autonomously and without human programming.

Classification: misrepresentative, misleading, progress-sensitive.


Discussion/Reality: This myth embodies the misconception that AI systems somehow
learn by themselves if only their goal is specified. In reality, training AI systems typically
involves humans. They select and prepare data, AI model architectures, and training algo-
rithms. This underscores the general importance of human programming: Dedicated human
effort puts machine-learning algorithms in a position where they can be most effective
at extracting patterns and insights from data.
It is possible to automate certain aspects of the learning process. For example, AutoML
techniques are intended to ease AI development by providing predefined workflows, models,
and low-code interfaces [16]. However, most current automated learning has fundamental
limitations, including the inability of AI systems to make major changes to their own
architectures. In fact, AI systems are often frozen after an initial training period. If they
are updated, then a human intervention is the most likely cause (see Myth 4.4).
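The human choices described above can be summarized in a short sketch. Every step and name here is illustrative (a trivial "mean model" stands in for a real architecture); this is not a real AutoML API:

```python
# Hedged sketch of the human decisions behind "autonomous" learning.

def build_ai_system(raw_data):
    # 1. Humans select and prepare the data (here: drop missing values).
    data = [x for x in raw_data if x is not None]
    # 2. Humans choose the model architecture (here: a trivial mean model)
    # 3. ... and the training algorithm (here: computing the average).
    parameter = sum(data) / len(data)

    # 4. The trained model is typically frozen until a human updates it.
    def model(query):
        return parameter  # same answer regardless of the query
    return model

model = build_ai_system([1, 2, None, 3])
```

Even when tools automate individual steps, the choices encoded in each step remain human decisions.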

Myth 4.4:
AI systems automatically improve over time.

Classification: misrepresentative, misleading, progress-sensitive.


Discussion/Reality: Most AI systems do not automatically improve over time. The rea-
sons are similar to those already given in the clarification of Myth 4.3: Any improvement
or update to an AI system usually requires active human involvement. Humans may be
responsible for retraining the AI model, updating algorithms and architectures, or modify-
ing the data used for training. In principle, however, automation is possible. For example,
automated retraining could be triggered when performance is degrading. In addition, there
are special learning paradigms aimed at continuous learning: Online learning updates AI
models based on a continuous stream of new data [18]. Reinforcement learning is used to
adjust the behavior of AI agents based on reward signals for their past actions [37]. However, none of these continuous learning methods is trivial to implement robustly. As a
result, deliberate human intervention is likely to remain important for many
AI systems, especially when it comes to monitoring them during their operation.
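To illustrate the online-learning paradigm mentioned above, the following sketch (plain Python; the data stream, model, and learning rate are hypothetical choices for illustration) updates a one-parameter model one example at a time instead of retraining it from scratch:

```python
# Online learning sketch: update a model incrementally from a data stream
# instead of retraining from scratch. All numbers here are illustrative.

def sgd_step(w, x, y, lr=0.01):
    """One stochastic-gradient step for the model y_hat = w * x
    under the squared-error loss 0.5 * (w * x - y) ** 2."""
    gradient = (w * x - y) * x
    return w - lr * gradient

w = 0.0  # initial weight of the one-parameter model
# A stream whose examples follow y = 2x; in practice this would arrive live.
stream = [(float(x), 2.0 * x) for x in range(1, 6)] * 50
for x, y in stream:
    w = sgd_step(w, x, y)

print(round(w, 3))  # the weight converges toward the true slope 2.0
```

Even in this toy setting, choices such as the learning rate must be made and monitored by a human, which is one reason why robust continuous learning is not trivial.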

Myth 4.5:
AI systems operate without human intervention.

Classification: misrepresentative, misleading, progress-sensitive.

Discussion/Reality: During its operational phase, an AI system usually still requires
human oversight and maintenance to ensure that it keeps working as intended. Humans
can play a central role in correcting errors, handling unforeseen circumstances, and updating
the AI system (see Myth 4.4). Humans can also review critical decisions in areas such as
healthcare, law enforcement, or hiring. In fact, many AI systems are designed to enhance
and support human decision making, rather than replace it.
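The oversight described here is often implemented as a confidence gate: the system decides on its own only when it is sufficiently certain and defers to a human otherwise. A minimal sketch (Python; the threshold and labels are illustrative assumptions, not from the text):

```python
# Human-in-the-loop sketch: act automatically only on confident predictions,
# route everything else to a human reviewer. The 0.9 threshold is illustrative.

def decide(prediction, confidence, threshold=0.9):
    """Return ('auto', prediction) when the model is confident enough,
    otherwise ('human_review', prediction) to flag the case for a person."""
    if confidence >= threshold:
        return ("auto", prediction)
    return ("human_review", prediction)

print(decide("approve", 0.97))  # confident case: handled automatically
print(decide("approve", 0.62))  # uncertain case: deferred to a human
```

In areas such as healthcare or hiring, the routed cases would land in a review queue where a human makes the final call.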

5 Myths - the more imminent impact of AI technology

AI technology has an enormous transformative potential for society and humanity as a
whole. Unsurprisingly, its impact is intensively debated. This section discusses some of the
major controversies and misconceptions in this debate.

Myth 5.1:
AI will only affect routine and manual jobs.

Classification: factually incorrect, misleading, progress-sensitive.

Discussion/Reality: It is true that AI technology has the potential to automate repetitive
and simple tasks traditionally performed by humans. However, it reaches far beyond simple
automation: AI is increasingly being integrated into complex decision-making processes,
data analysis, and customer service. In addition, creative work is undergoing a transforma-
tion as generative AI is getting better at creating music, art, and literature [1; 7; 10]. This
demonstrates that AI is not only impacting routine and manual jobs (see Myth 5.2 for a
continued discussion).

Examples of what AI can do beyond repetitive and manual tasks.
(1) data analysis. Analyze, predict, discover, generate insights, monitor in real time.
(2) personalization. Recommend (content, products, entertainment, etc.), give advice
(financial, educational, goal-setting, etc.), support personalized healthcare.
(3) automation. Improve complex processes, increase productivity.
(4) creative endeavors. Generate (text, code, images, video, art), improve search and
access to information.
(5) decision support. Recommend, assist in decision making and strategizing.
(6) innovation. Solve complex problems, discover, design.


Myth 5.2:
AI will take away our jobs and replace humans.

Classification: controversial, progress-sensitive.


Discussion/Reality: It is often claimed that by eliminating undesirable work, AI will free
up time for more fulfilling tasks. However, many workers do not share this expectation.
Fears of job loss are real and AI technology appears to be taking these fears to a new
level. AI technology is likely to affect all jobs and industries in the long run, especially
when it overcomes its current limitations (see the examples below Myth 3.3). This does
not mean that all jobs will disappear. At present, it seems unlikely that AI will replace
all humans in the labor market. However, it is clear that AI is challenging us to develop
new skills and adapt to new ways of working. If we accept the challenge, we can enhance
our capabilities and productivity at both the individual and societal levels [2]. We can
even expect AI technology to create new jobs and roles (for example, prompt design for
generative AI systems).

Myth 5.3:
AI technology makes us stupid.

Classification: controversial.
Discussion/Reality: This myth portrays a dystopian scenario in which machines take
over our daily mental tasks. Our brains are rendered thoughtless. Inherent knowledge is
made obsolete by instant access to information through touch-screen interfaces. The
essence of human
wisdom fades away and society gradually deteriorates in all aspects: physically, mentally,
and spiritually. It becomes an idiocracy [28]. In a contrasting utopian scenario, AI frees up
our time and mental energy by automating simple tasks. We keep the routines that we like.
Beyond that, we are free to choose how much of our time we spend on creative and intellectual
endeavors. Thanks to AI technology, we can efficiently interact with existing information
and stimulate our imagination with AI-generated content. In this scenario, AI effectively
enhances our intelligence.
Our future trajectory is likely to lie somewhere between these two scenarios. We can
influence it by recognizing that we, as individuals, have choices regarding our use of AI
and other modern technologies. It is a question of our mindset whether we are in danger
of becoming too dependent on technology and losing our critical minds. Education has an
important role to play in this regard [8; 28; 41]. After all, the impact of AI technology
depends on how we integrate it into our lives.

Myth 5.4:
AI destroys our privacy.

Classification: controversial, misrepresentative.


Discussion/Reality: The traditional concept of privacy is that of physical privacy (the
right to be left alone). Current AI-related privacy concerns go beyond this traditional

notion. They are fueled by the ability of AI to mine the vast amounts of user-generated data
from digital services. Specialized AI algorithms can make increasingly accurate predictions
about individuals. These predictions are monetizable because they allow, for example,
targeted advertising and product recommendations. More and better data can improve
the underlying AI algorithms. Therefore, data is now driving entire business models for
companies like Google and Facebook. These companies offer their services seemingly for
free, but behind the scenes they collect data. Users pay for "free" digital services with their
data, often with questionable consent. The user essentially becomes the product [27].
The use of personal data concerns informational privacy. However, by manipulating choices
and behavior, AI algorithms can also cause violations of decisional privacy (the right to make
free choices) and behavioral privacy (the right to act as one wishes). Overall, AI technology
clearly has the potential to interfere with individual privacy rights. Therefore, we should
establish reasonable boundaries (see Myth 5.5) [2]. Meanwhile, AI approaches are already
used to improve network security, where systems adapt to attacks and malware. This shows
that ultimately, the impact of AI on privacy is not inherently bad or good - it depends on
where and how AI is used (see Myth 3.7).

Myth 5.5:
AI cannot or should not be regulated.

Classification: controversial, progress-sensitive.


Discussion/Reality: Despite the potential benefits of AI, it can affect us in unintended
and poorly understood ways. Major risks associated with AI stem from malicious uses (for
example, bioterrorism, misinformation, mass surveillance), military and corporate AI races,
and AI agents that autonomously pursue dangerous goals [17]. While there is agreement
that such risks need to be addressed, the potential role of regulation is controversial.
Some may wonder whether a rather opaque and rapidly evolving technology like AI can be
regulated at all. Indeed, the underlying code of an AI system may be difficult to scrutinize.
However, the impact on society also depends on the users and controllers of an AI system,
their intentions, and the way people are affected. All of these factors lend themselves as
targets for regulation [24].
Next, some may argue that AI should not be regulated, fearing that regulation will stifle
innovation and erode competitive advantage. An important underlying misconception here
is that AI technology is always good (see Myth 3.7). Technology should not be devel-
oped without constraints if it can undermine human rights (for example, microtargeting of
voters and facial recognition) or be otherwise dangerous (for example, lethal autonomous
weapons [17; 35]). Because not all technological progress is desirable, reasonable regulation
can help to prevent harmful outcomes while fostering beneficial progress.
Complementary to regulation: General strategies to mitigate AI risks.
(1) Promoting AI literacy among the general public and policy makers: fostering awareness
about AI risks and critical skills for assessing AI systems [25].


(2) Conducting risk assessments and regularly updating them as technology evolves.
(3) Coordinating and forming agreement among a large base of international stakeholders
on common AI safety standards (this ensures a uniform impact on the competitive landscape).
(4) Establishing ethical guidelines and codes of conduct (these alone will not be enough [23]).
(5) Developing AI systems with built-in safety measures [17].
(6) Establishing mechanisms for continuous monitoring of safety-relevant AI systems (for
example, audits and feedback loops).

6 Outlook - speculations about the future of AI

This collection of myths would not be complete without a discussion of the highly speculative
and controversial claims about the future of AI. Expert opinions on these matters differ
widely. Clearly, no one knows what the future of AI will actually look like. Therefore,
long-term predictions should always be taken with some skepticism. This also means that,
for now, we need to keep an open mind and consider all possibilities. The purpose of this
section is therefore to introduce some interesting points and perspectives that may serve as
starting points for further exploration.
To prepare the discussions, let us define some terms. First, general intelligence describes
the ability to achieve virtually any goal (including learning). Artificial general intelligence
(AGI) refers to the ability of a non-biological system to accomplish any cognitive task at
least as well as humans (non-biological general intelligence). We use the term artificial
superintelligence (ASI) to refer to general intelligence that is far beyond the human level.

Myth 6.1:
Artificial general intelligence/superintelligence is coming soon.

Classification: controversial, progress-sensitive.


Discussion/Reality: Let us first discuss whether it is possible to build artificial general in-
telligence (AGI) or superintelligence (ASI) at all. From a physical point of view, intelligence
is just a special kind of information processing, which in turn is just moving elementary
particles around [38]. The laws of physics allow much more than we can do with technology
today. Therefore, it is probably possible to build highly intelligent machines. After all, the
existence of humans shows that general intelligence is physically realizable. It is another question
whether we will do it.
It is difficult to predict whether AI systems will ever replicate the full spectrum of human
intelligence. This spectrum does not only encompass cognitive abilities, but also emotional
intelligence and other unique qualities. AI has already demonstrated superhuman abilities in
specific task domains, such as image and speech recognition, language comprehension, and
various computer games [20]. The list keeps growing. There is also a trend toward general-
purpose AI systems. For example, OpenAI’s GPT-4 shows exceptional language mastery

that allows it to tackle complicated challenges in various domains such as mathematics,
coding, vision, medicine, law, and even psychology. Given the breadth and depth of these
capabilities, some have concluded that we already have an early AGI version, albeit an
incomplete one [7].

This brings us to the question of the time scale. Once AGI is reached, it may take only
a brief moment to also reach ASI, especially if an intelligence explosion should take place
(see Myth 6.2). AI researchers vary widely in their estimates of when we will have the
first AGI/ASI systems. Some say that it will take only a few decades or even years, others
estimate centuries, and still others believe it will never happen. Time estimates always need
to be treated with caution. The problem with them is that there may be several peaks to
climb, but from where we are, we can only see the next one clearly.

Myth 6.2:
An intelligence explosion will cause a technological singularity.

Classification: controversial, progress-sensitive.

Discussion/Reality: This myth contains two concepts: First, a technological singularity
describes a hypothetical future point in time when technological growth becomes uncontrol-
lable and irreversible, leading to unpredictable changes in human civilization. Second, an
intelligence explosion describes the hypothetical phenomenon of recursive self-improvement
of an AI system that rapidly leads to artificial superintelligence (ASI) and thereby to a
technological singularity [15]. For the discussion, we examine two factors that determine
the speed of intelligence improvements: the first is optimization power (how much effort
is applied to improve a system’s intelligence?), and the second is called recalcitrance (how
difficult is it to improve the system?) [5].
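Bostrom [5] condenses the interplay of these two factors into a simple rate equation (rendered here in LaTeX notation as a sketch of the argument; the symbols are an illustrative shorthand, not from the text above):

```latex
% Rate of change of a system's intelligence I over time t:
% growth is fast when much optimization power meets low recalcitrance.
\frac{dI}{dt} = \frac{\text{optimization power}}{\text{recalcitrance}}
```

An intelligence explosion then corresponds to the regime where optimization power keeps growing while recalcitrance stays low or falls, so that the rate of improvement itself accelerates.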

First, optimization power will likely keep increasing. This is supported by several trends, in-
cluding improving hardware, the increasing availability of data that AI systems can absorb,
and a growing number of talented people who seek to improve AI. Applied optimization
power will also likely be high during any eventual transition to ASI. Initially, this could
be because humans try harder to improve a promising AI system. Later, if an AI sys-
tem should eventually become capable of designing further improvements itself, effort and
progress might accelerate to digital speeds [5].

The counterforce to optimization power is recalcitrance, that is, resistance and barriers to
progress. Some arguments point to a reduction in such barriers: For example, efforts to
improve general-purpose systems may be streamlined as people work less on task-specific
systems [5]. But there are also strong counterarguments, such as the need for testing [3].
Testing is necessary whenever a change is made to a system, otherwise one cannot be sure
that previously existing capabilities have not been lost: Without proper testing, a system
cannot be trusted. Next, it is not clear whether future dynamics will encourage project
teams to build AI systems with extensive self-improvement capabilities at all. In any case,
the possibility of an intelligence explosion will remain a live topic of debate.


Myth 6.3:
Artificial superintelligence will form a singleton (world government).

Classification: controversial, progress-sensitive.


Discussion/Reality: The formation of a singleton is another hypothetical scenario in
which a single superintelligent AI entity gains unprecedented control over the world. Pro-
ponents argue that a singleton might arise due to the advantage of unified and better
decision making. It could avoid conflicts between competing human and AI entities.
A singleton government would be a continuation of the historical trend towards greater
coordination over increasingly large distances [38]. This trend is mainly driven by the po-
tential mutual benefits from exchanging goods. The magnitude of these benefits co-evolves
with advancements in both transportation and communication technologies, where the lat-
ter makes the necessary coordination more efficient. For example, the formation of multi-
cellular organisms relied on cell signaling to neighbors, animals could develop thanks to their
circulatory and nervous systems, and human tribes and empires were facilitated by human
language and other innovations. Globalization is merely the most recent manifestation in
this continued trend of hierarchical expansion [38]. As transportation and communication
technology will likely keep improving, so will the trend for increased coordination.
There is, however, an opposing force that favors decentralization: the inefficiency of coordi-
nating over large distances. One reason is that large distances generate latencies in spreading
information. For example, an earth-sized AI brain could have truly global "thoughts" only
about as fast as a human brain, despite its digital processing power [38].
In summary, the idea of a singleton forming remains speculative. Nevertheless, we should
try to steer its potential arrival, making sure that it would be beneficial for humanity. This
is a hard problem that depends as much on our preferences as it does on many societal,
ethical, and technical factors that have yet to unfold.

Myth 6.4:
Artificial superintelligence will cause human extinction.

Classification: controversial, progress-sensitive.


Discussion/Reality: Some scientists have expressed concern that artificial superintelli-
gence (ASI) could lead to human extinction. This could be because an ASI might perceive
us as a burden on resources, disapprove of our reckless management of the planet, or see us
as a threat to achieving its goal. For example, it may expect too much human resistance to
its actions, or it may fear our potential use of hydrogen bombs.
Ultimately, the actions of an ASI would depend on its goals and motivations. Here, even
if we manage to equip an ASI with initial goals that seem aligned to ours, things could
go wrong [5]: For example, an ASI may find a way to satisfy the criteria of its goal that
violates the intentions of the programmers (perverse instantiation). An ASI could also
accidentally transform large parts of the reachable universe into infrastructure in service of
its goal (infrastructure profusion), including the conveniently allocated atoms of humans.

The arrival of an AI powerful enough to make the previous considerations real may be a
long way off (see Myth 6.1). However, if there is even a small chance that there will be
such powerful AI, we would do well to prepare for it and influence the outcome in a positive
direction [5; 17; 29; 38]. On the other hand, while we cannot neglect the existential threats
posed by ASI, we should not exaggerate them either. Much effort should also be devoted
to current challenges that do not presuppose ASI. This includes the prevention of malicious
AI use, the avoidance of AI races, and the safe design of autonomous AI agents (see Myth 5.5).

Myth 6.5:
AI will become conscious.

Classification: controversial, progress-sensitive.


Discussion/Reality: Consciousness is still a poorly understood phenomenon with many
competing definitions, including sentience, wakefulness, and self-awareness. A broad defini-
tion is subjective experience, which is not limited to existing biological consciousnesses.
Consciousness gives meaning to an otherwise dark and unaware universe: Conscious life
can appreciate what is happening. But do we want conscious non-biological intelligences?
Some people prefer unconscious machines because this frees them from moral considerations.
It is also not clear whether consciousness would add anything to the competencies that we
expect from our AI tools and helpers. Therefore, some argue that creating conscious AI
agents should be generally avoided. On the other hand, if at some point it should become
possible to upload human minds into machines (see Myth 6.6), people may desire these
machines to be conscious in order to preserve their subjective experience.
Whether we prefer conscious machines or not, we need to understand the conditions un-
der which consciousness might emerge in AI systems. More generally, it is largely unclear
how matter can become conscious. One theory is that consciousness may be an emergent
phenomenon of matter: It depends not only on the particles, but also on their configu-
ration [38]. This would be analogous to wetness, which is an emergent property of matter
when its molecules are arranged so that it enters a liquid state. Overall, however, we do not
yet have much empirical evidence. Consciousness remains largely a mystery, so we need fur-
ther scientific and philosophical exploration. For example, traditional measures of human
consciousness (behavioral or neural correlates of consciousness [39]) do not generalize well
to machines. This makes new methods for measuring consciousness necessary.

Myth 6.6:
AI will make us immortal.

Classification: controversial, progress-sensitive.


Discussion/Reality: There are two possible paths to immortality: a biological and a dig-
ital one. On the biological side, AI could catalyze and drive advances in medical research,
including personalized treatment. It may become possible to repair or replace malfunction-
ing cells, tissues, and organs. This could be facilitated, for example, by nano-sized robots


that perform precise tasks at the cellular or molecular level [13]. In addition, a better under-
standing of the aging process could lead to treatments that slow down or reverse age-related
decline. Finally, advances in biotechnology and genetic engineering could lead to biological
enhancements that significantly improve human health and longevity.
The second possibility is digital immortality. It could be achieved by transforming someone’s
mind (consciousness, memories, etc.) into a digital form. This process is called mind
uploading. It would create a digital replica, thus granting theoretical immortality in a digital
state. Mind uploading should be feasible because minds are essentially specific arrangements
of atoms and neurons whose dynamics can be computed and therefore simulated. However, there
are some essential technologies that have yet to be developed [5]: First, brains must be
reliably scanned, then the brain structure has to be reconstructed from the scans, and
finally, the mind has to be implemented as a simulation on a sufficiently powerful computer
(brain emulation). Taken together, it appears to be a long way to achieve mind uploading.
Powerful AI systems could help solve some of the remaining challenges.
The prospect of immortality is a complex philosophical and ethical challenge. In addition
to overcoming biological limitations, it raises existential questions that must be carefully
addressed. These include ethical concerns regarding the impact of immortality on overpop-
ulation, resource scarcity, and societal dynamics.

7 Conclusion
According to the historian Arthur Schlesinger, science and technology revolutionize our
lives, but memory, tradition, and myth frame our response. Indeed, it is quite clear that
AI technology will be increasingly integrated into our lives. The how of this integration
depends largely on our perception of AI technology, that is, on individual and cultural
backgrounds, myths, and popular conceptions and misconceptions. All these elements act
as anchoring points in our space of ideas [29]. Given the significant influence of these factors
on our future with AI, it seems reasonable to try to put them on a realistic footing.
I, like many others [4; 25; 28; 41], think that education has an important role to play in
this regard. Education can help to promote AI literacy and awareness, reduce anxiety,
and address misconceptions. This collection and discussion of AI myths is written with
these goals in mind. I was strongly motivated by the prospect of contributing to a more
constructive and inclusive dialogue between different stakeholders, including researchers,
policy makers, industry experts, ethicists, and community representatives.
Despite all the promises that AI technology holds for society, we must remain aware of its
subtle but significant costs. These include resource consumption and the fact that some
people may feel left behind (see Myth 5.2). More generally, it means that we need to devote
resources to solving current challenges, such as preventing AI misuse (see Myth 5.5). It
also means that we need to take a long-term view and look for global solutions. The race
is on for better AI technology and artificial general intelligence. At this point, no one can
be sure where it will take us. Nevertheless, we should work together to figure out what we
want the AI transformation to look like. It will give us more control over the outcome. So
let’s keep talking and shape the AI transformation for the best.

References

[1] Andrea Agostinelli, Timo I Denk, Zalán Borsos, Jesse Engel, Mauro Verzetti, Antoine
Caillon, Qingqing Huang, Aren Jansen, Adam Roberts, Marco Tagliasacchi, Matt Shar-
ifi, Neil Zeghidour, and Christian Frank. MusicLM: Generating music from text. arXiv
preprint arXiv:2301.11325, 2023. (Cited on pages 9 and 14.)

[2] Robert D Atkinson. ’It’s going to kill us!’ and other myths about the future of
artificial intelligence. Information Technology & Innovation Foundation, 2016. (Cited
on pages 2, 8, 15, and 16.)

[3] Peter Bentley. The three laws of artificial intelligence: Dispelling common myths.
Should we fear artificial intelligence, pages 6–12, 2018. (Cited on pages 2 and 18.)

[4] Arne Bewersdorff, Xiaoming Zhai, Jessica Roberts, and Claudia Nerdel. Myths, mis-
and preconceptions of artificial intelligence: A review of the literature. Computers and
Education: Artificial Intelligence, page 100143, 2023. (Cited on pages 2 and 21.)

[5] Nick Bostrom. Superintelligence. Dunod, 2017. (Cited on pages 18, 19, 20, and 21.)

[6] Meredith Broussard. Artificial unintelligence: How computers misunderstand the
world. MIT Press, 2018. (Cited on page 8.)

[7] Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric
Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al.
Sparks of artificial general intelligence: Early experiments with GPT-4. arXiv preprint
arXiv:2303.12712, 2023. (Cited on pages 9, 14, and 18.)

[8] Nicholas Carr. Is Google making us stupid? Teachers College Record, 110(14):89–94,
2008. (Cited on page 15.)

[9] François Chollet. On the measure of intelligence. arXiv preprint arXiv:1911.01547,
2019. (Cited on page 27.)

[10] Florinel-Alin Croitoru, Vlad Hondru, Radu Tudor Ionescu, and Mubarak Shah. Diffu-
sion models in vision: A survey. IEEE Transactions on Pattern Analysis and Machine
Intelligence, 2023. (Cited on pages 9 and 14.)

[11] Constance de Saint Laurent. In defence of machine learning: debunking the myths
of artificial intelligence. Europe’s journal of psychology, 14(4):734, 2018. (Cited on
page 2.)

[12] Ian J Deary. Intelligence: A very short introduction, volume 39. Oxford University
Press, USA, 2020. (Cited on page 27.)

[13] Eric Drexler. Engines of creation: the coming era of nanotechnology. Anchor, 1987.
(Cited on page 21.)


[14] Frank Emmert-Streib, Olli Yli-Harja, and Matthias Dehmer. Artificial intelligence:
A clarification of misconceptions, myths and desired status. Frontiers in artificial
intelligence, 3:524339, 2020. (Cited on pages 1, 2, 10, and 11.)

[15] Irving John Good. Speculations concerning the first ultraintelligent machine. In Ad-
vances in computers, volume 6, pages 31–88. Elsevier, 1966. (Cited on page 18.)

[16] Xin He, Kaiyong Zhao, and Xiaowen Chu. AutoML: A survey of the state-of-the-art.
Knowledge-Based Systems, 212:106622, 2021. (Cited on pages 12 and 13.)

[17] Dan Hendrycks, Mantas Mazeika, and Thomas Woodside. An Overview of Catastrophic
AI Risks. arXiv preprint arXiv:2306.12001, 2023. (Cited on pages 10, 16, 17, and 20.)

[18] Steven CH Hoi, Doyen Sahoo, Jing Lu, and Peilin Zhao. Online learning: A compre-
hensive survey. Neurocomputing, 459:249–289, 2021. (Cited on page 13.)

[19] Deborah G Johnson and Mario Verdicchio. Reframing AI discourse. Minds and Ma-
chines, 27:575–590, 2017. (Cited on page 11.)

[20] Douwe Kiela, Max Bartolo, Yixin Nie, Divyansh Kaushik, Atticus Geiger, Zhengxuan
Wu, Bertie Vidgen, Grusha Prasad, Amanpreet Singh, Pratik Ringshia, et al. Dyn-
abench: Rethinking benchmarking in NLP. arXiv preprint arXiv:2104.14337, 2021.
(Cited on page 17.)

[21] Seo Young Kim, Bernd H Schmitt, and Nadia M Thalmann. Eliza in the uncanny
valley: Anthropomorphizing consumer robots increases their perceived warmth but
decreases liking. Marketing letters, 30:1–12, 2019. (Cited on page 10.)

[22] Shane Legg and Marcus Hutter. Universal intelligence: A definition of machine intel-
ligence. Minds and machines, 17:391–444, 2007. (Cited on page 27.)

[23] Daniel Leufer. Why we need to bust some myths about AI. Patterns, 1(7):100124,
2020. (Cited on pages 1, 2, 11, and 17.)

[24] Daniel Leufer, Alexandra Steinbrück, Zuzana Liptakova, Kathryn Mueller, and Rachel
Jang. AI Myths. https://www.aimyths.org/, 2020. Accessed: 2023-07-01. (Cited on
page 16.)

[25] Duri Long and Brian Magerko. What is AI literacy? Competencies and design consid-
erations. In Proceedings of the 2020 CHI conference on human factors in computing
systems, pages 1–16, 2020. (Cited on pages 16 and 21.)

[26] Carl Macrae. Learning from the failure of autonomous and intelligent systems: Acci-
dents, safety, and sociotechnical sources of risk. Risk analysis, 42(9):1999–2025, 2022.
(Cited on page 10.)

[27] Karl Manheim and Lyric Kaplan. Artificial intelligence: Risks to privacy and democ-
racy. Yale JL & Tech., 21:106, 2019. (Cited on page 16.)

[28] Janusz Morbitzer. Into Idiocracy–pedagogical reflection on the epidemic of stupidity
in the generation of the internet era. Zeszyty Naukowe Wyższej Szkoly Humanitas.
Pedagogika, (17):125–137, 2018. (Cited on pages 15 and 21.)

[29] Roberto Musa Giuliano. Echoes of myth and magic in the language of artificial intel-
ligence. AI & society, 35(4):1009–1024, 2020. (Cited on pages 2, 20, and 21.)

[30] Ulric Neisser, Gwyneth Boodoo, Thomas J Bouchard Jr, A Wade Boykin, Nathan
Brody, Stephen J Ceci, Diane F Halpern, John C Loehlin, Robert Perloff, Robert J
Sternberg, and Susana Urbina. Intelligence: knowns and unknowns. American psy-
chologist, 51(2):77, 1996. (Cited on page 27.)

[31] Frank G Nussbaum. A comprehensive review of AI myths and misconceptions. 2023.
doi: 10.13140/RG.2.2.28098.15049. (Cited on pages 1 and 2.)

[32] Frank G Nussbaum. Successful communication of complex information. ResearchGate
preprint, 2023. doi: 10.13140/RG.2.2.21088.87048/1. URL https:
//www.researchgate.net/publication/370751075_Successful_Communication_
of_Complex_Information. (Cited on pages 2 and 11.)

[33] Inioluwa Deborah Raji, Timnit Gebru, Margaret Mitchell, Joy Buolamwini, Joonseok
Lee, and Emily Denton. Saving face: Investigating the ethical concerns of facial recog-
nition auditing. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and
Society, pages 145–151, 2020. (Cited on page 10.)

[34] Ulrike Reisach. The responsibility of social media in times of societal and political
manipulation. European Journal of Operational Research, 291(3):906–917, 2021. (Cited
on page 10.)

[35] Jonathan Roberge, Marius Senneville, and Kevin Morin. How to translate artificial
intelligence? Myths and justifications in public discourse. Big Data & Society, 7(1):
2053951720919968, 2020. (Cited on pages 2 and 16.)

[36] Robert J Sternberg. Handbook of human intelligence. Cambridge University Press,
1982. (Cited on pages 1 and 27.)

[37] Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction. MIT
press, 2018. (Cited on pages 10 and 13.)

[38] Max Tegmark. Life 3.0: Being human in the age of artificial intelligence. Vintage,
2018. (Cited on pages 1, 2, 17, 19, 20, and 26.)

[39] Giulio Tononi and Christof Koch. Consciousness: here, there and everywhere? Philo-
sophical Transactions of the Royal Society B: Biological Sciences, 370(1668):20140167,
2015. (Cited on page 20.)

[40] Anthony Zador, Blake Richards, Bence Ölveczky, Sean Escola, Yoshua Bengio,
Kwabena Boahen, Matthew Botvinick, Dmitri Chklovskii, Anne Churchland, Claudia
Clopath, et al. Toward next-generation artificial intelligence: catalyzing the NeuroAI
revolution. arXiv preprint arXiv:2210.08340, 2022. (Cited on page 10.)


[41] Brahim Zarouali, Natali Helberger, and Claes H De Vreese. Investigating algorith-
mic misconceptions in a media context: Source of a new digital divide? Media and
Communication, 9(4):134–144, 2021. (Cited on pages 15 and 21.)

8 Glossary of terms and how they are used

AI agent. An AI system that pursues goals more or less autonomously by determining
and performing subtasks and actions on its own.
AI effect. The phenomenon that once a task is successfully performed using AI technology,
it is often no longer referred to as AI (see Myth 2.2).
AI model. A specific mathematical function that computes predictions/outputs from given
data/inputs. Different AI models can have the same architecture and differ just in the values
of their model parameters.
Algorithm. Step-by-step procedures for solving problems or performing computations (see
Myth 2.6).
Anthropomorphism. The attribution of human-like characteristics and behaviors to
machines/AI (more generally, to all kinds of non-human objects and entities).
(Model) architecture. The structure of the mathematical function that underlies an AI
model.
Artificial intelligence (AI). Used in different contexts (see the Introduction): refers to
a field of research, but also to a machine capability (non-biological intelligence).
Artificial general intelligence (AGI). The ability to accomplish any cognitive task at
least as well as humans (see Myth 6.1).
Artificial neural network. A model architecture consisting of artificial neurons that
use their inputs (from incoming connections) to compute outputs (for outgoing connections).
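A single artificial neuron can be sketched in a few lines of Python; the ReLU activation and the example weights below are illustrative choices, not part of the definition:

```python
def neuron(inputs, weights, bias):
    # Weighted sum over the incoming connections, followed by a
    # ReLU nonlinearity (outputs below zero are clipped to zero).
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return max(0.0, s)

# 1.0*0.5 + 2.0*(-0.25) + 0.1 = 0.1; ReLU leaves positive values unchanged.
output = neuron([1.0, 2.0], [0.5, -0.25], 0.1)
```

A network chains many such neurons into layers, feeding the outputs of one layer into the next as inputs.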
Artificial superintelligence (ASI). General intelligence far beyond human level (see
Myth 6.1).
Black-box problem. Refers to the inscrutability of AI models, which can cause trust
issues and outcomes that are not interpretable (see also explainable AI).
Complex information processing. An alternative name for the field of AI.
Consciousness. Having subjective experience (see Myth 6.5).
Deep learning. A subset of machine learning that encompasses algorithms that imple-
ment large artificial neural networks and adapt them by learning from vast amounts
of data (see Myth 2.1).
Evaluation. The process of testing how well an AI model works. Synonym: testing.
Explainable AI. A property of an AI system that allows humans to intellectually verify
outcomes/predictions (also refers to the methods to achieve this).

Foundational model. A general-purpose AI model that can perform much more than
just a single specific task.
General intelligence. The ability to achieve virtually any goal, including the ability to
learn [38].
Generative AI. A class of AI techniques aimed at creating novel content, for example,
images, videos, and text.
Labeling. A step that is sometimes necessary during the preparation of training data.
Synonym: annotating.
Intelligence. Goal-directed adaptive behavior (but alternative definitions exist, see Sec-
tion 9).
Intelligence explosion. Recursive self-improvement of an AI system that rapidly leads
to artificial superintelligence. Implies a technological singularity (see Myth 6.2).
Machine learning. A subset of AI technology that encompasses algorithms that learn
from data/experience, enabling predictions and data-driven decisions (see Myth 2.1).
Mind uploading. The process of transforming someone’s mind (consciousness, memories,
etc.) into a digital form (see Myth 6.6).
Moore’s law. The observation that computing power (more specifically, the number of
transistors in an integrated circuit) doubles about every two years.
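The doubling rule compounds quickly. As a rough illustration (assuming a clean two-year doubling period), a decade corresponds to a 32-fold increase:

```python
def growth_factor(years, doubling_period=2):
    # Factor by which computing power grows after `years`,
    # doubling once every `doubling_period` years.
    return 2 ** (years / doubling_period)

print(growth_factor(10))  # 32.0 (five doublings in ten years)
```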
Narrow AI. A task-specialized AI system with limited capabilities.
Large language models. A relatively new class of foundational AI models that are very
capable of translating between languages. They power many modern AI chatbots.
Learning. See training.
(AI model) parameters. The free variables of an AI model whose values are adapted to
the training data during the learning phase.
Reinforcement learning. A learning paradigm in which AI agents adjust their behavior
based on reward signals which they receive for their actions.
Representation. A fundamental concept in AI that involves creating models and struc-
tures to represent information and knowledge such that intelligent systems can use it.
(Classic) statistics. The branch of mathematics that deals with the collection, analysis,
and interpretation of data.
(Technological) singularity. A hypothetical future point in time when technological
growth becomes uncontrollable and irreversible, leading to unpredictable changes in human
civilization (see Myth 6.2).
(Human) thinking. A term to summarize human cognitive activities such as reasoning,
problem solving, and decision making (see Myth 3.9).
Training. The process of adapting a model (for example, the parameters of a neural
network architecture) to the training data. Synonyms: learning, teaching, optimization.
Training data. The data that an AI system learns from (in a machine-learning setting).
Transfer learning. A method to reduce the training effort for a new task. It reuses the
model parameters from an existing AI model for a different (but similar) task.
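As a minimal sketch (using hypothetical parameter names and values; real frameworks expose similar "state dicts"), transfer learning amounts to copying most parameters from the existing model and re-initializing only the task-specific part:

```python
# Parameters of a model already trained on the original task (hypothetical values).
pretrained = {"layer1": [0.3, -0.7], "layer2": [1.2], "head": [0.05]}

# Reuse all parameters except the task-specific output head,
# which is re-initialized and then trained on the new (similar) task.
new_model = dict(pretrained)
new_model["head"] = [0.0]
```

Because most parameters start from useful values instead of random ones, the new task typically needs far less training data and compute.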


9 Appendix: Myths about human intelligence

There is no universally accepted definition of intelligence because it is a complex and
multifaceted trait. Selected attempts to define intelligence include goal-directed adaptive
behavior [36], an agent's ability to achieve goals in a wide range of environments [22],
and the ability to understand complex ideas, to learn from experience, to engage in
various forms of reasoning, and to overcome obstacles by taking thought [30].

Myth 9.1:
Human intelligence only resides in the brain.

Classification: controversial.
Discussion/Reality: Human intelligence is often associated with the brain and cognitive
abilities, such as learning, reasoning, and problem solving. However, human intelligence is
also influenced by genetics and environmental factors (see Myth 9.2). Intelligence can also
be understood as a joint function of the mind (brain), sensorimotor modalities (perceptual
and motoric abilities), and environment (constraints on the space of possible actions) [9].

Myth 9.2:
Human intelligence is solely determined by genetics and thus immutable.

Classification: factually incorrect.
Discussion/Reality: Intelligence is not solely determined by genetics, but also influenced
by environmental factors such as nutrition, upbringing, education, and various life experi-
ences. It has been shown that individuals who have been exposed to intellectually stimu-
lating environments tend to develop higher cognitive abilities than those who have not [12].
Intelligence can be influenced and enhanced throughout a person’s life.

Myth 9.3:
A high intelligence quotient (IQ) guarantees success in life.

Classification: factually incorrect, misleading.
Discussion/Reality: On an individual level, success can have different meanings, such as
making a difference, feeling a sense of progress, or more generally achieving personal goals
(including non-materialistic ones). In none of these spheres can a high IQ score be considered
the sole ingredient for success. There are always other factors involved, including emotional
intelligence, motivation, and external factors such as access to resources, opportunities, and
support networks. IQ tests fail to measure most of these factors because they only focus on
specific cognitive abilities such as logical reasoning and pattern recognition.

