

ANALYSIS OF IMPLICATION OF

ARTIFICIAL INTELLIGENCE (AI) IN ROBOTICS

By
AFIFA NOOR (Roll No. 1685)
2019-GCUF-05090

Thesis submitted in partial fulfillment of


the requirements for the degree of

MASTER OF SCIENCE
IN
COMPUTER SCIENCE

DEPARTMENT OF COMPUTER SCIENCE


GOVERNMENT COLLEGE UNIVERSITY, FAISALABAD

August 2021
DECLARATION
The work reported in this thesis was carried out by me under the supervision of Dr.
Ramzan Talib, Department of Computer Science, GC University, Faisalabad, Pakistan.
I hereby declare that the title of the thesis, ANALYSIS OF IMPLICATION OF
ARTIFICIAL INTELLIGENCE (AI) IN ROBOTICS, and the contents of the thesis are
the product of my own research, and no part has been copied from any published source
(except the references and standard mathematical or genetic models/equations/formulas/
protocols, etc.). I further declare that this work has not been submitted for the award of
any other degree/diploma. The University may take action if the information provided is
found inaccurate at any stage.

Signature of the Student/Scholar


Name: Afifa Noor
Registration No: 2019-GCUF-05090

CERTIFICATE BY SUPERVISORY COMMITTEE
We certify that the contents and form of the thesis submitted by Miss Afifa Noor,
Registration No. 2019-GCUF-05090, have been found satisfactory and in accordance with
the prescribed format. We recommend it to be processed for evaluation by the External
Examiner for the award of the degree.
Signature of Supervisor ………………….
Name: ……………………………………..
Designation with Stamp……………………….
Co-Supervisor (if any)
Signature ………………………………….
Name: ……………………………………..
Designation with Stamp……………………….
Member of Supervisory Committee
Signature ………………………………….
Name: ……………………………………..
Designation with Stamp……………………….
Member of Supervisory Committee
Signature ………………………………….
Name: ……………………………………..
Designation with Stamp……………………….
Chairperson
Signature with Stamp……………………………
Dean / Academic Coordinator
Signature with Stamp……………………………

DEDICATED

TO
My Graceful and Polite PARENTS

&
All Family Members
Who live in my mind and soul
Whose love is more precious
Than pearls and diamonds
Who are those whom I say my own
Whose love will never change
Whose prayers will never die

ACKNOWLEDGEMENTS
I am very thankful to ALLAH ALMIGHTY, The Most Beneficent and The Most
Merciful. I am thankful from the core of my heart to ALLAH's last Messenger,
HAZRAT MUHAMMAD (S.A.W), who was sent as a mercy to the worlds and a source
of knowledge for the people.
I feel proud to express my gratitude to my honest, sincere, and hardworking supervisor,
Dr. Ramzan Talib, Professor, Department of Computer Science, Government College
University Faisalabad. He used his intelligence and skills in a perfect way to enable me to
write this thesis in a meaningful way, and he always guided me in a sympathetic manner
throughout the research work.
I feel highly privileged to take this opportunity to express my heartiest gratitude and deep
sense of indebtedness to my worthy supervisory committee, Dr. Ramzan Talib, Nafees
Ayub, and Sheikh Muhammad Aamir, Department of Computer Science, GC University,
Faisalabad, for their kind and scholarly guidance, keen interest, and constant
encouragement.
Words are essential to convey thoughts and thanks, yet words are impossible to find to
thank my father and whole family for their prayers and encouragement for us and for
our work.
Finally, I apologize if I have caused anger or offence to anybody; the errors that remain
in the manuscript are mine alone.

Afifa Noor

Table of Contents
CHAPTER 1..........................................................................................................................1

INTRODUCTION................................................................................................................1

1.1 Robotics......................................................................................................................1

1.2 Artificial intelligence and machine learning..............................................................2

1.3 Automation.................................................................................................................2

1.4 Disentangling artificial intelligence, robotics, and automation..................................3

1.5 Distinction from information and communication technology..................................3

1.6 Future research directions for organizational scholars:..............................................4

CHAPTER 2............................................................................................................12

REVIEW OF LITERATURE..........................................................................................12

CHAPTER 3........................................................................................................44

METHODOLOGY.............................................................................................................44

CHAPTER 4.......................................................................................................................48

RESULTS AND DISCUSSION........................................................................................48

4.1: How can AI be dangerous?......................................................................48

4.3: Why the recent interest in AI safety?....................................................49

4.4: Topics Covered by AI100.......................................................................................50

4.5 Robotics and AI........................................................................................52

4.6 Programming Languages of AI.................................................................55

4.7 Machine Vision........................................................................................................58

4.8 Impact of Machine Vision........................................................................................58

4.9 Ethical and Legal Questions of AI.............................................................61

4.9.1 Ethical issues in Artificial intelligence..................................................................61

4.9.2 Threat to Privacy:..................................................................................................61

4.9.3: Threats to security and weaponization of AI........................................................62

4.9.4 Economics and Employment Issues......................................................................63

4.9.5 Human Bias in Artificial Intelligence....................................................................63

4.10 Legal Issues and Questions of AI..............................................................64

4.11: Civil Rights for AI and Robots..............................................................................65

4.12 Limitations and Opportunities of AI........................................................65

4.13 Intelligence as a multi-component model:.............................................................66

4.14 Large datasets and hard generalization:.................................................................66

4.15 Black box and a lack of interpretation:...................................................................66

4.16 Robustness of AI:...................................................................................................66

CHAPTER 5............................................................................................................68

Conclusion and recommendation.......................................................................................68

REFERENCES...................................................................................................................70

Abstract
As information becomes more readily accessible, people will increasingly depend on
AI systems to live, work, and entertain themselves. AI systems will be used in a
growing number of industries, including banking, energy, manufacturing, education,
transportation, and public services, as their accuracy and sophistication improve. The
era of augmented intelligence is expected to be the next stage of AI. Will AI bring in a
revolution that advances toward a superintelligence surpassing all human intelligence?
The beginnings of AI, its development over the past 60 years, and related subfields
such as machine learning, computer vision, and the rise of deep learning are all
covered in this article. It presents a coherent understanding of AI's many seasons.
Alongside AI's unparalleled popularity, there are concerns regarding the technology's
influence on society. To ensure that society as a whole benefits from AI's development
and that its possible negative impact is diminished from the start, a clear plan must
be created that considers the accompanying ethical and legal issues. In order to do this,
the paper examines the ethical and legal problems surrounding AI, including privacy,
jobs, legal accountability, civil rights, and the unlawful use of AI for military objectives.

CHAPTER 1
INTRODUCTION
Studies of artificial intelligence and robotics base their theory and analysis on the
constructs of robotics, artificial intelligence and machine learning, and automation. In
this body of literature, the use of robotics, artificial intelligence, and machine learning
technologies can appear both as an independent and as a dependent variable: as a
dependent variable to examine factors that encourage or discourage the adoption and use
of these technologies, and as an independent variable to see how the use of these
technologies impacts a variety of outcomes, such as effects on labor, productivity,
growth, and firm organization. It is important that organizational scholars carefully
define any such constructs in their studies and avoid confusing these related but distinct
concepts. The definitions below are meant to be a helpful first step in such an endeavor.
Artificial intelligence (AI) is a term used to describe a branch of research that aims to
give machines the ability to execute activities such as logic, reasoning, planning,
learning, and perception. Although this definition mentions "machines," it may be
extended to any form of living intelligence. Intelligence itself may also be extended to
include an interleaved collection of capacities, such as creativity, emotional knowledge,
and self-awareness, as found in primates and other exceptional creatures. The field of
"symbolic AI," which was prominent until the end of the 1980s, was intimately
associated with the word AI. Sub-symbolic methods such as neural networks, fuzzy
systems, evolutionary computation, and other computational models then began to gain
prominence in order to overcome some of the limits of symbolic AI, resulting in the
term "computational intelligence" developing as a subfield of AI. Nowadays, AI refers
to the whole idea of a machine that is intelligent in terms of both operational and
societal implications. Russell and Norvig proposed a practical definition: "Artificial
intelligence is the study of artificially replicating human intelligence and activities that
display a reasonable amount of logic in their design." [1].

1.1 Robotics
The International Federation of Robots (IFR), an international industrial group focused on
commercial robotics, defines an industrial robot as an “automatically controlled,
reprogrammable, multipurpose manipulator, programmable in three or more axes, which
can be either fixed in place or mobile for use in industrial automation
applications.” While this definition is a starting point, other roboticists may differ on
dimensions such as whether a robot must be automatically controlled or could be
autonomous or whether a robot must be reprogrammable. At a broader level, any machine
that can be used to carry out complex actions or tasks in an automatic manner may be
considered a robot.

1.2 Artificial intelligence and machine learning


Similar to robotics, artificial intelligence is a construct with varying definitions and
potentially broad interpretations. For starters, it is useful to distinguish between general
and narrow artificial intelligence (Broussard 2018). “General artificial intelligence” refers
to computer software that can think and act on its own; nothing like this currently exists.
“Narrow artificial intelligence” refers to computer software that relies on highly
sophisticated, algorithmic techniques to find patterns in data and make predictions about
the future. In this sense, the software “learns” from existing data and hence is sometimes
referred to as “machine learning” but this should not be confused with actual learning.
Broussard (2018) writes that “machine ‘learning’ is more akin to a metaphor…: it means
that the machine can improve at its programmed, routine, automated tasks. It doesn’t
mean that the machine acquires knowledge or wisdom or agency, despite what the
term learning might imply [p. 89].”
Many applications of machine learning focus on prediction and estimation of unknowns
based on a given set of information (Athey 2018; Mullainathan and Spiess 2017). There
are a variety of algorithms that can be used for this machine learning. Some of these
techniques are relatively straightforward uses of logit models which would be familiar to
most organizational scholars, whereas others involve highly sophisticated algorithms that
attempt to mimic how a human brain looks for patterns in data (the latter are called
“neural networks”). Artificial intelligence technology can be used for a variety of
purposes, including playing abstract strategy games such as chess or Go; playing real-
time Atari video games such as Asterix or Crazy Climber; image or street-number
recognition; natural language translation; and many other uses.
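
As a concrete sketch of the narrow, prediction-focused machine learning described
above, the following Python example (assuming the scikit-learn library is installed; the
data is synthetic) trains a logistic-regression classifier, one of the relatively
straightforward techniques mentioned, and evaluates its predictions on held-out examples:

# Narrow AI as prediction: learn patterns from labeled data, predict unseen cases.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for real organizational data.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)  # "learning" = fitting weights
print("Held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))

A neural network differs mainly in the model class: it stacks many such weighted
transformations to capture more complex patterns in the data.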

1.3 Automation
Automation refers to the use of largely automatic, likely computer-controlled, systems
and equipment in manufacturing and production processes that replace some or all of the
tasks that previously were done by human labor. Automation is not a new concept, as
innovations such as the steam engine or the cotton gin can be viewed as automating
previously manual tasks. One of the concerns for scholars in this area revolves around
how and in what contexts increased use of robotics and artificial intelligence technology
may lead to increased automation, and the impact that this form of increased automation
may have on the workforce and the design of organizations.

1.4 Disentangling artificial intelligence, robotics, and automation


While artificial intelligence, robotics, and automation are all related concepts, it is
important to be aware of the distinctions between each of these constructs. Robotics is
largely focused on technologies that could be classified as “manipulators” as per the IFR
definition, and accordingly, more directly relates to the automation of physical tasks. On
the other hand, artificial intelligence does not require physical manipulation, but rather
computer-based learning. The distinction between the two technologies can become
fuzzier as applications of artificial intelligence may involve robotics or vice versa. For
example, “smart robots” are robots that integrate machine learning and artificial
intelligence to continuously improve the robots’ performance.
Both artificial intelligence and robotics technologies are capable of automation. However,
an open question is how and whether the effects of automation may differ across the two
technologies. Some scholars contend that computerization and the increased use of
artificial intelligence have the potential to automate certain non-routine tasks compared to
the more rote tasks previously subjected to automation (Frey and Osborne 2017; Autor et
al. 2006). Accordingly, it is possible that technologies incorporating artificial intelligence
may be able to automate far more tasks than pure robotics-based technologies.
Importantly, even though a technology such as artificial intelligence or robotics may
automate some of the tasks previously done by human labor, it does not necessarily imply
that the human has been automated out of a job. In many cases, a computer or robot may
be able to complete relatively low-value tasks, freeing up the human to focus efforts
instead on high-value tasks. In this sense, artificial intelligence and robotics
may augment the work done by human labor.

1.5 Distinction from information and communication technology


In addition to the distinction across the concepts of robotics, artificial intelligence, and
automation, we additionally draw readers’ attention to the contrast between artificial
intelligence and robotics, and computerization and information technologies more
generally. Similarly to robotics and artificial intelligence, information and communication
technology (ICT) has been of interest to researchers and policymakers with regards to
both its potential to increase productivity and its ability to affect labor (e.g., Autor et
al. 2003; Bloom et al. 2014; Akerman et al. 2015). However, while artificial intelligence
and robotics may reduce the cost of storing, communicating, and transmitting information
much like ICT, they are distinct. ICT can refer to any form of computer-based
information system (Powell and Dent-Micallef 1999), while artificial intelligence and
robotics may be computer-based but are not necessarily information systems. This
distinction can be especially difficult to navigate given the broadness and variation in the
definitions used for robotics and artificial intelligence in the literature. Again, we urge
organizational scholars to carefully define any of these constructs in their studies.
“As AI systems' capabilities and complexity grow, they will be used in a larger range of
industries, including finance, healthcare, energy, manufacturing, education,
transportation, and public services. The next step of AI is the era of augmented
intelligence, which seamlessly coordinates human and machine intellect.”

1.6 Future research directions for organizational scholars:


There are a number of topics related to robotics, artificial intelligence, and automation
that would benefit from research by organizational scholars. For example, the popular
press tends to associate artificial intelligence and robotics with substitution, in part
because of an assumption that productivity gains are at the expense of labor. The
evidence does not support this conclusion, however. For example, Furman and Seamans
(2019) show that there is no correlation between a country’s labor force and its
productivity, and Autor and Salomons (2018) show that while productivity growth may
have a negative employment effect on the sector that experiences the growth, this is more
than made up for by employment gains in related sectors.
More generally, there are reasons to expect that artificial intelligence will have
complementary effects on labor. This has been the case for prior episodes of automation
—for example, Bessen (2015) describes how the adoption of ATMs by banks
was associated with an increase in bank employment, and early evidence suggests it will
be the same for artificial intelligence. Bessen et al. (2018) provide survey evidence that
software sold by artificial intelligence startups is designed in most cases to augment the
work that humans do. According to their findings, artificial intelligence startups are most
likely to provide technology that helps their customers “make better predictions or
decisions”, “manage and understand data better”, and “gain new capabilities to improve
services or provide new products.” It is notable that these are all related to management
and strategy. Given the dramatic impact that these technologies could have on labor and
society, it is vitally important to have a clear understanding of the relationship between
artificial intelligence, robotics, and labor. This is one area that we believe would greatly
benefit from research by organizational scholars, who are adept at describing mechanisms
affecting the organization of work.
There are a variety of other questions surrounding artificial intelligence and robotics that
we encourage organizational scholars to turn to. One topic that has yet to be explored in
much detail surrounds the establishment and firm-level consequences for adoption of
artificial intelligence and robotics technology. Research could examine performance
consequences as well as outcomes related to firm organization and strategy. Scholars can
study in what circumstances and in what kinds of firms such adoption has the greatest
impact. Additionally, adoption of these technologies within a firm may have
consequences for the adopting firm as well as other firms in the industry, including firms
upstream and downstream from the focal firm. The adoption of the technology itself can
be viewed as an outcome, and scholars can examine what circumstances and factors
encourage or discourage the use of these technologies. Certain industries, management
styles, or organizational forms may be particularly quick to adopt, and market level forces
may also impact the adoption decision. Industry and organizational factors may play a
role as well as the backgrounds of individuals and managers within organizations. Greater
work can be done to identify what factors contribute to adoption and differential effects
once technology is adopted.
Further, more specific to management scholars, we need a detailed understanding about
how artificial intelligence and robotics affect the nature of work. This includes not only
how artificial intelligence and robotics change a given type of work or occupation (for
example, by changing the relative importance of skills and tasks required for an
occupation), but also how artificial intelligence and robotics affect the way in which
individuals interact with each other in the workplace. That is, we suspect that these
technologies will change the type of work that we do, and also how that work is designed
and organized as part of a larger production system.
To put this into perspective, in the early 2000s, online communication allowed for the
creation of “virtual teams” (Jarvenpaa and Leidner 1999). Organizational scholars have
highlighted many of the ways in which virtual teams need to be managed differently than
non-virtual teams (e.g., Gilson et al. 2015; Kirkman and Mathieu 2005). Relatedly, we
believe that a deeper understanding of how artificial intelligence is affecting workplace
organization will help inform some of the economic studies of the effect of artificial
intelligence on labor. More broadly, artificial intelligence and robotics are likely to
substitute for labor in some cases, but complement labor in other cases. A better
understanding of how work is done in the future will help inform conditions under which
we can expect these technologies to be complementary to labor and when we should
expect labor substitution.
The adoption and use of artificial intelligence and robotics technology also raises
important questions with policy implications. Researchers can begin to examine the
distributional effects of technology adoption across different demographics and regions.
Feldman and Kogler (2010) show that industries, and even occupations within industries,
tend to be geographically clustered. Because of that, the consequences of artificial
intelligence and robotics may be far more pronounced in some geographies compared to
others. In addition to industry- and occupation-based differences, other factors may
influence a company’s ability to take advantage of these technologies. For example, these
new technologies may have significant implications for entrepreneurs. Entrepreneurs may
lack knowledge of how best to integrate robotics with a workforce and often face
financing constraints that make it harder for them to adopt capital-intensive technologies.
In the case of artificial intelligence, entrepreneurs may lack data sets on customer
behavior, which are needed to train artificial intelligence systems.
In the case that artificial intelligence and robotics do substitute for labor in certain
industries or occupations, the labor market may look dramatically different from how it
does now, and significant work will need to be done to help prepare the next generation
of workers to adapt to the new environment. There will be a need to evaluate what skills
and tasks are still valuable in the labor market compared to skills and tasks that can now
be fully automated. This calls for a greater understanding of the worker experience in
firms and occupations affected by artificial intelligence and robotics to craft appropriate
worker education, job training, and re-training programs.

Artificial intelligence (AI) and robotics have become increasingly hot topics in the press
and in academia. In October 2017, Bloomberg published an article claiming that artificial
intelligence is likely to be the “most disruptive force in technology in the coming decade”
and warning that firms that are slow to embrace the technology may risk extinction.
Similarly, the following month, the Financial Times declared that the “robot army” is
transforming the global workplace. This interest is likely due to the rapid gains that
artificial intelligence has been making in some applications, such as image recognition
and abstract strategy games, and that advanced robotics has been making in labs, even
though widespread commercial applications may be lagging (Felten et al. 2018).
Scholars have been increasingly interested in the economic, social, and distributive
implications of artificial intelligence, robotics, and other types of automation. For
example, over the past 2 years, economists at the University of Toronto have convened
conferences around the economics of artificial intelligence, which have been attended by
a dazzling array of economics scholars from diverse points of view, including Nobel Prize
winners Edmund Phelps, Paul Romer, Joseph Stiglitz,
and others. There are a number of well-attended conferences for legal, manufacturing,
technical, and general-interest communities such as the World Conference on Robotics
and Artificial Intelligence, WeRobot, and AI Now.
Organizational scholars are a bit late to the game and have only just started to focus on
the organizational implications of artificial intelligence, robotics, and other types of
advanced technologies. However, as we describe in this primer, we believe that these
technologies present a unique opportunity for organizational scholars. Periods of great
technological change can bring about great progress but also great turmoil. For example,
while the steam engine led to great economic growth (see, e.g., Crafts 2004) it also led to
job displacement. It is important for organizations to understand and anticipate the effects
that artificial intelligence, robotics, and other types of automation may have, and design
themselves accordingly. While many lessons can be drawn from prior episodes of
automation, it is possible that artificial intelligence and robotics may have unique
consequences. Differences from prior episodes of automation include that (1) the nature
of business activity has shifted dramatically over the past decade such that many
businesses now rely on platform (i.e., 2-sided market) business models, (2) artificial
intelligence is likely to affect white-collar workers more so than blue-collar workers
(while perhaps robotics may affect blue-collar workers more than white-collar workers),
and (3) artificial intelligence may affect the links between establishments and firms (e.g.,
monitoring and firm scope).
This article is a primer on artificial intelligence, robotics, and automation. To begin, we
provide definitions of the constructs and describe the key questions that have been
addressed so far. We discuss implications of these technologies on organizational design,
then describe areas in which organizational scholars can make substantial contributions to
our understanding about how artificial intelligence and robotics are affecting work, labor,
and organizations. We also describe ways in which organizational scholars have been
using artificial intelligence tools as part of their research methodology. Finally, we
conclude with a call for more research in this fertile area.
When the first calculating machines were created, from Babbage's mechanical
calculator to Torres-Quevedo's electromechanical calculator, the computer was born. The
beginnings of automata theory can be traced back to the so-called "codebreakers"
of World War II. Without knowing the positions of the rotors, the number of operations
required to decrypt the German trigrams on the Enigma machine proved too
difficult to work through manually.
Marvin Minsky, John McCarthy, and two senior researchers from IBM, Claude Shannon
and Nathan Rochester, first coined the term AI in 1956. The name "Artificial Intelligence"
was initially adopted as the field's moniker during this conference. The Dartmouth meeting
ushered in a new age of research and an unbridled pursuit of new knowledge.
Most people regarded the computer programmers of the period as "astonishing";
they solved algebraic problems, proved geometric theorems, and learned to speak
English.

From 1980 to 1987, corporations used AI programmes known as "expert systems," and
knowledge acquisition became the major focus of AI research. With its fifth-generation
computing project, the Japanese government started a large financing effort on AI at the
same time. The work of John Hopfield [2] and David Rumelhart [3] helped to resurrect
connectionism.
Apple and IBM desktops steadily grew in speed and power, and by 1987,
they were faster and more capable than the speediest Lisp machines available. The
term "intelligent agent" was coined in the 1990s [4]. A system that observes its
environment and takes steps to improve its chances of success is known as an
agent. For the first time, the notion of agents conveyed the idea of aware
entities cooperating to attain a shared objective. The goal of this new
paradigm was to imitate how people collaborate in groups, organizations, and society.
Intelligent agents turned out to offer a more flexible definition of intelligence. Statistical
learning from numerous perspectives, including probabilistic, frequentist, and possibilistic
(fuzzy logic) methods, was applied to AI in the late 1990s to deal with
decision uncertainty. This ushered in a new period of compelling AI applications,
going far beyond what expert systems had accomplished in the 1980s. These new reasoning
methods were better suited to handling the unpredictability of intelligent agents'
states and perceptions, and they had a notable impact on the field of control. High-
speed trains with fuzzy logic control were built during this period [5], as were
numerous other industrial applications (e.g., plant valves, gas and petrol tank monitoring,
automated gear transmission systems, and reactor management in power plants),
as well as intelligent domestic products (e.g., air conditioners, heating systems, cookers,
and vacuum cleaners).
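
To make the fuzzy-logic idea concrete, here is a minimal sketch in plain Python (the
membership breakpoints and rule weights are invented for illustration, not taken from
any cited system): triangular membership functions map a crisp temperature reading to
degrees of truth, and a simple rule base blends them into a fan-speed command.

def triangular(x, a, b, c):
    # Membership rises linearly from a to b, falls from b to c, else 0.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fan_speed(temp_c):
    cold = triangular(temp_c, -10, 0, 18)
    warm = triangular(temp_c, 10, 22, 30)
    hot = triangular(temp_c, 25, 35, 50)
    total = cold + warm + hot
    # Weighted average of rule outputs: cold -> 0.0 (off), warm -> 0.5, hot -> 1.0.
    return (warm * 0.5 + hot * 1.0) / total if total else 0.0

print(fan_speed(28))  # 28 °C is partly "warm", partly "hot" -> about 0.77

This graded blending, rather than a hard on/off threshold, is what made fuzzy controllers
attractive for appliances and vehicle control.
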
“Since the birth of Big Data in 2000, a third renaissance of the connectionism paradigm
has emerged, spurred by the increasing use of the internet and mobile communication.”
Neural networks were given another look, this time for their role in expanding perceptual
intelligence and removing the need for feature engineering. Computer vision also
made notable progress, with improvements in visual recognition and in the capacity of
intelligent agents and robots to execute increasingly complicated tasks, as
well as in visual pattern recognition. All of these advances opened the door
for further AI applications, including speech recognition, natural language processing,
and self-driving automobiles. Figure 1 depicts a chronology of significant events in
AI's history.

Figure 1: A timeline depicting some of the most significant AI events since the 1980s. The blue boxes
reflect events that have influenced AI development in a favorable way. Those having a negative influence,
on the other hand, are depicted in purple and represent low moments in the field's progress, i.e. the so-called
"AI winters (4-5)"
AI is being used in web advertising, driving, aviation, healthcare, and image recognition
for personal assistance. AI's recent accomplishments have piqued the
interest of both the scientific community and the general public. Vehicles with an
automated steering system, often known as autonomous automobiles, are
one example. Each vehicle is equipped with a set of lidar sensors and cameras that
allow it to perceive its three-dimensional environment and make intelligent driving
decisions in changing, real-time traffic circumstances. The AI bot Tay, made by
Microsoft and adapted for social networking conversations, is a more recent example.
It had to be turned off soon after it was released because it could not tell the difference
between positive and harmful human interaction. In terms of emotional intelligence, AI
is essentially limited. Only basic human emotional states such as anger, joy, sadness,
fear, pain, stress, and neutrality can be identified by AI. Emotional intelligence is one
of the next frontiers of increasing levels of customization. True and full artificial
intelligence does not yet exist. At that stage, AI would be able to simulate human
cognition to the degree that it could dream, think, experience feelings, and set its own
objectives. Although there is no sign that true AI will
arise before 2050, the computer science concepts that drive AI forward are
rapidly advancing, and it is essential to analyze its impact not just from a technological
position, but also from a social, ethical, and legal one.

CHAPTER 2
REVIEW OF LITERATURE
(Chen, X. 2020) Artificial intelligence (AI) assisted human brain research is a dynamic
interdisciplinary field with great interest, rich literature, and huge diversity. The diversity
in research topics and technologies keeps increasing along with the tremendous growth in
application scope of AI-assisted human brain research. A comprehensive understanding
of this field is necessary to assess research efficacy, (re) allocate research resources, and
conduct collaborations. This paper combines the structural topic modeling (STM) with
the bibliometric analysis to automatically identify prominent research topics from the
large-scale, unstructured text of AI-assisted human brain research publications in the past
decade. Analyses on topical trends, correlations, and clusters reveal distinct
developmental trends of these topics, promising research orientations, and diverse topical
distributions in influential countries/regions and research institutes. These findings help
better understand scientific and technological AI-assisted human brain research, provide
insightful guidance for resource (re)allocation, and promote effective international
collaborations.

(Abresch, J. 2008) The advent of machines powered by Artificial Intelligence (AI)
has strongly influenced the world in the 21st century. The future of AI is promising and
offers a wide range of opportunities for scholars and academics. Although the theme
has received considerable attention in recent years, much has been speculated and
little is known about its impacts on public administration. Thus, the purpose of this
article is to make the result of those impacts less ambiguous. To this end, we have
conducted a systematic review to provide a comprehensive analysis of the latest impacts
of AI on public administration. Our intent is to narrow the field of study, while AI is
being continuously strengthened with new empirical evidence.
(Martínez, D. 2015) Project control and monitoring tools are based on expert judgment
and parametric tools. Projects are the means by which companies implement their
strategies. However, project success rates are still very low. This is a worrying situation
with a great economic impact, so alternative tools for project success prediction must
be proposed in order to estimate project success or identify critical success factors.
Some of these tools are based on Artificial Intelligence. In this paper we will carry out a
literature review of those papers that use Artificial Intelligence as a tool for project
success estimation or critical success factor identification.
(Sinner, A. 2006) With this review, we explore the practices of arts‐based educational
research as documented in dissertations created and written over one decade in the
Faculty of Education, University of British Columbia. We compile and describe more
than thirty dissertations across methodologies and methods of inquiry, and identify three
pillars of arts‐based practice – literary, visual, and performative. In this review, we trace
the beginnings of a new stream of practice that is interwoven in some of these
dissertations and underpins many of them: the methodology of a/r/tography. Four
attributes underpin this collection of dissertations: a commitment to aesthetic and
educational practices, inquiry‐laden processes, searching for meaning, and interpreting for
understanding.
(Aldosari, S. 2020) This study discussed the potential effects of artificial intelligence on
higher education at Prince Sattam Bin Abdulaziz University. To achieve this goal, a
qualitative research methodology was used by asking an open question on a sample of
academics. The results of the analysis showed that there is a decrease in the level of
awareness of the mechanisms of applying artificial intelligence, and that there is a need to
further spread awareness in the Saudi environment of the possibilities of using artificial
intelligence applications in education.
(Byungura, J. 2015) With the emerging use of technological interventions in education,
e-learning systems contribute immensely to educational delivery. However, despite
substantial efforts from the Rwandan Government, there were still claims about the lack
of online support systems for the thesis process in Rwandan higher education, which
significantly affects the quality of research. Furthermore, previous implementations of
e-learning systems at the University of Rwanda have failed because of a low adoption
rate. This study follows the introduction of the learning management system "SciPro,"
used for supporting supervisors and students in thesis writing. The purpose of the study
was to understand the adoption of the SciPro System in support of the thesis process for
bachelor's and master's programs from a supervisor's perspective at the University of
Rwanda (UR). An embedded case study was used as a research strategy. The Unified
Theory of Acceptance and Use of Technology (UTAUT) was used as the theoretical
frame of reference. Data was collected from 42 workshop participants using a
questionnaire. Moreover, convenience interviews and participant observations were
conducted at 5 of the 6 colleges during and after system testing. The researcher realized
that the current thesis process is still manual and that there is no holistic computer-
supported system for thesis-related activities. Results from correlation and regression
analyses of the questionnaire showed that the facilitating conditions provided by UR
were the key factor that would positively influence the adoption of SciPro. Effort
expectancy perceived by supervisors proved to have a significant correlation with their
behavioral intention to use the system. The study also revealed other factors outside the
SciPro System, such as management support, Internet access, the lack of a clear ICT
policy and e-learning policy, and the need to motivate innovators and early adopters,
that should be considered throughout the implementation process to enhance adoption.
[ CITATION Sim14 \l 1033 ] propose an application of the content-based approach in a
research paper recommender system for digital libraries, adopting a content-based
filtering technique. A broad range of digital items is available in digital libraries
(research papers, publications, journals, research projects, newspapers, magazines, and
past questions), and some digital collections have millions of digital items to choose
from. As a result, locating favorite digital objects among the vast selection accessible in
the digital library is one of the most common issues that library users face. Users need
help in locating objects that are relevant to their interests. TF-IDF (Term Frequency-
Inverse Document Frequency) weighting and the cosine similarity algorithm are used
for checking the similarity of research papers. Research papers and the user's query are
represented as vectors of weights using a keyword-based Vector Space model. The
dataset was obtained from the Internet. The established system's findings were
compared to the results of a digital library without a recommendation feature and were
found to be sound, offering additional features not present in the plain digital library.
The findings are thus consistent with the majority of the reviewed literature: a research
paper recommendation framework incorporated into a digital library has several
advantages over libraries that lack a recommendation function. According to the
system's findings, incorporating suggestion functionality in digital libraries would be
beneficial to library users. Content-based approaches are not affected by other users'
reviews but depend on the content of the items themselves.
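
To make the retrieval step concrete, here is a minimal sketch, assuming Python with
scikit-learn, of ranking papers against a user query using TF-IDF vectors and cosine
similarity; the paper titles are invented for illustration.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

papers = [
    "Deep learning for image recognition",
    "Fuzzy logic control of high-speed trains",
    "Topic models for recommender systems",
]
query = ["recommender systems using topic models"]

vectorizer = TfidfVectorizer()
paper_vectors = vectorizer.fit_transform(papers)  # TF-IDF weight vector per paper
query_vector = vectorizer.transform(query)        # same vocabulary for the query

scores = cosine_similarity(query_vector, paper_vectors)[0]
for title, score in sorted(zip(papers, scores), key=lambda pair: -pair[1]):
    print(round(score, 3), title)  # highest-scoring papers are recommended first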

(Johnson, W. 2016) The use of Artificial Intelligence (AI) is now observed in almost all
areas of our lives. Artificial intelligence is a thriving technology to transform all aspects
of our social interaction. In education, AI will now develop new teaching and learning
solutions that will be tested in different situations. Educational goals can be better
achieved and managed with new educational technologies. First, this paper analyses how
AI can be used to improve teaching outcomes, providing examples of how AI technology
can help educators use data to enhance the fairness and quality of education in developing
countries. This study aims to examine teachers' and students' perceptions of the use and
effectiveness of AI in education, which is perceived both as a curse and as a boon to the
education system and human knowledge. The optimistic use of AI in class is strongly
recommended by teachers and students, though teachers are generally better adapted to
new technological changes than students. Further research on generational and
geographical diversity in the perceptions of teachers and students can contribute to a
more effective implementation of AI in Education (AIED).

[CITATION BSO13 \l 1033 ] used the content-based method in a paper recommender
application. To quantify similarities between the user's query (the features of the user's
inquiry) and the attributes of the target documents, the author used the Jaccard
similarity coefficient, or Jaccard index. The suggestions proposed by the recommender
system were forwarded to the intended users via e-mail. The research paper
recommendation framework was generated via a content-based approach driven by the
active user's own history: it does not rely on the ratings of other users but utilizes the
content of the items instead. The article proposes a recommender framework that offers
suggestions to the intended users based on the articles those users have previously
enjoyed, so the recommendations stay consistent with their interests. The framework
provides suggestions to active users based on their related past behavior. The efficiency
of the proposed algorithm was found to be superior to that of the method it was
compared with. This can lead to better recommendations because the user's interests
can be considered when making recommendations.
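
As a sketch of the Jaccard index mentioned above (the ratio of shared terms to all
distinct terms between a query and a document), in plain Python with invented sample
texts:

def jaccard(a, b):
    # |A intersect B| / |A union B| over the sets of words in each text.
    set_a, set_b = set(a.lower().split()), set(b.lower().split())
    return len(set_a & set_b) / len(set_a | set_b) if set_a | set_b else 0.0

query = "content based paper recommendation"
documents = [
    "content based filtering for research papers",
    "collaborative filtering with user ratings",
]
for doc in sorted(documents, key=lambda d: -jaccard(query, d)):
    print(round(jaccard(query, doc), 3), doc)  # most similar document first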

(Tuomi, I. 2018) This report describes the current state of the art in artificial intelligence
(AI) and its potential impact on learning, teaching, and education. It provides conceptual
foundations for well-informed policy-oriented work, research, and forward-looking
activities that address the opportunities and challenges created by recent developments in
AI. The report is aimed at policy developers, but it also makes contributions that are of
interest for AI technology developers and researchers studying the impact of AI on
economy, society, and the future of education and learning.

[ CITATION HSa19 \l 1033 ] provides a case study that highlights the use of probabilistic
topic models to produce recommendations for academic users by allocating supervisors
and suggesting appropriate courses. The method, dubbed Scholar Lite, exploits
the strength of the teaching system to extract research themes from previous publications
of faculty members and exploit the research interests of their curricula and combine it
with their educational history to produce recommendations for teaching, monitoring
research, and collaboration between industry and academia. The industry has already used
collaborative filtering and content-based recommendation systems to recommend
products such as movies, news, restaurants, and items to the consumer, but nobody used
these off-shelf models to enhance the student experience and improve the quality of
higher learning in academia. Despite the tremendous interest shown by industry and
researchers in the topic, issues such as cold-start, sparsity, and scalability have plagued
recommender systems. Cross-domain topic models, including author topic model-based
collaborative filters, deal with sparsity and skewness in topics. With the popularity of the
deep learning paradigm, a range of deep learning-based methods, such as convolutional
neural networks and collaborative deep learning models, have been developed to design
recommender systems in conjunction with collaborative filtering techniques. The use of
deep learning techniques in conjunction with collaborative filtering addresses the issue of
cold start in recommender systems. To promote course and supervisor recommendations,
the proposed framework employs two common probabilistic topic models: Latent
Dirichlet allocation (LDA) and Author Topic Model (ATM). To approximate this
posterior distribution, they used the variational Bayes algorithm, which employs a
variational distribution. They developed their own dataset, yielding the region's first
supervisor and course recommendation scheme. While recommender systems have been
extensively studied around the world, to the best of our knowledge, no one has previously
investigated the use of topic models for supervisor and course recommendation tasks in
academia or industry. This work aims to deploy machine learning tools for the creation of
a recommender framework useful for academic faculty and students. They presented the
results of two common probabilistic models, LDA and ATM, on real-world data sets and
discovered that the generative output of LDA is significantly better than that of ATM;
however, ATM provides semantically more useful information than LDA and thus proves
more appropriate for the recommendation task at hand.
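
As a rough illustration of the LDA step described above (a generic sketch, not the
authors' actual Scholar Lite pipeline), assuming Python with scikit-learn, whose LDA
implementation uses the variational Bayes algorithm mentioned in the text; the
documents are toy stand-ins for faculty publications:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

documents = [
    "neural networks for image recognition and computer vision",
    "fuzzy control systems for industrial automation",
    "deep learning models for speech and vision tasks",
    "robot manipulators and industrial automation control",
]
vectorizer = CountVectorizer(stop_words="english")
word_counts = vectorizer.fit_transform(documents)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(word_counts)
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[-4:][::-1]]
    print("Topic", k, ":", ", ".join(top_terms))  # each topic is a research theme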

(Atabekov, A 2018) The paper explores current legal regulation on Artificial Intelligence
(AI) across countries. The research argues that special emphasis should be laid on the
prospect of treating AI as an autonomous legal personality, a separate subject of law and
control. The article identifies major approaches in legislation and practice on state
regulation of AI and explores a number of current options: AI as a subject of law
introduced into national legislation without prior background, AI as a subject of law equal
to a person, and regulated or not regulated by separate rules of law, etc. The research
rested on a qualitative approach. The materials included national and international
legislation, and academic and media data. The study relied on comparative legal analysis,
integrated legal interpretation, and modeling. The research findings laid grounds for
preliminary recommendations on legal drafting with regard to AI status as that of
autonomous legal personality. They can be used for national legislation development,
further research on legal aspects of robotic AI.
Alharthi et al. (2017) propose a content-based recommendation approach for books. The
report presents results from the BookCrossing, LitRec, LibraryThing, INEX, and
Amazon research literature covering CBF, CF, and other methods; collected from over
30 articles up to 2017, it presents a comprehensive survey of various approaches to book
recommendation. In this analysis, we concentrate on models for the goodbooks-10k
dataset of Goodreads ratings. The dataset contains roughly 5.98 million ratings; 80% of
randomly sampled ratings were applied to the training data, and the remaining 20% were
held out, resulting in a test set containing 1,296,173 test examples. While multiple
validations take longer, we chose this approach as the more effective one for the
expanding dataset, since several variables can indicate outliers in a much more
significant amount of data. Recent publications on this dataset are primarily concerned
with collaborative filtering: of the 11 specific papers in English on Google Scholar
covering recommendation systems for goodbooks-10k, two implement hybrid systems
and only one implements a primarily content-based recommender. A wide range of
metrics is available, including rating-prediction metrics (root mean squared error
(RMSE)), ranking metrics (precision@k, recall@k, mean average precision (MAP)),
coverage metrics (catalogue coverage (CC)), and metrics for personalization, diversity,
and novelty. Algorithms are examined for collaborative filtering. We will look at the
content-based or hybrid system components built on goodbooks-10k and compare them
with systems that use another Goodreads dataset, LitRec.

(Oke, S. A. 2008) Research on artificial intelligence in the last two decades has greatly
improved performance of both manufacturing and service systems. Currently, there is a
dire need for an article that presents a holistic literature survey of worldwide, theoretical
frameworks and practical experiences in the field of artificial intelligence. This paper
reports the state-of-the-art on artificial intelligence in an integrated, concise, and elegantly
distilled manner to show the experiences in the field. In particular, this paper provides a
broad review of recent developments within the field of artificial intelligence (AI) and its
applications. The work is targeted at new entrants to the artificial intelligence field. It also
reminds experienced researchers about some of the issues they already know.
Rosati (1999), in a paper, conceptualises minimal belief and negation as failure
(MBNF) in its propositional fragment as introduced by Lifschitz. The concept can be
considered a unifying framework for several non-monotonic formalisms, including
default logic, autoepistemic logic, circumscription, epistemic queries, and logic
programming. The application of soft computing theory is vast in the reasoning
literature. One such study was carried out by Straccia (2001) on reasoning within fuzzy
description logics. The paper presents a fuzzy extension of ALC, combining Zadeh's
fuzzy logic with a classical DL. The work supports the idea of managing structured
knowledge with appropriate syntax, semantics, and properties, with a constraint
propagation calculus for reasoning in it.
Singer et al. (2000) introduce the backbone fragility and the local search cost peak. The
authors introduce a temporal model for reasoning on disjunctive metric constraints on
intervals and time points in temporal contexts. This temporal model is composed of a
labeled temporal algebra and its reasoning algorithms. The computational cost of
reasoning algorithms is exponential in accordance with the underlying problem
complexity, although some improvements were proposed.
On diagnosis, Console et al. (2003) extend the approach to deal with temporal
information. They introduce a notion of temporal decision tree, which is designed to
make use of relevant information as long as it is acquired, and they present an algorithm
for compiling such trees from a model-based reasoning system. A noteworthy study that
considers independence was embarked upon by Lang et al. (2003). Two basic forms of
independence, namely, a syntactic one and a semantic one are treated. They also consider
the problem of forgetting, i.e. distilling from a knowledge base only the part that is
relevant to the set of queries constructed from a subset of the alphabet.

Enkh-Amgalan Baatarjav et al. [CITATION Baa08 \l 1033 ] developed a group
recommender application for users on Facebook. The system used hierarchical clustering
and decision trees to make recommendations for Facebook groups that matched the
interests of Facebook users. The system first obtained the profile data of Facebook users
at the University of North Texas and utilized it as test data. The authors developed the
group recommendation system (GRS) using hierarchical clustering and decision trees.
To assess the output of GRS, they used half of the data for training and half for testing,
randomly picking the group labels and splitting members into the two sets. The accuracy
rate is determined by the ratio of accurately clustered members to the total number of
testing members. In Figure 4 of their paper, the accuracy of GRS with noise elimination
is compared to the accuracy of clustering without noise elimination. Without noise
removal, accuracy was 64%. After performing noise removal using the clustering-
coefficient process, the average accuracy improved to around 73 percent, an
improvement of about 9%. In addition, 343 of the 1,580 members were found to be
noise and were eliminated.
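
The clustering step can be sketched generically (this is not the authors' GRS code; the
toy feature vectors merely stand in for Facebook profile features), assuming Python with
scikit-learn:

import numpy as np
from sklearn.cluster import AgglomerativeClustering

# Toy profile features, e.g. normalized activity level and group-interest score.
profiles = np.array([[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.2]])

clusterer = AgglomerativeClustering(n_clusters=2)  # bottom-up hierarchical merging
labels = clusterer.fit_predict(profiles)
print(labels)  # members with similar profiles share a cluster label

Members whose profiles sit far from any cluster (for example, those with a low
clustering coefficient) would be treated as noise and removed, as the accuracy figures
above suggest.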

Zhibo Wang et al. (2014) explored a Facebook recommender application that offers
friend suggestions on Facebook as "people you may know". Such recommendations are
based on several factors, such as mutual friends, information about work and education,
groups you share or belong to, links found via the friend finder, and so on. Facebook
users' profiles were used as the basis for the suggestions. The authors noted that
Amazon.com, CDNOW.com, MovieFinder.com, and Reel.com use item-to-item
associations to make suggestions to clients seeking advice. Amazon's customers order
search results by clicking on the favorite items they want to buy. Similarly, CDNOW.com
recommends album titles to its visitors. In the same way, movie matchmaking websites
like MovieFinder.com and Reel.com recommend movies to internet clients.
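The item-to-item association mentioned here can be sketched in a few lines: items are compared by the cosine similarity of their rating columns, and a user is offered the unrated item most similar to one they rated highly. The rating matrix below is a toy example, not data from any of the cited sites.

    import numpy as np

    # Rows = users, columns = items; 0 means "not rated" (toy data).
    R = np.array([[5, 4, 0, 1],
                  [4, 5, 1, 0],
                  [1, 0, 5, 4],
                  [0, 1, 4, 5]], dtype=float)

    norms = np.linalg.norm(R, axis=0)
    sim = (R.T @ R) / (np.outer(norms, norms) + 1e-9)  # item-item cosine matrix

    user = 0
    liked = np.argmax(R[user])        # the user's top-rated item
    scores = sim[liked].copy()
    scores[R[user] > 0] = -np.inf     # do not re-recommend already rated items
    print("recommend item", int(np.argmax(scores)))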
(Zawacki-Richter, 2019) According to various international reports, Artificial Intelligence
in Education (AIEd) is one of the currently emerging fields in educational technology.
Whilst it has been around for about 30 years, it is still unclear to educators how to take
pedagogical advantage of it on a broader scale, and how it can actually have a meaningful
impact on teaching and learning in higher education. This paper seeks to provide an
overview of research on AI applications in higher education through a systematic review.
Out of 2656 initially identified publications for the period between 2007 and 2018, 146
articles were included for final synthesis, according to explicit inclusion and exclusion
criteria. The descriptive results show that most of the disciplines involved in AIEd papers
come from Computer Science and STEM, and that quantitative methods were the most
frequently used in empirical studies. The synthesis of results presents four areas of AIEd
applications in academic support services, and institutional and administrative services: 1.
profiling and prediction, 2. assessment and evaluation, 3. adaptive systems and
personalisation, and 4. intelligent tutoring systems. The conclusions reflect on the near
absence of critical reflection on the challenges and risks of AIEd, the weak connection to
theoretical pedagogical perspectives, and the need for further exploration of ethical and
educational approaches in the application of AIEd in higher education.
Melania Berbatova created a book recommendation system using hybrid approaches.
Matrix factorization algorithms decompose the user-item matrix into the product of two
lower-dimensional matrices; they showed state-of-the-art performance in the Netflix Prize
competition. The problem faced by the content-based approach was that the algorithms
could not be trained with large feature sets, because memory errors occurred when the
experiments were conducted locally and in standard cloud services such as Colab and
Kaggle. To address this problem, NLP techniques were used. Many natural language
processing techniques, including the extraction of lexical, syntactic, and stylometric
features or text embeddings, are applied in content-based filtering for book
recommendations. CBF systems using these techniques can be used individually or in
combination with collaborative filtering. One issue is the lack of standardization in
publications, which have come under fire because accepted datasets and metrics are
absent in most of this work. First, some users treat a "Like" as rigorously as a three-star
rating, while others, for whatever reason, never bother to give an explicit "Like". Second,
some of these rating schemes ignore books with fewer than five ratings, while others
exclude those with fewer than ten. Finally, the dataset is incomplete and does not include
established training and test sets. Hence, various researchers end up using completely
different ways to allocate and split datasets and run tests.
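The matrix factorization idea mentioned above can be sketched with stochastic gradient descent: learn user and item factor matrices P and Q so that the product P Q^T approximates the observed ratings. The rating matrix, factor dimension, and learning rate below are illustrative choices, not those of the cited system.

    import numpy as np

    R = np.array([[5, 3, 0, 1],
                  [4, 0, 0, 1],
                  [1, 1, 0, 5],
                  [0, 1, 5, 4]], dtype=float)   # 0 = unobserved (toy data)

    n_users, n_items, k = R.shape[0], R.shape[1], 2
    rng = np.random.default_rng(0)
    P = rng.normal(scale=0.1, size=(n_users, k))   # user factors
    Q = rng.normal(scale=0.1, size=(n_items, k))   # item factors

    lr, reg = 0.01, 0.02
    for epoch in range(200):
        for u, i in zip(*R.nonzero()):             # observed ratings only
            err = R[u, i] - P[u] @ Q[i]
            pu = P[u].copy()
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * pu - reg * Q[i])

    print(np.round(P @ Q.T, 1))                    # reconstructed rating matrix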
Marko Tkalčič developed a content-based recommender system for images. The paper
examines the effect of affective metadata on the performance of a content-based image
recommendation system. The increasing amount of multimedia content makes finding
appropriate content difficult for end-users. Recommender techniques aim to help users
find the small subset of multimedia items that is relevant to each person. Several deployed
solutions are available, such as the TiVo system or the Netflix system. A Support Vector
Machine (SVM) was used in the recommender system. CBR systems based on generic
metadata (GM) are delicate, though they might still perform well. Other CBR
recommender systems have also delivered positive performance but cannot provide an
apples-to-apples comparison between metadata types. Such comparative research
substantiates the hypothesis that affective metadata (AM) has a superior ability to
differentiate relevant from non-relevant data. Several factors affect AM in CBR systems,
including how individual metadata fields and algorithms impact the results. The results
showed that the CBR implementation for photos brings about a significant change. The
authors provided a simple yet practical statistical model for modeling item and user value
using two steps in the users' valence-arousal space. Additionally, their experiments show
that the Support Vector Machine is a strong candidate for predicting items' ratings.
Yoshio Ishizawa (2011) proposed and tested a system for recommending discussion
topics to a target user community. Conversation with unknown persons is difficult
because partners, particularly new acquaintances, have unknown backgrounds, histories,
and so on. Recommending suitable topics of conversation can support conversation
management under these conditions. Many tools have been proposed in both real and
virtual social spaces to enable people to communicate and make friends. The paper
demonstrates and defines a proposal, and its accompanying method, for recommending
conversation topics to the right community of users. The topic model makes use of topics
and user interests. Topics are organized by their similarities, and appropriate user groups
are then extracted for each cluster. The topic recommendation system consists of two
databases and three construction steps. One database is used for user preferences and the
second for topics. There are three construction steps for a topic recommendation. In topic
preference vector construction, similarities are measured between topic vectors and user
preference vectors. In topic cluster construction, topics are clustered based on the
similarity of the topic vectors using the k-means algorithm. User groups are then extracted
from the topic clusters, and topics are recommended to the users. The approach uses
similarities between topics and user preferences in all combinations. A topic preference
vector, with the user as an element and the similarity as the value, is built for each topic,
and the topics are clustered according to their similarities using the topic preference
vectors. The user groups are then extracted from the topic clusters, and the relevant topics
for the user groups are suggested. A simple experiment validated the effectiveness of user
group construction, the comprehensive extraction of user preferences, and the adequate
selection of topics. The authors affirm the following trends: users in the same user
community will have a more comfortable discussion if each user is aware that the
conversation partner likes the same types of topics. The topics provided by the proposed
system are suitable for the target users, though the extraction of interests was
insufficient.
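A minimal sketch of the clustering step described above: topic vectors are grouped with k-means, and a user is matched to the cluster whose centroid is most similar to the user's preference vector. The vectors and dimensions are invented for illustration and are not from the cited system.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(1)
    topic_vectors = rng.random((20, 8))   # 20 topics in an 8-dimensional space
    user_pref = rng.random(8)             # one user's preference vector

    km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(topic_vectors)

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    # Recommend the topics in the cluster closest to the user's preferences.
    best = max(range(4), key=lambda c: cosine(user_pref, km.cluster_centers_[c]))
    print("recommend topics:", np.where(km.labels_ == best)[0])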
(Bravo, 2014) Artificial intelligence (AI) has been used for more than two decades as a
development tool for solutions in several areas of the E&P industry: virtual sensing,
production control and optimization, forecasting, and simulation, among many others.
Nevertheless, AI applications have not been consolidated as standard solutions in the
industry, and most common applications of AI still are case studies and pilot projects. In
this work, an analysis of a survey conducted on a broad group of professionals related to
several E&P operations and service companies is presented. This survey captures the
level of AI knowledge in the industry, the most common application areas, and the
expectations of the users from AI-based solutions. It also includes a literature review of
technical papers related to AI applications and trends in the market and R&D. The survey
helped to verify that (a) data mining and neural networks are by far the most popular AI
technologies used in the industry; (b) approximately 50% of respondents declared they
were somehow engaged in applying workflow automation, automatic process control,
rule-based case reasoning, data mining, proxy models, and virtual environments; (c)
production is the area most impacted by the applications of AI technologies; (d) the
perceived level of available literature and public knowledge of AI technologies is
generally low; and (e) although availability of information is generally low, it is not
perceived equally among different roles.
This work aims to be a guide for personnel responsible for production and asset
management on how AI-based applications can add more value and improve their
decision making. The results of the survey offer a guideline on which tools to consider for
each particular oil and gas challenge. It also illustrates how AI techniques will play an
important role in future developments of IT solutions in the E&P industry.
One study [Che10] suggested that, through topic model-based approaches, thematic
similarity measurements may be incorporated as an alternative to traditional item-based
approaches. For this recommendation system, collaborative filtering is used.
Latin Dirichlet Allocation (LDA) model is generative for documents. Postgraduate
students, professors, and other scholars in universities, and academic institutions must
find the papers that are most important to their research projects. As a result, finding the
right articles to read becomes an essential part of their academic lives. A research paper
recommender system can help these people find the most important papers while saving
them time. A recommender system mainly uses two types of information: user reviews
for products and/or user and/or item profiles. We used topic modeling techniques to
perform topic analysis of research papers and we implemented the inter-topic similarity.
By integrating the themes and an item-based approach, we could achieve better
recommendations and reduce the cold start problem. Highly important papers and help to
overcome the cold start issue even if the consumer only rates few articles, our
recommendations are reliable.
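A small sketch of the topic-analysis step, assuming scikit-learn as the tooling (the cited work does not specify an implementation): LDA is fitted to a toy corpus of abstracts, and papers are compared by the cosine similarity of their topic distributions.

    import numpy as np
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    abstracts = [                      # toy stand-ins for paper abstracts
        "neural networks for image recognition",
        "deep learning and image classification",
        "market equilibrium and price theory",
        "pricing models in economic markets",
    ]
    X = CountVectorizer().fit_transform(abstracts)
    lda = LatentDirichletAllocation(n_components=2, random_state=0)
    theta = lda.fit_transform(X)       # per-paper topic distributions

    def topic_sim(i, j):               # cosine similarity of topic mixtures
        return theta[i] @ theta[j] / (
            np.linalg.norm(theta[i]) * np.linalg.norm(theta[j]))

    print(f"papers 0-1: {topic_sim(0, 1):.2f}, papers 0-2: {topic_sim(0, 2):.2f}")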
(Duch, 2003) Computational Intelligence. Computational intelligence focuses on
problems that, in principle, only humans and animals can solve: problems requiring
intelligence. It is a branch of computer science studying problems for which there are no
effective computational algorithms. The term acts as an umbrella under which more and
more methods have been added over time.
(Taylor, 2006) Neural Networks (Back-propagation, Hybrid, Recurrent, Self-organizing
Maps). This is one of the most widely used AI techniques, with many journals and books
dedicated to its study (Appendix A) and numerous related conferences. There are several
artificial neural network software tools for developing applications, and some of them are
designed for industrial use. A MATLAB toolbox is also available. The main use of a
neural network is as an all-purpose (hence its popularity) nonlinear function
approximator for modeling and classification tasks. The development of a neural network
usually requires large amounts of data to ensure coverage of a large enough region of the
input space, and the use of prior knowledge for structuring a neural network is not
uncommon. It should also be mentioned that a crucial feature of neural networks, namely
their ability to be trained and to compute using parallel computation, is hardly ever
capitalized on in most engineering applications, which perform computations on standard
serial machines (e.g., PCs). Applications of neural networks have been in pattern
recognition, virtual sensors, process control, prediction, and modeling, among others. A
criticism of neural networks is that they are "black boxes" (i.e., it is difficult to determine
exactly why a neural net produces a particular result). Certain neural network applications
have produced very valuable results within certain ranges but have ceased working and
giving good results without explanation. Management usually perceives neural networks
as AI, and, therefore, the failure of neural networks has had a negative impact on
management's perception of the potential of AI in the industry.
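The "all-purpose nonlinear function approximator" role can be illustrated with a small multilayer perceptron trained by back-propagation to fit a noisy nonlinear target; the architecture, data, and scikit-learn tooling below are illustrative assumptions, not drawn from the cited source.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(500, 1))
    y = np.sin(X).ravel() + 0.1 * rng.normal(size=500)   # noisy sine target

    net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                       random_state=0).fit(X, y)
    print(f"prediction at x=1.0: {net.predict([[1.0]])[0]:.2f}")  # ~sin(1) = 0.84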
(Bravo, 2012) Fuzzy Logic. This is a technique for representing inexact linguistic
arguments and making inferences based on them. Nearly 50 years since its inception, it
is perhaps the most widely used AI technique in daily life. Refrigerators, washing
machines, and automobile suspension systems are some of its applications. This AI
technique also has many journals and books dedicated to its study (Appendix A) and
many related conferences. There are software packages for developing applications that
employ fuzzy logic, and some of them are designed for industrial use. A MATLAB
toolbox is also available. The main applications of fuzzy logic have been in pattern
recognition, virtual sensors, automatic control, prediction, and modeling, among others.
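A minimal sketch of the fuzzy inference just described, with hypothetical triangular membership functions mapping a crisp temperature to rule strengths and a weighted-average defuzzification producing a crisp fan speed:

    def tri(x, a, b, c):
        """Triangular membership: rises from a, peaks at b, falls to c."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def fan_speed(temp):
        cold = tri(temp, -10, 0, 20)   # degree of membership in "cold"
        hot = tri(temp, 15, 35, 50)    # degree of membership in "hot"
        # Rules: IF cold THEN speed 10; IF hot THEN speed 90.
        total = cold + hot
        return (cold * 10 + hot * 90) / total if total else 50.0

    print(fan_speed(18))   # both rules fire partially -> blended speed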
(Zebulum, 2018) Evolutionary Computation. Evolutionary computation is the
collective name for a range of problem-solving techniques based on principles of
biological evolution, such as natural selection and genetic inheritance. These techniques
are being increasingly widely applied to a variety of problems, ranging from practical
applications in industry and commerce to leading-edge scientific research. Here is a
list of the most popular technologies. Neural networks, genetic algorithms, and intelligent
agents are often classified as machine learning techniques. Agents may use co-occurrence
matrices to learn how the attributes in data sets are related. Agent memories can be used
in various ways: for diagnosis, for pattern recognition in multichannel signal data, and for
workflow monitoring. In contrast to neural networks, associative memories are "white
boxes": they can be configured to explain their decisions. Stephenson et al. (2010)
describe the use of an associative memory for gas lift well diagnosis. The machine
learning processes in intelligent agents entail human-directed "machine learning".
Genetic Algorithms. Genetic algorithms comprise a class of optimization techniques that
cleverly mimic the process of evolution (hence the term genetic) in a computer to let an
initial population of possible solutions converge to optimal solutions. While convergence
may be slow, there are no requirements on the structure (e.g., continuity, differentiability,
convexity, etc.) of the optimization problem to be solved. There are a few journals
dedicated to the exclusive study of genetic algorithms and some related conferences.
There are some software packages for developing applications that employ genetic
algorithms, and some of them are designed for industrial use. A MATLAB toolbox is also
available. The main applications of genetic algorithms have been in optimization and
search activities, among others.
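To make the evolutionary loop concrete, here is a minimal genetic algorithm sketch, with selection, blend crossover, and Gaussian mutation maximizing a simple one-dimensional objective (all parameters are illustrative choices):

    import random

    def fitness(x):                 # objective with its maximum at x = 3
        return -(x - 3) ** 2

    random.seed(0)
    pop = [random.uniform(-10, 10) for _ in range(30)]
    for generation in range(50):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:10]                       # selection: keep the fittest
        children = []
        while len(children) < 20:
            a, b = random.sample(parents, 2)
            child = (a + b) / 2                  # crossover: blend two parents
            child += random.gauss(0, 0.1)        # mutation: small perturbation
            children.append(child)
        pop = parents + children

    print(f"best solution found: {max(pop, key=fitness):.3f}")  # near 3.0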
(Holzinger, 2016) Machine Learning. Machine learning refers to algorithms that allow
computers to learn behaviors by generalizing from data, often through reinforcement but
without supervision (i.e., without being told what the behavior to be learned should be;
for example, learning how to play backgammon by playing lots of games and figuring out
winning strategies). Machine learning partially overlaps with data mining but differs
from it in that the latter focuses on pattern discovery, while the former is mostly
concerned with producing desirable patterns. There are not many books, journals, or
conferences purely dedicated to this topic. However, there is substantial literature on
machine learning in many disciplines.
(Brenner, 2012) Intelligent Agents. Intelligent-agent systems are computational
systems comprising multiple agents which are capable of making decisions and taking
actions in an autonomous way (e.g., in the same way that individual car drivers maintain
traffic flow at a street intersection). Agents maintain information about their environment
and make decisions based on their perception about the state of this environment, their
past experiences, and their goals. Agents can also communicate with other agents and
collaborate to reach common objectives. The paradigm of intelligent agents is ideally
suited for systems that involve large amounts of data in physically distributed
environments. While it is possible to build intelligent agents that act autonomously, most
intelligent agent systems are designed to support rather than replace users. Intelligent
agent systems are particularly effective when there is a lot of data, when high degrees of
expertise are required, or when response timelines are very short.
(McArthur, 2007) There are a number of research groups in the scientific community
working on intelligent agents, and there are standards and applications for multiagent
system development. The most important standards for multiagent systems, such as the
Agent Communication Language (ACL) and the FIPA Interaction Protocols, are
supported by the Foundation for Intelligent Physical Agents (FIPA), now part of the
IEEE. There are also important scientific journals specializing in intelligent-agent
systems. There are several references about the use of multiagent systems in the industrial
world, mainly in manufacturing (PABADIS, 2005; Marik and Vrba, 2005). Common
applications are distributed decision-making systems and distributed control systems. In
the industry, there are few references about applications of multiagent systems; three
examples are the agent-based information management system for oil dispatch and
sales workflows (Ølmheim et al., 2008), the application of multiagent systems in subsea
facility modeling, and the usage of agents in reservoir simulation history matching (Zangl
et al., 2011). Nevertheless, the application of intelligent agents in the industry is
being actively explored.
(Zhang, 2013) Swarm Intelligence. Swarm intelligence is an AI technique based on the
study of collective behavior in decentralized, self-organized systems. Although there is
normally no centralized control structure dictating how individuals should behave, local
interactions between those individuals lead to the emergence of global behavior. Not
many applications have been seen so far in the industry, although there is huge potential.
Some papers have been published in the area of history matching of simulation models
(e.g., Hajizadeh, 2010). Data Mining. Data mining by itself is not an AI technique;
rather, it uses AI techniques together with statistics and other formal techniques to find
interesting features in data sets. Nowadays it is a well-consolidated area with journals
dedicated to its study (Appendix A) and some conferences concerning the topic. There is
software for developing applications, some of it designed by universities, and there is a
MATLAB toolbox available. The main applications of data mining have been in
prediction, classification, and segmentation, among others.
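As an illustration of the swarm-intelligence principle described above, the following particle swarm optimization sketch lets simple agents share their best-known positions so the swarm converges on a minimum without central control; the objective and coefficients are illustrative, not from the cited work.

    import random

    def f(x):                        # objective to minimize, minimum at x = 2
        return (x - 2) ** 2

    random.seed(0)
    n = 20
    pos = [random.uniform(-10, 10) for _ in range(n)]
    vel = [0.0] * n
    pbest = pos[:]                   # each particle's personal best
    gbest = min(pos, key=f)          # the swarm's global best

    for step in range(100):
        for i in range(n):
            r1, r2 = random.random(), random.random()
            vel[i] = (0.7 * vel[i] + 1.5 * r1 * (pbest[i] - pos[i])
                      + 1.5 * r2 * (gbest - pos[i]))
            pos[i] += vel[i]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i]

    print(f"swarm best: {gbest:.3f}")   # approaches 2.0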
Rule-based Case Reasoning. This is not a distinct AI technique, because it does not
emulate intelligent activities different from those used by the other techniques. It can be
implemented using expert systems or fuzzy logic systems, with a particular focus on case
reasoning over if-then rules derived from similar past problems. For example, rule-based
case reasoning is often used in help-desk environments to support diagnosis of problems
with consumer products. There are very few journals, conferences, and books related
exclusively to this area, but it is a very common topic in more general AI events.
Likewise, the implementation can be done using software for other techniques, so there
are not many specific toolboxes. This technique can be widely used in diverse types of
applications, including industrial processes, fault detection and isolation, prediction, and
any other area where there is knowledge available concerning the approaches previously
used for solving related problems.
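A help-desk style rule base of the kind described above can be sketched as an ordered list of if-then rules over observed symptoms; the rules and symptom names here are invented for illustration:

    rules = [
        ({"no_power", "plugged_in"}, "faulty power supply"),
        ({"no_power"}, "check that the device is plugged in"),
        ({"overheating", "fan_silent"}, "fan failure"),
        ({"overheating"}, "clean the air vents"),
    ]

    def diagnose(symptoms):
        # Fire the first rule whose conditions are all present; rules are
        # ordered so that more specific cases are tried before general ones.
        for conditions, conclusion in rules:
            if conditions <= symptoms:
                return conclusion
        return "no matching rule; escalate to a human expert"

    print(diagnose({"no_power", "plugged_in"}))   # -> faulty power supply
    print(diagnose({"overheating"}))              # -> clean the air vents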
Bayesian Networks. Bayesian networks are computer models of probabilistic systems,
that is, real-world systems operating under uncertainty. Bayesian networks have been
applied successfully in the industry in many different areas. They are used in
diagnostics in process control, implemented in expert systems for probabilistic decision
support, and used for optimization. Standalone software tools are available; however,
most implementations are done in custom development projects.
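The probabilistic reasoning involved can be shown with the smallest possible diagnostic network, a hidden fault causing an observable alarm, where Bayes' rule inverts the model (all probabilities hypothetical):

    p_fault = 0.01                  # prior probability of a fault
    p_alarm_given_fault = 0.95      # sensor sensitivity
    p_alarm_given_ok = 0.05         # false-alarm rate

    # P(fault | alarm) = P(alarm | fault) P(fault) / P(alarm)
    p_alarm = (p_alarm_given_fault * p_fault
               + p_alarm_given_ok * (1 - p_fault))
    p_fault_given_alarm = p_alarm_given_fault * p_fault / p_alarm
    print(f"P(fault | alarm) = {p_fault_given_alarm:.3f}")   # about 0.161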
Expert Systems. Expert systems are the oldest artificial intelligence technique in terms
of application development. Often, they are rule based. In essence, an expert
system is a programming paradigm focusing on declarative rather than procedural
programming issues, namely how knowledge is represented and structured (e.g., in terms
of objects) rather than how elaborate computations are performed. It was the most widely
used AI technique during the 1970s and 1980s, spanning many areas of application. In
the oil and gas industry, drilling operations management was the primary target of
AI activity at that time. The intense interest of that time was followed by a rapid
decline, as the methodology frameworks were very restrictive, and interest in expert
systems almost disappeared during the early 1990s. Of course, as a tool for acquiring and
representing knowledge held by a human expert, an expert system can be very useful
in a wide range of applications. Nowadays it is a very well-consolidated area with journals
dedicated to its study (including publications from Elsevier and Wiley) and many
conferences dedicated to the area. There are many expert system software packages for
developing applications, and some of them are specifically designed for industrial use.
Expert systems have had diverse applications in health, industry, finance, security,
and fault detection and diagnosis, among other areas.
New players, such as GE, are entering the industry. They bring significant experience in
the use of expert systems for the continuous surveillance and management of rotating
equipment, with provenance from aircraft engines and locomotives. It is of interest that
GE's predictive analytics are still grounded in usage-based maintenance, which by
definition is parametric and does not attempt condition-based maintenance.
Automatic Process Control. Automatic process control is the most studied area of the
entire list presented here, with decades of experience and improvements. Strictly
speaking, it is not an AI technique, but it can use AI in some schemes. There is a well-
developed body of theory on automatic control, with several varieties placing particular
emphasis on various aspects of interest, including classical, robust, adaptive, model
predictive, and intelligent control, among others. There are many associations around the
world, including the IEEE and IFAC, that have entire chapters dedicated to automatic
control. There are numerous journals and books on the subject (Appendix A) and a
variety of conferences concerning this area. There is also abundant computer software
created for developing applications for either educational or industrial use; a MATLAB
toolbox and Simulink are quite popular. There is ample experience with automatic control
in many industries that share some characteristics with oil and gas (e.g., oil refining and
chemicals, aerospace, and automotive). Tools for activities that are essential for automatic
control, such as system identification, modeling, prediction, and optimization, are well
developed.
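The classical loop underlying much of this body of theory can be sketched with a PID controller driving a simple first-order process toward a setpoint; the plant model and gains below are arbitrary illustrative choices, not an industrial tuning:

    def simulate_pid(kp=2.0, ki=0.5, kd=0.1, setpoint=50.0, steps=200, dt=0.1):
        y, integral, prev_err = 0.0, 0.0, setpoint
        for _ in range(steps):
            err = setpoint - y
            integral += err * dt
            derivative = (err - prev_err) / dt
            u = kp * err + ki * integral + kd * derivative   # control signal
            prev_err = err
            y += dt * (-0.5 * y + u)    # first-order plant: dy/dt = -0.5*y + u
        return y

    print(f"process value after 20 s of simulation: {simulate_pid():.1f}")  # near 50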
The approach used in this analysis is a systematic literature review. The analysis process
is mainly based on the guidelines of Booth, Papaioannou, and Sutton (2012). We also use
the process of Mustak, Jaakkola, Halinen, and Kaartemo (2016), which has three stages:
literature scan, evidence evaluation, and reviewing and synthesizing findings. As
information becomes more readily available, people will increasingly depend on AI
systems to live, work, and entertain themselves. AI systems will be used in a growing
number of industries, including banking, energy, manufacturing, education,
transportation, and public services, as their accuracy and sophistication improve. The era
of augmented intelligence is expected to be the next stage of AI. Will AI bring a
revolution that advances into a superintelligence surpassing all human intelligence?
Artificial Intelligence (AI) and robotics are likely to have a significant long-term impact
on higher education (HE). The scope of this impact is hard to grasp partly because the
literature is siloed, as well as the changing meaning of the concepts themselves. But
developments are surrounded by controversies in terms of what is technically possible,
what is practical to implement and what is desirable, pedagogically or for the good of
society. Design fictions that vividly imagine future scenarios of AI or robotics in use offer
a means both to explain and query the technological possibilities. The paper describes the
use of a wide-ranging narrative literature review to develop eight such design fictions that
capture the range of potential use of AI and robots in learning, administration and
research. They prompt wider discussion by instantiating such issues as how they might
enable the teaching of higher-order skills or change staff roles, as well as exploring the impact
on human agency and the nature of datafication.
The beginnings of AI, its development over the past 60 years, and related subfields
such as machine learning, computer vision, and the rise of deep learning are all
covered in this article. It presents a coherent understanding of AI's many seasons.
Alongside AI's unparalleled ubiquity, there are concerns regarding the technology's
influence on society. To ensure that society as a whole benefits from AI's development
and that its possible negative impact is diminished from the start, a clear plan must
be created that considers the accompanying ethical and legal issues. To this end,
the paper examines the ethical and legal problems surrounding AI, including privacy,
jobs, legal accountability, civil rights, and the unlawful use of AI for military objectives.
AI's development should not be hampered by exaggerated optimism or
unwarranted concerns; these should instead be used to motivate the creation of a
deliberate foundation on which AI's future may flourish. AI has the potential to shape our
society's future, including our lives, our living situations, and our economy, with
continued funding and judicious investment.
The potential of Artificial Intelligence (AI) and robots to reshape our future has attracted
vast interest among the public, government and academia in the last few years. As in
every other sector of life, higher education (HE) will be affected, perhaps in a profound
way (Bates et al., 2020; DeMartini and Benussi, 2017). HE will have to adapt to educate
people to operate in a new economy and potentially for a different way of life. AI and
robotics are also likely to change how education itself works, altering what learning is
like, the role of teachers and researchers, and how universities work as institutions.
However, the potential changes in HE are hard to grasp for a number of reasons. One
reason is that impact is, as Clay (2018) puts it, “wide and deep” yet the research literature
discussing it is siloed. AI and robotics for education are separate literatures, for example.
AI for education, learning analytics (LA) and educational data mining also remain
somewhat separate fields. Applications to HE research as opposed to learning, such as the
robot scientist concept or text and data mining (TDM), are also usually discussed
separately. Thus if we wish to grasp the potential impact of AI and robots on HE
holistically we need to extend our vision across the breadth of these diverse literatures.
A further reason why the potential implications of AI and robots for HE are quite hard to
grasp is because rather than a single technology, something like AI is an idea or aspiration
for how computers could participate in human decision making. Faith in how to do this
has shifted across different technologies over time; as have concepts of learning (Roll and
Wylie, 2016). Also, because AI and robotics are ideas that have been pursued over many
decades there are some quite mature applications: impacts have already happened.
Equally there are potential applications that are being developed and many only just
beginning to be imagined. So, confusingly from a temporal perspective, uses of AI and
robots in HE are past, present and future.
Although hard to fully grasp, it is important that a wider understanding and debate is
achieved, because AI and robotics pose a range of pedagogic, practical, ethical and social
justice challenges. A large body of educational literature explores the challenges of
implementing new technologies in the classroom as a change management issue (e.g. as
synthesised by Reid, 2014). Introducing AI and robots will not be a smooth process
without its challenges and ironies. There is also a strong tradition in the educational
literature of critical responses to technology in HE. These typically focus on issues such
as the potential of technology to dehumanise the learning experience. They are often
driven by fear of commercialisation or neo-liberal ideologies wrapped up in technology.
Similar arguments are developing around AI and robotics. There is a particularly strong
concentration of critique around the datafication of HE. Thus the questions around the use
of AI and robots are as much about what we should do as what is possible
(Selwyn, 2019a). Yet according to a recent literature review most current research about
AI in learning is from computer science and seems to neglect both pedagogy and ethics
(Zawacki-Richter et al., 2019). Research on AIEd has also been recognised to have a
WEIRD (western, educated, industrialized, rich and democratic) bias for some time
(Blanchard, 2015).
One device to make the use of AI and robots more graspable is fiction, with its ability to
help us imagine alternative worlds. Science fiction has already had a powerful influence
on creating collective imaginaries of technology and so in shaping the future (Dourish
and Bell, 2014). Science fiction has had a fascination with AI and robots, presumably
because they enhance or replace defining human attributes: the mind and the body. To
harness the power of fiction for the critical imagination, a growing body of work within
Human Computer Interaction (HCI) studies adopts the use of speculative or critical
narratives to destabilise assumptions through “design fictions” (Blythe 2017): “a
conflation of design, science fact, and science fiction” (Bleecker, 2009: 6). They can be
used to pose critical questions about the impact of technology on society and to actively
engage wider publics in how technology is designed. This is a promising route for making
the impact of AI and robotics on HE easier to grasp. In this context, the purpose of this
paper is to describe the development of a collection of design fictions to widen the debate
about the potential impact of AI and robots on HE, based on a wide-ranging narrative
literature review. First, the paper will explain more fully the design fiction method.
This article provides an introduction to artificial intelligence, robotics, and research
streams that examine the economic and organizational consequences of these and related
technologies. We describe the nascent research on artificial intelligence and robotics in
the economics and management literature and summarize the dominant approaches taken
by scholars in this area. We discuss the implications of artificial intelligence, robotics,
and automation for organizational design and firm strategy, argue for greater engagement
with these topics by organizational and strategy researchers, and outline directions for
future research.
Research on robotics and artificial intelligence builds on the substantial body of
literature surrounding innovation and technological development. Innovation is a key
factor contributing to economic growth (Solow 1957; Romer 1990) and has been an
area of interest for both theorists and policymakers for decades. The literature on robotics
and automation has pointed to the impressive potential of these new technologies.
Brynjolfsson and McAfee (2017) claim that artificial intelligence has the potential to be
"the most important general-purpose technology of our era." Graetz and Michaels (2018)
suggest that robotics added an estimated 0.37 percentage points to annual GDP growth for
a panel of 17 countries from 1993 to 2007, an effect similar to that of the adoption of
steam engines on economic growth during the industrial revolution.
Historically, excitement around radical new technologies has been tempered by anxieties
regarding the potential for labor substitution (Mokyr et al. 2015). A body of work has
shown that automation spurred by innovation can both complement and substitute for
labor. Acemoglu and Restrepo (2018) examine how increased industrial robotics usage
impacted regional US labor markets between 1990 and 2007. Their findings suggest
that the adoption of industrial robotics is negatively correlated with employment and
wages; specifically, each additional robot reduced employment by six workers, and
one new robot per thousand workers reduced wages by 0.5%. Graetz and Michaels
(2018) find that while wages increase with robot use, on average, hours worked drop for
low- and middle-skilled workers. A similar study in Germany suggests that each
additional industrial robot leads to a loss of two manufacturing jobs, but these jobs are
offset by newly created roles in the service industry (Dauth et al. 2017).
Existing work on artificial intelligence and robotics has also attempted to identify
“winners” and “losers” and to understand the distributional effects of these new
technologies. A body of this work looks at cross-industry effects. Autor and Salomons
(2018) show that industry-specific productivity increases are associated with a decrease
of employment within the affected industry; however, positive spillovers in other sectors
more than offset the negative own-industry effect. Similarly, Mandel (2017) examines
brick-and-mortar retail stores during the rise of e-commerce and finds that new jobs
created at fulfillment and call centers more than make up for job losses at department
stores.
Other work looks at how skill composition can affect the potential complementary or
substitution effects of these new technologies. A recent working paper by Choudhury et
al. (2018) looks at performance effects of the use of artificial intelligence by workers with
different types of training. They find productivity with artificial intelligence technology is
highly affected by an individual’s background with computer science and engineering.
Individuals who have requisite computer science or engineering skills are better able to
unlock superior performance using artificial intelligence technologies than individuals
without those skills. Felten et al. (2018) use an abilities-based approach to assess the link
between recent advances in artificial intelligence and employment and wage growth.
They find that occupations that require a relatively high proportion of software skills see
growth in employment when affected by artificial intelligence, while other occupations do
not see a meaningful relationship between the impact of artificial intelligence and
employment growth.
There is a growing literature in economics, strategy, and information systems that studies
the use of machine learning algorithms in decision-making. Some of the authors in this
literature use disaggregated, micro-level data to draw insights as to how artificial
intelligence affects firms or individuals differently depending on their background. Some
of this work examines whether and how the use of artificial intelligence and machine
learning tools affects individual biases. For example, machine-based algorithms appear to
outperform judges in making decisions regarding potential detainment pre-trial and also
reduce inequities (Kleinberg et al. 2018). Hoffman et al. (2017) find that managers who
choose to hire against recommendations constructed by machine-based algorithms choose
worse hires. Together, these results appear to suggest that machine learning algorithms
may have potential in improving decision quality and equity.
However, other research cautions that machine learning algorithms often contain their
own form of bias. For example, a machine learning algorithm designed to deliver
advertisements for Science, Technology, Engineering, and Math occupations targeted
men more than women, despite the fact that the advertisement was explicitly intended to
be gender-neutral (Lambrecht and Tucker 2018); Google’s Ad Settings machine learning
algorithm displays fewer advertisements for high-paying jobs to females than to males
(Datta et al. 2015); and artificial intelligence-based tools used in judicial decision-making
appear to display racial biases (Angwin et al. 2016). While these biases are troubling,
some argue that compared to the counterfactual of human decision-making, algorithmic
processes offer improvements in quality and fairness, and in particular, machine learning
tools are best able to mitigate biases when human decision-makers exhibit bias and high
levels of inconsistency (Cowgill 2019).
Recommender systems are a common tool on e-commerce platforms and frequently
incorporate machine learning or artificial intelligence algorithms in the creation of their
recommendations (Adomavicius and Tuzhilin 2005). Barach et al. (2018b) show that the
use of recommendation systems for sellers can substitute for explicit monetary incentives
in online marketplaces, highlighting one method by which firms can use artificial
intelligence technologies to cut costs. Barach et al. (2018a, 2018b) study recommendation
systems in online labor marketplaces and find that firms use AI-driven recommendations
to identify an initial set of generally acceptable partners before relying on internal
capabilities to select the best match. In particular, the recommendation system is used
less for specialized jobs and for experienced employees.
In addition to the above areas, research on artificial intelligence and robotics has started
to examine a broader range of questions, such as how artificial intelligence may help
stimulate innovation (Cockburn et al. 2018), the role of policy in an economy featuring
artificial intelligence (Goolsbee 2018), and the role of artificial intelligence in
international trade (Brynjolfsson et al. 2018a; Goldfarb and Trefler 2018). There are other
important firm strategy and policy questions left to answer in this space such as the
impact of artificial intelligence on firm structure, the factors that lead to increased
adoption of these technologies, and distributional effects of artificial intelligence across
industries, geographies, and occupations. However, aside from literature studying
machine learning algorithms, research in this area has been slowed by a lack of available
data, especially at the firm level. We discuss future directions of research below.
While there are some data sets containing information on the diffusion of robotics, the
data is largely at an aggregate level, which does not allow for detailed microanalysis, and
differences across industries and regions can be obscured. There are currently no public
data sets on the utilization or adoption of artificial intelligence at either the micro or the
macro level, as the most complete sources of information are proprietary and inaccessible
to the general public and the academic community (Raj and
Seamans 2018; McElheran 2019). Despite these limitations, scholars studying
management and organizations have constructed data sets and conducted research using
trade magazines and other industry-specific resources. For example, using the industrial
robotics industry as a setting, scholars have established that prior technological
experience and technological knowledge are associated with greater innovative behavior
following the introduction of a disruptive technology (Roy and Sarkar 2016; Roy and
Islam 2017). Researchers have also used the industrial robotics industry as a setting to
study organizational search and identify two distinct dimensions of search—search scope
and search depth (Katila and Ahuja 2002). Nevertheless, the next stage in the evolution of
research in this area should involve a proliferation of data to conduct a more focused and
rigorous analysis of important questions regarding these technologies, firm adoption, and
its consequences in an empirical manner.
Historically, advances in technology have reshaped the workforce and our work habits
and required organizations to adjust their design paradigms in dramatic ways. For
example, in the last two decades, the rise of the Internet has led firms to increasingly
embrace remote work and virtual teams which can cross geographic boundaries and use
virtual means to coordinate actions (Kirkman and Mathieu 2005). A significant challenge
for firms lies in recognizing when this reorganization is beneficial and what are the
boundaries to adjusting to the new technology. Kirkman and Mathieu (2005) note the
importance of weighing the “presses” that operate on real-world teams that influence the
effectiveness of face-to-face interaction compared to virtual interactions.
Similarly, artificial intelligence and robotics technology have the capacity to reshape
firms and change the structure of organizations dramatically. As discussed above, the
adoption of artificial intelligence and robotics technologies will likely alter the bundle of
skills and tasks that many occupations are comprised of. By that aspect alone, these
technologies will reshape organizations and force firms to restructure themselves to
account for these changes. Boundaries between occupations within firms are likely to
shift as some tasks are automated, and individuals within firms that choose to adopt these
technologies are likely to have greater exposure to computer technologies. In addition, the
composition of the labor force may change to adapt to the new set of skills that are most
valued. These changes are also likely to be reflected in the design of organizations as they
seek configurations to get the most value out of their human capital.
Interfirm boundaries are also likely to shift as robotics and artificial intelligence
technologies are adopted more widely. In a seminal article, Coase (1937) argues that
firms will expand until the cost of organizing an additional transaction within the firm
equals the cost of carrying out the same transaction on the market. Increased usage of
artificial intelligence and robotics technology has the potential to greatly reduce costs
within firms, potentially leading to fewer transactions on the market. Tasks that
previously had to be contracted to other firms may now be able to be transferred in-house,
or alternatively, firms may find that tasks that were done within the firm can be more
efficiently done by other organizations with greater access and facility with these
technologies. In addition, a firm may avoid adopting newer technologies such as robotics
if the technology is highly specific to the firm and the firm faces risk of hold-up from an
opportunistic downstream customer (Williamson 1985).
Regardless of what form the effect takes, the strategy literature consistently presents
evidence that incumbent firms struggle during technological discontinuities (e.g.,
Tushman and Anderson 1986; Henderson and Clark 1990). Despite the challenges
presented by radical innovation, incumbents can be successful when they are “pre-
adapted,” and their historical capabilities and assets can be leveraged to take advantage of
the new technology (Klepper 2002; Cattani 2006). In the specific context of robotics
technology, Roy and Sarkar (2016) present evidence that the presence of in-house users
of robots and access to scientific knowledge will best prepare firms to be flexible and
adapt to new, “smarter” robotics technology. To the extent that this finding is
generalizable, firms may consider employing individuals with experience with these
technologies and increase their facility with scientific knowledge in the area to best be
able to take advantage of potential benefits from adoption.
Increasingly, work on automation considers or focuses on artificial intelligence rather
than just robotics. Frey and Osborne (2017) predict how increased computerization, in
particular, machine learning technologies, will affect non-routine tasks. Based on the
tasks most involved in an occupation, the authors propose which occupations may be
more or less at risk of automation in the future. Their results suggest that 47% of
employment in the USA is at high risk of computerization. Frey and Osborne’s work has
been applied by researchers in other countries. Using the same methodology, Brzeski and
Burk (2015) suggest that 59% of the German workforce may be highly susceptible to
automation, while Pajarinen and Rouvinen (2014) suggest that 35% of Finnish jobs are at
high risk. Similar to the task-based approach utilized by Frey and Osborne, Brynjolfsson
et al. (2018b) take a task-based approach to assess occupations’ suitability for machine
learning. They show that occupations across the wage and wage bill spectrum are equally
susceptible, suggesting that machine learning will likely affect different parts of the
workforce than earlier waves of automation.
Work on automation and labor has focused on different units of analysis. Much of the
existing work in economics has focused on the economy as a whole. For example, Frey
and Osborne (2017) measure the risk of automation on an occupation by occupation level
but consider the occupations at a global level. Similar work by McKinsey Global Institute
(MGI 2017) does the same, and recent work by Accenture considers these at the country
level (Accenture 2018). US-specific work has been done by Brynjolfsson et al. (2018b)
and Felten et al. (2018). Some research has taken a more focused approach and highlights
the effect of artificial intelligence and automation on specific sectors of the economy. For
example, Acemoglu and Restrepo (2018) highlight that the largest effects of technology
adoption will occur in manufacturing, especially among manual and blue-collar
occupations and for workers without a college degree.
The publications for the review were selected in two stages (Table A1). The first step was
a search of the Web of Science database. To begin with (Stage 1.1), we searched for
artificial intelligence, machine learning, deep learning, neural networks, and robotics
in the title, abstract, or keywords (TS = Topic), which returned 580,671 articles.
In Stage 1.2, we focused on papers published in top marketing and service journals
(SO = Publication title) that were more likely to mention AI and robots in value co-
creation: Journal of Marketing, Journal of Marketing Research, Journal of Consumer
Research, Marketing Science, Journal of the Academy of Marketing Science, Journal of
Retailing, Journal of Business Research, Marketing Letters, International Journal of
Research in Marketing, Journal of Product Innovation Management, Journal of Service
Research, Journal of Services Marketing, Service Industries Journal, Journal of Service
Management (formerly International Journal of Service Industry Management), Journal of
Service Theory and Practice (formerly Managing Service Quality), and Service Science
(altogether 31,019 articles). In Stage 1.3, we combined a search of the above-mentioned
set of articles with a search for these five keywords in the title, abstract, or keywords:
artificial intelligence, machine learning, deep learning, neural network, and robot. The
search yielded 61 articles with the five keywords in the title, abstract, or keywords
published in the top marketing and service journals. The search was performed from the
beginning of the Web of Science until the end of May 2018. The suitability of the papers
for the review was evaluated in Stage 2. If the title and abstract did not indicate the
substance of the article, the whole paper was read to determine whether it was suitable for
this research. Two exclusion criteria were used. In Stage 2.1, we excluded studies that
listed our search terms in the abstract or keywords but did not address them in the full
text (Exclusion criterion 1). In Stage 2.2, we excluded studies that used AI to collect or
evaluate data but did not address the utility of the AI-based approach for co-creating
value (Exclusion criterion 2). Ultimately, we selected 32 articles for the final analysis.
We then analyzed the 32 selected papers. Documenting, acquiring a basic understanding,
coding, and categorization were the four stages of the analysis. First, using Microsoft
Excel, the specifics of the papers were recorded, including the year of publication and the
journal name. Second, the selected papers were read to obtain a deeper understanding of
the research area and how the studies have evolved over time. Third, material related to
AI or robots in value co-creation was annotated and coded for its message or content.
We used inductive content analysis, which can be used to analyze the symbolic content
of written communication in a systematic way (Helkkula, 2011; Kolbe & Burnett, 1991).
Modern AI, more precisely "narrow AI", which performs objective functions using data-
trained models and frequently falls into the categories of deep learning or machine
learning, has already impacted nearly every major sector. That has been especially true in
recent years, as data gathering and analysis have increased dramatically owing to
improved IoT connectivity, the proliferation of connected devices, and faster computer
processing.
Some industries are only getting started with AI, while others are seasoned veterans.
Both have a lot of work ahead of them. Regardless, the influence of artificial intelligence
on our daily lives is difficult to ignore:
• Manufacturing
• Healthcare
• Agriculture
• Transportation
• Education
• Media
• Customer Service
But those advancements (and a slew of others, including the current crop) are just the
beginning; there is far more to come than even the most foresighted prognosticators can
imagine. “I think anyone making assumptions about intelligent software's capabilities
peaking at some point is mistaken,” says David Vandegrift, CTO and co-founder of
4Degrees, a customer relationship management business.
Various technological advances are taking place in our culture. It will be very different
from what it is today in just a few decades. The rapid development of the artificial
intelligence and robotics industries [6] is one important element that impacts and changes
many areas of everyday life. Let us look at the effect of robots and artificial intelligence
on our lives from a variety of angles.
Figure 2: Questions about automation in the workplace (Will AI be a job killer? Will AI
be a job creator? Will it replace redundant jobs?)
Automation and the use of production robots, especially within the industrial sectors of
Western high-labor-cost countries, result in significant labor and product cost reductions.
While the German automotive sector spends more than €40 per hour on manufacturing
labor, employing a robot costs between €5 and €8 per hour [7]. As a result, a
manufacturing robot is less costly than a Chinese laborer [8]. Another point to consider is
that a robot cannot become ill, have children, or go on strike, and has no right to annual
leave. An autonomous computer system is free of external factors, which means it can
work reliably and continuously, 24 hours a day, seven days a week, and in dangerous
environments [9]. Its accuracy is often higher than that of a person, and it is unaffected by
fatigue or other environmental variables [10]. Work can be more standardized and
coordinated, resulting in increased efficiency, improved performance control, and greater
transparency in the business [11]. Autonomous systems can be driven by objective
criteria during the decision-making process, permitting judgments to be made without
emotion and on the basis of facts. Gains in efficiency have always resulted in better living
conditions for everybody. It is the same with intelligent algorithms.
Employees' most prominent concern is the loss of many occupations and, as a result,
unemployment. This fear is well-founded. The adoption of machine learning algorithms
and other sorts of robots benefits business owners and producers to a great extent in
terms of enhanced efficiency. That is why, frequently to the disadvantage of employees,
they are quick to implement new technology.
The major objective of emerging technologies is to make all processes safer and more
productive, rather than to supplant people in their professions. It is not a battle, but rather
a win-win partnership between people and computerized robots. New sorts of occupations
may arise, and new skills will be required; hence employment in digital businesses may
not shrink but rather expand. This is true because machines cannot work on their own.
People are required to write machine software, maintain and repair equipment, and make
choices based on data provided by intelligent technology. Job roles such as robotics
supervisors or AI trainers may become available in the future.
People will have to adjust to the changing reality by updating their abilities and acquiring
additional knowledge in order to make this achievable.
Industry 4.0 is here, and it is disrupting industry. Change is required when the world
expects more mass-produced items that are also of high quality, dependable, and
customizable. The next stage in automation and computerization, Industry 4.0, will
usher in significantly more modern methods of doing business, especially in
manufacturing and production [12].
Around half a century ago, the first basic robots walked onto the manufacturing
floor. Nowadays, a manufacturing firm without automated lines, steel mechanical arms,
and CNC machinery is difficult to imagine. Robots have become the new normal within
the industrial business, and they are making huge advances in upgrading various
production processes. Equipment integration with smart technologies such as IoT
solutions, artificial intelligence and machine learning algorithms, Big Data, and cloud
computing all characterize current robots in manufacturing. As a result, smart
manufacturing is progressing. These robotic systems are capable of more than just simple
repetitive tasks like loading, assembling, and altering parts. They can also carry out
cognitive activities such as making quick judgments and improving procedures without
the need for human interaction.
Optimized Productivity: Industrial machinery may be programmed to run at a consistent,
optimal speed with no stops. As a result, computerized machines are able to produce more
in less time than human workers. Remote administration permits fast setup and issue
resolution. Besides, the computerized equipment is highly versatile and can move
between duties with ease, resulting in increased production efficiency.
Improved Quality: Intelligent machines reduce human error and can deliver near-perfect
precision, resulting in higher output quality. As a result, consumer satisfaction improves
since faulty items are less likely to reach end customers.
Reduced Costs: Factory owners save money on labor pay, as a single robot can soon
replace dozens of people. The initial expenses are moderated by a fast return on
investment (ROI) that can be realized in as little as two years. This is made attainable by
greater efficiency and throughput speed. Robots can work in low-light environments and
do not require temperature control, which decreases utility costs.
Safety: Many manufacturing occupations involve physical risk and work in dangerous
environments. In unsafe circumstances, smart robots can take the place of people,
reducing occupational injuries and their negative impact on employees' wellbeing.
High-risk sectors, such as mining or fertilizer manufacturing, depend on robots to
prevent accidents and guarantee worker safety. The manufacturing industry's digital
transformation is a kind of industrial revolution, and like every revolution, it has
both benefits and disadvantages. The following are a few of the disadvantages of
automation:
 High initial expenses
 The necessity to reshape the labor market
 An additional burden on the educational system and social organizations
 Changes to the corporate culture at all stages.
Manufacturers must, without a doubt, embrace robots and automation in order to remain
competitive.
Advanced technologies such as robotics and artificial intelligence (AI) help to improve
digital health and advance medical treatment. Robotic devices have become essential
surgical assistants: they allow for minimally invasive procedures and increased precision,
reducing patient recovery time. The practice of telemedicine is made easier by AI-based
chatbots and consultation tools. Other intelligent systems can make accurate diagnoses
based on a patient's medical records and other data, such as readings from medical
wearables. Because of their capacity to make medical services more precise and
accessible, intelligent software and machines hold great promise in healthcare.
Farming is one of the areas that has been heavily affected by the use of robots and
artificial intelligence, which may come as a surprise. Agricultural robots are more
efficient than human workers at tasks such as harvesting and spraying crops for weeds
and pests. These are devices and drones that use computer vision, machine learning
models, or artificial intelligence algorithms to monitor crop and soil conditions, assess
the effect of weather, planting, crop spraying, health assessments, and other
environmental variables on plants, and forecast the results. Within the farming industry,
there are several distinct types of AI robots, as outlined in Figure 3.

Figure 3: Types of agricultural drones.

Transportation is an area where AI and machine learning are driving noteworthy
advances. According to Brookings Institution analysts Cameron Kerry and Jack Karsten,
around $80 billion was spent on autonomous vehicle technologies between August 2014
and June 2017. These investments cover both autonomous driving applications and the
underlying technologies that are essential to the industry [13].
Autonomous vehicles, such as automobiles, trucks, buses, and drone delivery systems,
make use of cutting-edge technology. Automated vehicle guidance and braking,
lane-changing systems, the use of cameras and sensors for collision avoidance, the use of
AI to analyze data in real time, and the use of high-performance computing and deep
learning systems to adapt to new circumstances through detailed maps are some of these
features [14].
Navigation and collision avoidance depend heavily on light detection and ranging
systems (LIDAR) and artificial intelligence (AI). Light and radar sensors are combined in
LIDAR systems. Mounted on the roofs of cars, they use 360-degree imagery from radar
and light beams to estimate the speed and distance of surrounding objects. These
measurements, together with sensors on the front, sides, and rear of the vehicle, provide
data that keeps fast-moving automobiles and trucks in their own lane, helps them avoid
other vehicles, applies brakes and steering as required, and does so instantly to avoid
accidents.
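A minimal sketch of the time-of-flight arithmetic such sensors rely on is given below; the pulse timings are invented for illustration, and a production LIDAR pipeline is far more involved:

```python
# Minimal sketch: estimating range and closing speed from LIDAR
# time-of-flight returns. Timings below are illustrative assumptions.
C = 299_792_458.0  # speed of light (m/s)

def tof_distance(round_trip_s: float) -> float:
    """Distance to target: the pulse travels out and back, so halve it."""
    return C * round_trip_s / 2.0

# Two pulses fired 0.1 s apart at the same object:
d1 = tof_distance(2.0e-7)        # ~30.0 m on the first return
d2 = tof_distance(1.8e-7)        # ~27.0 m on the second return
closing_speed = (d1 - d2) / 0.1  # ~30 m/s toward the sensor
print(d1, d2, closing_speed)
```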
Autonomous vehicles require high-performance computers, complex algorithms, and
deep learning systems to adapt to new circumstances, since these cameras and sensors
collect a huge amount of data that must be analyzed rapidly in order to avoid the car in
the next lane. This demonstrates that the software, not the car or truck itself, is the key
[15]. As weather, driving, or road conditions change, advanced software permits
automobiles to learn from the experiences of other vehicles on the road and adjust their
guidance systems accordingly [16].
Autonomous cars are attracting the attention of ride-sharing businesses. In terms of
customer service and labor productivity, they see benefits. Driverless cars are being
investigated by all of the main ride-sharing firms. The popularity of car-sharing and taxi
services, such as Uber and Lyft in the United States, Daimler's Mytaxi and Hailo in the
United Kingdom, and Didi Chuxing in China, demonstrates the benefits of this mode of
transportation. Uber has secured a deal with Volvo to buy 24,000 self-driving cars for its
ride-sharing service [17].
However, in March 2018, one of the ride-sharing company's autonomous vehicles in
Arizona struck and killed a person. Uber and a number of automakers immediately
suspended testing and began investigations into what went wrong and how the tragedy
happened [18]. Industry and customers alike need confidence that the technology is safe
and capable of delivering on its promises. This disaster could stifle AI progress in the
transportation industry unless convincing explanations are given.
AI is being used by city governments to improve service delivery. The Cincinnati Fire
Department, for example, is using data analytics to improve medical emergency
responses, according to Kevin Desouza, Rashmi Krishnamurthy, and Gregory Dawson.
The new analytics system prompts the dispatcher on how to respond to a medical
emergency call, including whether the patient can be treated on-site or should be
transported to the hospital, based on a variety of criteria including the kind of call,
location, weather, and similar calls [19].
Cincinnati authorities are using this technology to rank responses and find the best
strategies to manage emergencies, since the department receives 80,000 requests each
year. They see AI as a means to deal with huge amounts of data and figure out how to
respond to public demands in the most effective way possible. Rather than reacting to
issues as they emerge, officials are striving to be proactive in their approach to providing
urban services.
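The published accounts do not describe Cincinnati's actual model, but a toy sketch of this kind of dispatch triage, with an assumed feature encoding and invented training data, might look like this:

```python
# Hedged sketch of the kind of triage model a dispatch-analytics system
# might use. Features, labels, and data are invented for illustration;
# this is NOT Cincinnati's actual system.
from sklearn.tree import DecisionTreeClassifier

# Assumed features: [call_type_code, distance_to_hospital_km, temperature_C]
X = [
    [0, 1.2, 20], [0, 4.5, 35], [1, 0.8, 10],
    [1, 6.0, 30], [2, 2.2, 25], [2, 7.5, 5],
]
# Assumed label: 0 = treat on site, 1 = transport to hospital
y = [0, 1, 0, 1, 0, 1]

model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
# For a new call far from the hospital, the toy model likely suggests transport:
print(model.predict([[1, 5.0, 28]]))
```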
Cincinnati is not alone in this respect. Smart-city applications that use AI to improve
service delivery, environmental planning, resource management, energy use, and crime
prevention, among other things, are being adopted by a variety of urban areas. Fast
Company evaluated American cities for its smart cities list and recognized Seattle,
Boston, San Francisco, Washington, D.C., and New York City as the top adopters. For
example, Seattle has embraced sustainability and is using artificial intelligence to manage
energy use and resource management. Boston has created a "City Hall To Go" program to
guarantee that underprivileged areas receive the services they require. It has also installed
"traffic cameras and inductive loops, as well as acoustic sensors to detect gunshots." In
San Francisco, 203 buildings have been recognized as meeting LEED sustainability
criteria [20].
Metropolitan areas are leading the nation in AI deployment through these and other
strategies. According to a study from the National League of Cities, 66% of American
cities are investing in smart-city technologies, among them "smart meters for utilities,
intelligent traffic lights, e-governance apps, Wi-Fi kiosks, and radio frequency
identification sensors in pavement," according to the research [21].

CHAPTER 3
METHODOLOGY
AI has long been understood to improve human potential as well as efficiency, as seen in
the rapid growth of investment by numerous firms and organizations. Healthcare,
manufacturing, transportation, energy, banking, financial services, management
consulting, government administration, and marketing/advertising are among these
industries. Worldwide AI market revenue was almost 260 billion dollars in 2016, and it is
anticipated to surpass $3,060 billion by 2024 [22]. Exoskeletons, rehabilitation, surgical
robots, and personal care robots have all profited from this. The economic impact over
the following ten years is anticipated to be between $1.49 and $2.95 trillion. These
projections are based on benchmarks that take comparable technical advances, such as
the internet, mobile phones, and industrial robots, into account [23]. Private-sector and
venture capital investment is a gauge of the underlying technology's commercial
potential: a third of software and information technology investments went to AI in 2016,
while 1.16 billion US dollars were invested in AI start-up companies globally in 2015, a
ten-fold growth since 2009. Speech recognition, natural language processing, and
computer vision are all areas in which major technology companies are investing. Deep
learning exploited better hardware and sensor technologies to train artificial networks
with considerable volumes of data created from "big data" [26, 27], resulting in a major
jump in the performance of machine learning algorithms. Current state-of-the-art AI
enables the automation of a variety of activities, and new applications are on the horizon
that have the potential to transform the way businesses work. As a result, there is much
room for economic development, as seen in the fact that in 2014 and 2015, Google,
Microsoft, Apple, Amazon, IBM, Yahoo, Facebook, and Twitter bought at least 26 AI
start-ups and ventures for a total of $5 billion.
In 2014, Google paid more than $500 million for DeepMind, a London-based start-up
specializing in deep learning, setting a new record for corporate investment in AI
research. By academic benchmarks, DeepMind has produced over 140 publication and
conference contributions since 2012, including four publications in Nature. DeepMind's
achievements include devising AI technology that permits the creation of general-purpose
software agents that change their behavior based only on a cumulative reward. In many
ways, this reinforcement learning method beats human performance, as proved by the
defeat of the world Go champion, a watershed moment in AI development.
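The sketch below illustrates the core reinforcement-learning idea mentioned above, a tabular Q-learning update driven only by a scalar reward. It is a generic textbook sketch with invented states and parameters, not DeepMind's actual system:

```python
# Minimal tabular Q-learning: the agent adapts its behaviour using only
# a scalar reward signal. States, actions, and values are illustrative.
from collections import defaultdict
import random

Q = defaultdict(float)               # Q[(state, action)] -> expected return
alpha, gamma, epsilon = 0.1, 0.95, 0.1
ACTIONS = ["left", "right"]

def choose_action(state):
    if random.random() < epsilon:    # explore occasionally
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])  # otherwise exploit

def update(state, action, reward, next_state):
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    # Move the estimate toward the reward plus discounted future value.
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

update("s0", "right", 1.0, "s1")
print(Q[("s0", "right")])            # 0.1 after one rewarded step
print(choose_action("s0"))           # now prefers "right" in state s0
```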

Figure 4: A conservative estimate of venture capital investment in AI

Watson is a supercomputer platform built by IBM that can do text mining and extract new
analytics from huge amounts of unstructured data. In 2011, IBM Watson defeated two of
the best players on 'Jeopardy!', a famous quiz show in which contestants must formulate
questions based on given answers. Although computer systems can effectively retrieve
information, understanding natural language remains a challenge. This development has
had a significant impact on online search performance and AI systems' overall capacity to
interact with people. In 2015, IBM acquired AlchemyAPI in order to integrate its text and
image analysis capabilities into IBM Watson's cognitive computing platform. The
technology has already been put to work processing legal papers and helping with legal
duties. These abilities, according to specialists, have the potential to disrupt current
healthcare systems and medical research.
The creation of systems that can reliably interact with people is the focus of leading AI
companies' research. Real-time voice recognition and translation capabilities allow for
more natural interaction. Robo-advisory applications are at the forefront of the AI sector,
with a worldwide value of 255 billion dollars anticipated by 2020 [23]. Virtual assistants
are already available from a number of large organizations.

Apple has Siri, Amazon has Alexa, Microsoft has Cortana, and Google has the Google
Assistant, for example. Emotient Inc., a start-up that uses artificial intelligence to
understand people's feelings by analyzing facial expressions, was bought by Apple Inc. in
2016. WaveNet is a generative model built by DeepMind that imitates human speech; it
sounds more natural than the best available text-to-speech technologies, according to the
company's site. Facebook is also studying machine-human interaction as a prerequisite
for generalized AI.

Figure 5: Predicted economic effect of AI worldwide estimated based on the GDP of mature economies and benchmark
data from broadband Internet economic growth [24, 25].

A notable approach to avoiding the perils of monopolizing powerful AI recently involved
the funding of OpenAI, a non-profit organization. To achieve state-of-the-art
performance, OpenAI has revamped evolutionary algorithms that can work in tandem
with deep neural networks. It is seen as a competitor to DeepMind, since it provides
open-source machine learning libraries comparable to TensorFlow, a deep learning
framework released by Google. However, the main distinction between OpenAI's
technology and those of other private tech firms is that the intellectual property produced
is open to everyone. Moreover, there is a heated discussion over whether or not we are in
the middle of an AI bubble, which includes the paradox that, despite an explosion of
technical progress and development, productivity growth in the United States has slowed
over the past decade. It is impossible to tell whether this is due to a statistical flaw or
because current discoveries are not sufficiently transformational. This decline can also be
ascribed to the lack of consistent policy frameworks and safety guidelines that would
permit AI to be used in large-scale ventures.
Table 1: Major companies in AI

CHAPTER 4
RESULTS AND DISCUSSION
4.1: How can AI be dangerous?
Most scientists think that a highly intelligent AI is unlikely to experience human emotions
like love or hatred, and that there is no reason to believe that AI will become purposefully
good or bad. Instead, scientists believe that two scenarios are most plausible when it
comes to AI becoming a risk:
4.2 The AI has been trained to perform a heinous act
Artificial intelligence systems that are trained to kill are known as autonomous weapons.
These weapons could cause mass casualties if they fall into the wrong hands.
Furthermore, an AI arms race might unintentionally lead to an AI war, likewise with mass
casualties.
To prevent being stopped by the adversary, these weapons would be engineered to be
exceedingly difficult to simply "switch off," allowing humans to lose control in a case
like this. This risk exists even with narrow AI, but it becomes more prevalent as AI
intelligence and autonomy develop.
When we fail to properly align the AI's aims with ours, which is exceedingly difficult, the
AI may find a harmful way of attaining its purpose. If you order an obedient intelligent
car to drive you to the airport as quickly as possible, it may get you there pursued by
helicopters and covered in vomit, doing exactly what you asked for, not what you wanted.
If a superintelligent system is charged with a large-scale geoengineering project, it may
wreak havoc on our environment as a side effect, and regard human attempts to halt it as
a threat that must be met.
As these examples show, the main concern regarding advanced AI is competence rather
than malice. A super-intelligent AI will excel at accomplishing its objectives, and if those
goals are not the same as our own, we will have a problem. You are probably not a
wicked ant-hater who deliberately treads on ants, but if you are in charge of a
hydroelectric green energy project and an anthill lies in the area that will be flooded, the
ants will suffer. One of the main objectives of AI safety research is to never put people in
the same circumstance as those ants.

4.3: Why the recent interest in AI safety?
Many top AI experts have joined Stephen Hawking, Elon Musk, Steve Wozniak, Bill
Gates, and other significant names in science and technology in expressing worry about
the hazards presented by AI in the media and via open letters. Why is this topic now in
the news?
The idea that the search for strong AI would ultimately succeed was long thought to be
science fiction, decades or even centuries in the future. However, owing to recent
breakthroughs, several AI milestones that were once thought to be decades away have
already been reached, prompting many researchers to consider the possibility of
superintelligence in our lifetime. While some experts believe human-level AI is still
centuries away, the majority of AI researchers at the 2015 Puerto Rico Conference
predicted it would happen by 2060. Because the required safety studies might take
decades to complete, it is sensible to start promptly.
We have no way of knowing how AI will act, since it has the potential to become more
intelligent than any human. We cannot take past technical advances as a starting point,
since we have never created something that can outsmart us, intentionally or
inadvertently. Our own evolution may be the best illustration of what we may confront.
People rule the world today not because they are the strongest, fastest, or biggest, but
because they are the most intelligent. Will we be able to retain control if we are no longer
the smartest?
FLI believes that our civilization will thrive as long as we win the contest between
technology's increasing strength and our ability to handle it wisely. In the case of AI
technology, FLI believes that the best approach to succeed is to encourage AI safety
research rather than stifle it. Experts have called for significant research on the impact of
AI on our society, not only in the technological but also in the legal, ethical, and
economic sectors, owing to the exponential growth of interest in AI. This response also
includes the possibility that self-improving super artificial intelligence may one day
surpass human cognitive abilities. In AI circles, this future possibility is referred to as the
"AI singularity" [28]. It is usually characterized as a machine's capacity to build better
machines on its own. Many experts have questioned this projected scenario and expressed
their doubts. Today's AI researchers are concentrating their efforts on building systems
that excel at a limited set of activities. This emphasis contrasts with the goal of creating a
super generic AI system capable of mimicking all of the cognitive capacities associated
with human intelligence, including self-awareness and emotional intelligence. Additional
societal issues have been highlighted alongside the debate over AI development and
human supremacy as the world's most intelligent species. For example, the AI100 (One
Hundred Year Study on Artificial Intelligence), a Stanford University-led group,
identified 18 key AI topics [29]. Although neither complete nor conclusive, these topics
outline the breadth of issues that need to be investigated in order to understand the
possible impact of AI, and they underline that there are a number of issues that must be
addressed. Many more studies have been conducted, and they all raise similar worries
about the widespread deployment of AI technology.

4.4: Topics Covered by AI100


4.4.1 Technical patterns and surprises:
This section tries to anticipate potential AI technology advancements and competences in
the coming years. AI pattern and effect monitors should be established, assisting in the
planning of AI implementation in specific areas and the preparation of appropriate
regulations to ensure its smooth implementation.
4.4.2 Key AI opportunities:
How advances in AI may help improve the effectiveness of societal sectors such as
healthcare, education, administration, and governance, taking into account not just the
economic but also the social benefits and effects.
4.4.3 Delays in converting AI breakthroughs into real-world value:
The speed at which AI is being translated into real-world applications is presently driven
by prospective economic benefits [30]. Even if their economic exploitation is not yet
guaranteed, it is important to take steps to encourage the prompt translation of those
potential AI applications that can positively impact or solve a critical need in our society,
such as those that can save lives or greatly improve the organization of social services.
4.4.4 Privacy and machine intelligence:
Personal data and privacy are key concerns, and it is critical to examine and plan for the
regulatory, legal, and policy frameworks that will govern the sharing of personal data in
the development of AI systems.
Democracy and freedom: In addition to privacy, ethical concerns about the covert use of
AI for unethical purposes must be addressed. AI should not be used at the price of
restricting or affecting people's democracy and freedom.
Law: This takes into account the ramifications of applicable laws and regulations. First,
determine which parts of AI deserve legal review and what steps should be taken to
guarantee that AI services are lawfully enforced. It should also provide frameworks and
instructions for adhering to the laws and regulations that have been established.
Ethics: By the time AI is implemented in practical applications, ethical issues about
how it interacts with the outside world have arisen. What kinds of AI applications are
unethical? What method should be used to make this information public?
Economics: The economic consequences of AI on jobs should be tracked and
anticipated so that regulations may be put in place to steer our future generations into jobs
that will not be quickly automated. The use of sophisticated AI in financial markets has
the potential to generate volatility, thus it's important to analyse the impact AI systems
might have on financial markets.
AI & warfare: For more than a decade, AI has been used in military applications. For
military objectives, robot snipers and turrets have been created [31]. As intelligent
weapons become more autonomous, new conventions and international agreements are
needed to specify a set of secure boundaries for the use of AI in armament and combat.
Criminal applications of AI: As AI is increasingly integrated into malware, the
risks of obtaining personal information from infected machines are increasing. Computer
viruses and worms may employ very complex AI algorithms to evade detection, making
malware more difficult to detect [32-33]. Another example is the usage of drones and the
possibility for them to fall into the hands of terrorists, with disastrous consequences.
Collaboration with machines: Computers, machines, and people must collaborate, and it
is important to consider in which scenarios collaboration is necessary and how to achieve
it securely. Accidents involving robots working alongside humans have already occurred
[34], and robotic and autonomous system development should prioritize not just
increased job accuracy but also the ability to comprehend the surroundings and human
intent.
AI and human cognition: Artificial intelligence has the potential to improve human
cognitive skills. Sensory informatics and human-computer interfaces are two study areas
that are important to this goal. They are also utilised in surgery [35] and air traffic control
[36], in addition to rehabilitation and assisted living. As cortical implants become more
common for controlling prosthetics and our thinking and reasoning become more reliant
on machines, the health, safety, and ethical implications must be considered.
Safety and Autonomy: In order to ensure the safe functioning of intelligent, autonomous
systems, formal verification methods should be created. Verification can focus on the
reasoning process, validating whether an intelligent system's knowledge base is correct
[37], as well as ensuring that the formulation of intelligent behavior stays within safe
bounds [38].
Loss of control of AI systems: The possibility of AI becoming self-contained is a
serious issue. Studies on this topic should be encouraged, from both a technological
viewpoint and in terms of the appropriate framework for managing AI development
responsibly.
People's psychology and smart machines: More study should be done to gain a
more complete understanding of people's attitudes and concerns regarding the widespread
use of smart machines in society. Furthermore, knowing customers’ preferences is critical
for increasing the acceptance of intelligent systems [39-40].
Communication, comprehension, and outreach: To adopt AI technology in
our culture, communication and educational techniques must be created. These methods
must be written in a way that non-experts and the general public can comprehend and use.
Neuroscience and artificial intelligence: Neuroscience and artificial intelligence
may coexist. Neuroscience is critical for directing AI research, and new advancements in
high-performance computing have opened up new opportunities to examine the brain using
computer simulation tools to test novel ideas [41].
AI and philosophy of mind: When AI achieves a degree of consciousness and self-
awareness, it will be necessary to comprehend the inner world of machine psychology
and consciousness subjectivity.

4.5 Robotics and AI


Robotics is developing more complex sensorimotor capabilities that give robots the
ability to adapt to their ever-changing environment, building on advances in
mechatronics, electrical engineering, and computing. Until recently, the industrial
production system was built around the machine, which was tuned to its environment and
permitted only minor changes; robots may now be more easily integrated into an existing
setting. Perceiving, planning, and execution are the three aspects of a robot's autonomy in
a given setting (controlling, navigating, collaborating). The fundamental objective of
combining AI with robotics is to improve the robot's degree of autonomy through
learning. This degree of intelligence is characterized as the capacity to predict the future,
whether in task planning or in engaging (by controlling or navigating) with the world.
Many attempts have been made to build intelligent robots. Robots that can execute
particular autonomous activities, such as driving a vehicle [42], flying in natural and
man-made settings [43], swimming [44], transporting boxes and materials across various
terrains [45], and picking things up [46] and putting them down [47], do exist today.
The task of perception is another major use of AI in robots. Robots can perceive their
environment using built-in sensors or computer vision. Computer systems have improved
the quality of both sensing and vision in the past decade. Perception is important not just
for planning, but also for giving the robot a form of self-awareness. This makes it
possible for the robot to interact with other things in the same environment. Social
robotics is the name given to this field. Human-robot interaction (HRI) and cognitive
robotics are two large themes it covers.
The objective of HRI is to improve robots' perception of humans in areas such as
activities [48], emotions [49], nonverbal communication [50], and navigating an
environment shared with people [51]. The field of cognitive robotics is concerned with
giving robots the capacity to learn and acquire knowledge on their own through advanced
levels of perception based on imitation and experience. Its objective is to replicate the
human cognitive system, which controls the process of learning and comprehension
through experience and sensing [52]. There are additional models in cognitive robotics
that combine incentive and curiosity to enhance the quality and speed with which
knowledge is acquired through learning [53-54].
AI has continued to break milestones and overcome numerous obstacles that seemed
insurmountable only a decade ago. In many new fields, the combination of these
developments will continue to change our understanding of robotic intelligence. Figure 6
depicts a history of robotics and AI milestones.

2017 Go is solved: A team from Google DeepMind created an algorithm named AlphaGo
that beat top players of the ancient far-eastern board game Go.
2016 Nanorobots: A team from Polytechnique Montréal created a nanotransporter-bot
that can administer drugs without damaging surrounding organs and tissues.
2016 Microfluidic robot: The first autonomous, entirely soft robot, powered by a
chemical reaction and a microfluidic logic, was developed by a research team.
2014 Robot exoskeleton: A completely paralysed man was able to walk again using a
robotic exoskeleton designed by Ekso Bionics.
2014 Pepper: Japanese company Softbank presented the first robot, so-named Pepper, to
be used for customer service. The robot has an integrated emotion engine to interact with
people.
2011 IBM Watson: IBM's Watson computer beat human champions on the game show
Jeopardy! by analysing natural language and finding answers to questions more rapidly
and accurately than its human rivals.
2010 iCub: A 1-meter-high humanoid robot for research in human cognition at IIT, Italy.
The robot can express emotions and is equipped with tactile sensors to interact with the
environment.
2010 Robonaut 2: NASA revealed a humanoid robot with a wide range of sensors that
can replace human astronauts.
2010 3D printing: The first 3D printers were made commercially available.
2007 Checkers is solved: A program from the University of Alberta named Chinook was
able to solve the game of checkers and beat humans at several competitions.
2005 Robot BigDog: Boston Dynamics created the first robot that could carry 150 kg of
equipment. The robot was able to traverse rough terrains using its four legs.
2005 Autonomous vehicle challenge: A team from Stanford University won the challenge
organized by DARPA for driving autonomously off-road across a 175-mile-long desert
terrain without human intervention.
2004 Mars robots: Robots landed on Mars. Although they were only supposed to work
for 90 days, they extended their lifetime by several years and remain operative today.
2002 DARPA's Centibots: The first collaborative swarm of mobile robots that could
survey an area and build a map in real time without human supervision.
2002 Roomba: The first household robot for cleaning. It was able to detect and avoid
obstacles as well as navigate within a house without using maps.
2000 DaVinci Surgical System: A surgical robot for minimally invasive (keyhole)
surgery was approved by the FDA. The robot is controlled by a surgeon from a master
console.

Figure 6: A timeline of robotics and AI
4.6 Programming Languages of AI
For each language, its first release date, paradigm, main influences, and AI resources:

C++ (1983; procedural; influenced by C and Algol 68): 1) Relatively quick execution
times. 2) Some compatible AI libraries, such as Alchemy for Markov logic and mlpack
for generic machine learning.

C# (2000; multi-paradigm, functional and procedural; influenced by C++, Java, and
Haskell): 1) Easy prototyping and a well-thought-out environment. 2) The language most
often used for AI in games, since it works well with major game engines such as Unity.

Clojure (2007; functional; influenced by Lisp, Erlang, and Prolog): 1) Simple concepts
and a cloud architecture based on the JVM. 2) Libraries for the creation of behaviour
trees and rapid interactive development (alter-ego).

Java (1995; procedural, concurrent; influenced by C++ and Ada 83): 1) The JVM allows
for easy maintenance, portability, and openness. 2) A plethora of AI libraries and tools,
including Tweety and machine learning suites (DeepLearning4J, Weka, Mallet, etc.).

Matlab (1993; multi-paradigm; influenced by APL): 1) A well-integrated environment;
a matrix- and linear-algebra-oriented language. 2) A collection of machine learning,
statistics, and signal processing toolboxes and tools.

Python (1991; procedural; influenced by C++, Java, Haskell, and Perl): 1) A helpful
dynamic typing discipline that gives the language a lot of flexibility and versatility,
prioritizing fast development. 2) A wide range of platforms and utilities for AI, machine
learning, deep learning, scientific computing, natural language processing, and other
topics.
Figure 6B: Programming Languages of AI with AI resources.

Since the late 1950s, programming languages have played a significant part in the
development of AI, and various groups have worked on important AI research projects,
such as automated demonstration programs and game programs (chess, checkers) [54].
Researchers found that one of the distinctive needs of AI at this time was the capacity to
handle symbols and lists of symbols instead of numbers or strings of letters. Since the
languages of the period lacked such features, an MIT researcher named John McCarthy
devised the specification of an ad-hoc logic programming language called Lisp (LISt
Processing language) between 1956 and 1958. Since then, many hundreds of "Lisp
dialects" have been created (Scheme, Common Lisp, Clojure); writing a Lisp interpreter
is not a difficult task for a Lisp programmer (it takes only a few thousand instructions),
compared to creating a compiler for a conventional language (which needs several tens of
thousands of instructions). Lisp was extremely popular in the artificial intelligence sector
until the 1990s because of its expressiveness and flexibility.
Another critical breakthrough in the history of AI was the development of a language for
expressing logic rules and axioms. Around 1972, Alain Colmerauer and Philippe Roussel
devised Prolog, a new language (PROgramming in LOGic). Their objective was to create
a programming language that permits users to specify the expected logical rules of a
solution, which the compiler then automatically converts into a sequence of instructions.
Prolog is a programming language used in artificial intelligence and natural language
processing. Its syntax and semantics are simple enough that non-programmers can
understand them. One of the goals was to provide a linguistics tool that was also
compatible with computer science.
Machine languages such as C/C++ and Fortran gained prominence in the 1990s,
displacing Lisp and Prolog. On these platforms, more focus was placed on developing
scientific computation capabilities and libraries, which were used for intensive data
processing jobs or artificial intelligence in early robotics. Sun Microsystems began a
project in the mid-1990s to build a language that addressed security vulnerabilities,
distributed programming, and multi-threading in C++. They also wanted a platform that
could be ported to a variety of devices and platforms. They presented Java in 1995, which
went much further than C++ in terms of object orientation. One of the most critical
innovations of Java was the Java Virtual Machine (JVM), which permitted the same code
to execute on any device, independent of its internal technology, without the need to
pre-compile for each platform. This provided additional AI benefits when deployed in
devices such as cloud servers and embedded computers. Another outstanding aspect of
Java was that it was one of the first platforms to incorporate internet-specific tools,
permitting users to execute applications in the form of Java applets and JavaScript (i.e.,
self-executing programs) without having to install anything. This had a huge impact on
the field of AI, as well as laying the groundwork for Web 2.0/3.0 and the internet of
things (IoT).

However, the development of AI using purely procedural languages was expensive,
time-consuming and error-prone. This turned attention to other multi-paradigm languages
that could combine features from functional and procedural object-oriented languages.
Python, although first released in 1991, started to gain popularity as an alternative to
C/C++ with Python 2.2 by 2001. The Python concept was to have a language that could
be as efficient as C/C++ but also expressive and practical enough for "scripts", like shell
scripting. It was in 2008, with the release of Python 3.0, which resolved several initial
flaws, that the language began to be considered a genuine contender to C++, Java and
scripting languages such as Perl.
Since 2008, the Python community has been striving to catch up with scientific
computing languages such as Matlab and R. Python is currently widely used for AI
research because of its adaptability. Despite the fact that Python has certain
functional-programming advantages, its run-time speeds are still considerably below
those of functional languages such as Lisp or Haskell, and much further behind C/C++.
Moreover, it is inefficient when dealing with huge amounts of memory and highly
concurrent systems.
Since 2010, IT organizations have sought alternatives by creating hybrid languages that
combine the best of all paradigms without sacrificing speed, capacity, or concurrency,
driven largely by the need to translate AI into commercial products (that may be used by
thousands or millions of clients in real time). In recent years, new languages such as
Scala and Go, as well as Erlang and Clojure, have been used mostly on the server side for
applications with high concurrency and parallelization. Facebook's Erlang deployment
and Google's Go implementation are well-known cases. Julia and Lua are two new
scientific computing languages that have been developed; the highly expensive
computations they target are usually required for heavy numerical operations or pattern
matching, which constitute a fundamental part of running an AI system.
Although functional programming is popular in academia, it has been used in only a few
industrial settings, and only during the period when "expert systems" were at their peak,
mostly in the 1980s. For many years after their demise, functional programming was seen
as a failed relic of the expert-systems era. However, as multiprocessors and parallel
computing become more common, more programmers are turning to functional
programming to get the most out of their multicore processors. In the future, we will see
new languages that bring simplifications to existing functional languages such as Haskell
and Erlang and make this programming paradigm more accessible. In addition, the advent
of the internet of things (IoT) has drawn attention to the programming of embedded
systems; thus, efficiency, safety and performance are once again topics of discussion.
New languages that can replace C/C++ while incorporating ideas from functional
programming (e.g. Elixir) will become increasingly popular. Also, new languages that
combine simplifications with a set of capabilities from modern imperative programming,
while maintaining performance comparable to C/C++ (e.g. Rust), will be another future
development.
“Programming languages have played a significant influence in the growth of artificial
intelligence. Hybrid languages are evolving as a result of the need to translate AI into
commercial goods. They incorporate the best of all paradigms without sacrificing speed,
capacity, or concurrency.”

4.7 Machine Vision


Automated inspection, scene identification, and robot navigation are all possible using
machine vision, which combines image acquisition and processing with machine
learning. The primary sub-domains of machine vision are scene reconstruction, object
identification, and recognition.

4.8 Impact of Machine Vision


Image-capturing systems and computer vision algorithms are used in machine vision to
provide automated inspection and robot guidance. Machine vision systems are not
restricted to 2D visible light, while being inspired by the human vision system, which is
based on the extraction of abstract information from two-dimensional images.
Single-beam lasers, 3D high-definition Light Detection And Ranging (LiDAR) systems
(also known as laser scanning), 2D or 3D sonar sensors, and one or more 2D camera
systems are examples of optical sensors. Despite this, the majority of machine vision
applications rely on 2D image-based capture devices and computer vision algorithms that
approximate human visual perception. People see the world in three dimensions, and
their capacity to navigate and complete activities depends on reconstructing
three-dimensional information from two-dimensional images in order to place themselves
in relation to the things around them. Following that, this information is integrated with
existing knowledge in order to perceive and recognize things in the environment and to
comprehend how they interact. The primary sub-domains of computer vision are scene
reconstruction, object identification, and recognition.
The most prevalent ways of reconstructing 3D information, regardless of the image
sensors used, are usually based on time-of-flight calculations, multi-view geometry,
and/or photometric stereo. In laser scanners, the former is used to calculate the distance
between the light source and the object based on the time it takes for the light to reach the
object and return. Since they are limited by the capacity to measure time, time-of-flight
methods are used to estimate distances up to kilometers and are accurate to the millimeter
scale. Multi-view geometry challenges, on the other hand, include "structure" problems,
"stereo correspondence" problems, and "motion" problems. The recovery of the 3D
'structure' involves estimating the 3D coordinates of a point based on triangulation, given
two or more 2D projections of the same 3D point in two or more images. The challenge
of identifying the image point that matches a point from another 2D viewpoint is known
as stereo correspondence. Finally, the problem of recovering the camera coordinates from
a collection of matching points in two or more image views is referred to as 'motion'.
Triangulation-based 3D laser scanners can achieve micrometre accuracy, but their range
is limited to a few meters.
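A minimal sketch of the triangulation relation for a rectified stereo pair is shown below; the focal length, baseline, and pixel coordinates are assumed values for illustration:

```python
# Depth from disparity for a rectified stereo pair, using the classic
# pinhole relation Z = f * B / d. All parameter values are assumptions.
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a point given focal length (pixels), camera baseline (m),
    and the horizontal disparity between the two views (pixels)."""
    return focal_px * baseline_m / disparity_px

# A point seen at x = 412 px in the left image and x = 392 px in the right:
disparity = 412 - 392                        # 20 px
print(stereo_depth(700.0, 0.12, disparity))  # 4.2 m from the cameras
```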
The robust extraction of related salient points/features across images, known as interest
point identification, is a prerequisite for stereo vision. Photometric changes, such as
changes in lighting conditions, should not affect these features, and neither should
geometric changes. Several methods have been presented by researchers over two
decades. The Scale-Invariant Feature Transform (SIFT) extracts features that are scale-,
rotation-, and translation-invariant, as well as robust to lighting changes and mild
viewpoint adjustments.
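As a brief illustration, SIFT keypoints can be extracted with OpenCV as sketched below; the image path is a placeholder, and note that SIFT ships with the main OpenCV package only since version 4.4 (older versions need opencv-contrib-python):

```python
# Sketch: detecting scale- and rotation-invariant SIFT keypoints with
# OpenCV. "scene.jpg" is a placeholder path, not a file from this thesis.
import cv2

img = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)
# Each keypoint gets a 128-dimensional descriptor used for matching.
print(f"{len(keypoints)} keypoints, descriptor shape {descriptors.shape}")
```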
Since there are thousands of objects that might belong to an arbitrary number of
categories at the same time, representing and recognizing object categories has proved far
more difficult to generalize and solve than 3D reconstruction. Several concepts in object
detection are connected to Gestalt psychology, a school of thought that deals with visual
perception; the idea centers on grouping things together based on proximity, similarity,
symmetry, common fate, continuity, and other factors. From the 1960s until the early
1990s, geometric shapes were the focus of object recognition research. This was a
bottom-up approach, in which a restricted number of essential 3D objects are combined
in different combinations to produce complex things. In the 1990s, researchers looked at
appearance-based models, which were based on learning multiple views of the object's
appearance, parameterized by pose and lighting [55]. Occlusion, clutter, and deformation
are all issues with these approaches. Sliding-window methods were created in the
mid-to-late 1990s, which test whether an object is present for each position of a sliding
window over an image [56].
The main challenges were how to design features that appropriately represent the
appearance of the object and how to efficiently search a huge number of positions and
scales. Local-feature approaches were also developed, aiming at features invariant to
image scaling, geometric changes and illumination changes [57].
In the early 2000s, 'parts-and-shape' models in conjunction with 'bags of features' were
suggested. Parts-and-shape models represent complex objects using combinations of
multi-scaled deformable parts [58]. Bags-of-features methods, on the other hand,
represent visual features as words and relate object recognition and image classification
to the expressive control of natural language processing approaches [59].
In the field of object recognition, machine learning enabled the shift from addressing
problems only through mathematical modelling to learning algorithms based on
real-world data and statistical modelling. The advent of deep neural networks, plus the
availability of huge annotated image datasets such as ImageNet, led to a major
breakthrough in object identification and classification in 2012. Deep learning has the
advantage of embedding both feature extraction and image classification into the
structure of a neural network, as opposed to conventional object identification systems,
which depend on feature extraction followed by feature-matching approaches.
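A minimal sketch of this idea follows: a tiny convolutional network whose convolutional layers act as the learned feature extractor and whose final linear layer acts as the classifier, trained end to end. The architecture is an arbitrary toy example, not a model from the literature:

```python
# Toy CNN: feature extraction (conv layers) and classification (linear
# layer) are both learned inside one network. Shapes are assumptions.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(                 # learned feature extractor
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                   # global average pooling
        )
        self.classifier = nn.Linear(32, num_classes)   # learned classifier

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.classifier(x)

logits = TinyCNN()(torch.randn(1, 3, 32, 32))  # one fake 32x32 RGB image
print(logits.shape)                            # torch.Size([1, 10])
```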
Deep neural networks' superior performance drove a rise in image categorization
accuracy from 72 percent in 2010 to 96 percent in 2015, beating human accuracy and
having a major impact on real-world applications [60]. Based on Hinton's deep neural
network design, both Google and Baidu updated their image search capabilities. Face
recognition has been added to a number of mobile devices, and Apple has even built pet
recognition software. These models' object identification and image categorization
accuracy surpasses that of humans, causing waves of technological change across the
sector.

Figure 7: A timeline of imaging devices/hardware and computer vision concepts

4.9 Ethical and Legal Questions of AI


4.9.1 Ethical issues in Artificial intelligence
4.9.2 Threat to Privacy:
Data is AI's "fuel," and careful consideration must be paid to the data source and whether
or not privacy has been abused. Defensive and preventative mechanisms must be built
against such threats. Although the answers to this issue may have nothing to do with AI,
it is the job of AI administrators to guarantee that data privacy is maintained. Moreover,
AI applications that may imperil a person's right to privacy should be subject to specific
laws that protect the person.

4.9.3: Threats to security and weaponization of AI


With the rise of security threats such as terrorism and regional wars, we are in the middle
of a worldwide arms race that has created demand for AI-powered weaponry such as
autonomous drones and missiles, as well as virtual bots and malicious software for digital
espionage. This might result in conflict with never-before-seen levels of escalation. The
threat with artificial intelligence is that we may lose control of it. As a result, associations
and non-governmental organizations (NGOs) have begun to raise awareness of the use of
military robots, with the objective of restricting and perhaps banning their use.

Figure 8: Countries using, owning or developing armed drones

The United States military has recently issued a draft paper titled "Robotic and
Autonomous Systems Plan," which outlines its robotics and autonomous systems
strategy. The US military's employment of robots and autonomous systems pursues the
following five goals:
1) Improve knowledge capabilities in the theatres of operations.
2) Reduce the load carried by the soldier.
3) Boost logistics capabilities.
4) Make mobility and manoeuvring easier.
5) Strengthen the army's defences.
The five goals presented by the US military are unclear in terms of their scope and
restrictions, despite the fact that offensive capability is not stated.

4.9.4 Economics and Employment Issues


Robots currently account for 8% of jobs, but this number is anticipated to grow to 26% by
2020. Robots will become more self-aware and capable of collaborating, executing, and
making more complicated judgments. Robots now have a formidable database thanks to
'big data', which permits them to experiment and learn which algorithms work best.
Labor may now be replaced by capital (machinery) due to the increased pace of
technological advance. However, there is a negative association between the probability
of a profession's automation and its average annual salary, showing that short-term
inequality may grow [61]. The issue is not so much the number of jobs lost to automation
as it is creating enough jobs to compensate for any losses that will occur. During earlier
industrial revolutions, emerging sectors recruited more workers than the number who lost
their jobs in firms that collapsed because they could not keep up with the pace of new
technology [62]. An essential observation about this transformation is that not just
manual crafts, but also occupations involving intermediary activities, such as secretarial,
administrative, and other office work, are likely to be automated. To address this issue,
legislative structures must be put in place to ensure that the advantages of automation are
spread fairly rather than accruing simply to the employer, ensuring the continuation of
education, health, and retirement.

4.9.5 Human Bias in Artificial Intelligence


In a paper published in Science magazine, researchers showed how machine learning
technology replicates human bias, for better or worse. Words associated with the lexical
field of flowers are connected to feelings of happiness and joy (freedom, love, peace,
delight, heaven, etc.). Insect-related words, on the other hand, are close to negative
expressions (death, contempt, ugliness, sickness, torment, etc.). This reflects the links that
people have made themselves. AI biases have already been highlighted in other
applications. One of the most notable was probably Tay, a Microsoft AI launched in
2016, which was supposed to embody a teenager on Twitter, able to chat with web users
and improve through conversations. However, in just a few hours, the program, learning
from its exchanges with people, started to produce racist and anti-Semitic comments,
before being suspended by Microsoft (see Sidebar – Failures of AI). The issue is not only
at the language level. When an AI program became a jury member in a beauty contest in
September 2016, it eliminated most black candidates, as the data on which it had been
trained to recognize "beauty" did not contain enough dark-skinned individuals.

4.10 Legal Issues and Questions of AI


Legal implications: initially, the legal framework applying to robots and AI would have
the objective of reducing the risks related to their operation, as well as the harm that may
result from unexpected outcomes. Although AI systems cannot have constitutional rights,
since they are the property of people, they can have some property rights to guarantee
their potential liability for any harm committed. As a result, they may be granted some
sort of legal protection. In this circumstance, robots and AI would be held responsible in
two ways:
1) Their activities are predictable;
2) Any negative effects of their conduct are subject to civil liability (although there is
also a fiscal responsibility as a consequence of non-compliance with obligations of this
type).
A Recommendations Report (2015/2103, dated May 31, 2016) on robotics civil law,
which establishes standards for regulating civil liability stemming from robot use, is
presently being prepared by the European Parliament [63]. It refers to the contractual and
non-contractual obligations that may arise as a result of a robot's activities, and it
proposes that this liability be characterized as objective, as well as establishing the need
for compulsory civil-liability insurance for any damages resulting from the ownership
and use of such robots. That is to say, in the case of an accident, the report proposes a
compulsory insurance scheme, comparable to that used for vehicles.
Producers will be obliged under contract to pay potential victims and to set up a fund to
insure against robot disasters. The effect of "short-circuits," which protect humans from
accidents or violence, is also considered in the paper. This does not mean that human
responsibility will be totally eliminated; rather, a sliding scale may be established, with
the designer bearing greater responsibility as the robot's sophistication increases.
Autonomous machines (driverless cars, drones, and medical devices) will soon be subject
to legal liability, thanks to recent government policies. A name, a first name, and a
registration number will be required for autonomous devices. Identification will be
assisted by some type of civil status in the case of an accident.

4.11: Civil Rights for AI and Robots


Robots can be accorded the status of electronic persons, with special rights and
obligations, to the degree that they are autonomous. There are also demands to
accommodate the coexistence of people and robots. Domestic robots, for example, are
considered "intimate devices": they inspire empathy in the people they come into contact
with on a regular basis. This sort of association could be enshrined in law if a legal
framework were in place. It could be compared to domestic animals, whose legal status
was defined in January 2015. There is a difference, however, in that robots, unlike
animals, are not physiologically alive and lack sentience. They do, however, have an
intelligence that can be superior to that of an animal. The concept of giving AI systems
and robots rights comparable to those granted to domestic animals requires an
understanding of how computers could handle their own sentiments and feelings, should
they be endowed with emotional intelligence in the future. In terms of the labor market,
the employment of robots will result in the elimination of some tasks that have previously
been performed by people. The EU parliament suggested that robots and autonomous
systems pay social security contributions and taxes as if they were humans, to mitigate
the societal effect of the unemployment created by robots and autonomous systems. They
produce an economic gain by creating surplus value through their work. This is one of
the most contentious aspects of the EU Robotics Report's recommendations.

4.12 Limitations and Opportunities of AI


Although AI has the potential to change the world, there are still numerous obstacles to
overcome before it can be widely used. Moreover, it is not without flaws in real use (see
Fig. 6, Example Failures of AI). Deep learning has recently sparked a surge of interest,
with exciting breakthroughs that will shape AI's future. However, deep learning is just
one of the many methods created by the AI community over the years. It is essential to
consider AI's present state of development as well as its inherent limits.

4.13 Intelligence as a multi-component model:


To be called "intelligent," a machine must meet a number of requirements, including the capacity to reason, build models, comprehend the real world, and anticipate what will happen next. Perception, common sense, planning, association, language, and reasoning are high-level components of the notion of "intelligence."

4.14 Large datasets and hard generalization:


Machines are currently capable of recognizing images and transcribing speech after intensive training on huge datasets. These abilities are achieved by means of statistical estimates based on the data provided. When the system is forced to deal with novel situations for which there is little training data, the model frequently fails. We know that humans can learn from small amounts of data because we can abstract concepts and rules and apply them to a wide range of situations. This level of abstraction and generalizability is still missing in today's AI systems; a minimal sketch of this failure mode follows.
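To make this concrete, the minimal sketch below (in Python, assuming NumPy and scikit-learn are available; the two-cluster data are hypothetical and chosen here purely for illustration) trains a simple classifier on one data distribution and then evaluates it on a shifted one. The statistical estimates it learned hold only near the training data, so accuracy collapses toward chance under the shift.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Training data: two Gaussian clusters (class 0 near -2, class 1 near +2).
X_train = np.concatenate([rng.normal(-2, 1, (500, 2)),
                          rng.normal(+2, 1, (500, 2))])
y_train = np.array([0] * 500 + [1] * 500)
clf = LogisticRegression().fit(X_train, y_train)

# In-distribution test data drawn from the same clusters: high accuracy.
X_iid = np.concatenate([rng.normal(-2, 1, (200, 2)),
                        rng.normal(+2, 1, (200, 2))])
y_iid = np.array([0] * 200 + [1] * 200)
print("in-distribution accuracy:", accuracy_score(y_iid, clf.predict(X_iid)))

# Shifted test data: the same points moved by +4. The decision rule the
# model learned no longer applies, and accuracy drops toward chance.
X_shifted = X_iid + 4.0
print("shifted accuracy:", accuracy_score(y_iid, clf.predict(X_shifted)))

A human shown the shifted points would recognize the same two-cluster rule at once; the model, which only captured where the training clusters lay, cannot abstract that rule and apply it to the new situation.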

4.15 Black box and a lack of interpretation:


Existing AI systems also have a problem with interpretability. Deep neural networks, for example, contain millions of parameters, making it difficult to comprehend why a network produces good or bad results. Despite recent work on visualizing high-level features using weight filters in convolutional neural networks, the trained models are often incomprehensible. As a result, most researchers treat existing AI methods as if they were a black box; the sketch below shows the kind of filter inspection that is possible.
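As a hedged illustration of that filter-visualization idea, the minimal sketch below (assuming PyTorch, torchvision, and matplotlib are installed; the choice of a pretrained ResNet-18 is ours, made only for illustration) renders the 64 first-layer convolution kernels of the network as small RGB patches. First-layer filters often look like edge and colour detectors and are therefore somewhat interpretable; no comparably direct reading exists for the millions of parameters in deeper layers.

import torch
import torchvision.models as models
import matplotlib.pyplot as plt

# Load a CNN pretrained on ImageNet (weights are downloaded on first use).
model = models.resnet18(weights="IMAGENET1K_V1")
filters = model.conv1.weight.detach()  # shape: (64, 3, 7, 7)

fig, axes = plt.subplots(8, 8, figsize=(8, 8))
for ax, f in zip(axes.flat, filters):
    # Normalize each 3x7x7 kernel to [0, 1] so it displays as an RGB patch.
    f = (f - f.min()) / (f.max() - f.min() + 1e-8)
    ax.imshow(f.permute(1, 2, 0))  # channels-last layout for imshow
    ax.axis("off")
plt.suptitle("First-layer convolution filters of a pretrained ResNet-18")
plt.show()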

4.16 Robustness of AI:


Most existing AI systems are vulnerable to being fooled, an issue that affects almost all machine-learning approaches: small, carefully crafted perturbations of an input can change a model's prediction entirely, as the sketch below illustrates.
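As a concrete illustration of such fooling, the following minimal sketch implements the fast gradient sign method (FGSM), one standard adversarial-example technique from the machine-learning literature; the choice of FGSM and the names used here are ours, not the thesis's. It assumes PyTorch is available, that `model` is any differentiable image classifier returning logits, and that pixel values lie in [0, 1].

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, eps=0.03):
    # Return a copy of x perturbed so that the model is more likely to
    # misclassify it (Goodfellow et al.'s fast gradient sign method).
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)  # loss w.r.t. the true labels
    loss.backward()                          # gradient w.r.t. the input pixels
    # Step in the direction that increases the loss fastest, then clip
    # back to the valid pixel range.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Hypothetical usage (model, images, and labels are assumed to exist):
# adversarial = fgsm_attack(model, images, labels)
# model(adversarial).argmax(dim=1)  # often no longer equals `labels`

A perturbation of this size is typically invisible to a human observer, yet it reliably flips the predictions of classifiers trained by standard methods, which is precisely the robustness problem described here.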
Notwithstanding these issues, AI will undoubtedly play a significant part in our future lives. As data becomes more readily available, people will increasingly depend on AI systems to live, work, and entertain themselves. As a result, it is no surprise that large technology companies are investing heavily in AI-related innovation. AI systems are required in numerous application areas to manage data that is becoming increasingly complex. They will be used in a growing number of industries, including banking, medicine, energy, manufacturing, education, transportation, and public services, as their accuracy and sophistication improve. In some of these sectors they may replace costly human labour, create new applications, and collaborate with and for people to raise standards of service.

The era of augmented intelligence is expected to be the next stage of AI. Intelligent embedded systems will become a natural extension of human beings and our physical capacities, thanks to ubiquitous sensor systems and wearable technology. Human perception, information retrieval, and physical capacity are all limited, but AI systems are not. AI algorithms and advanced sensor systems may monitor the environment around us and understand our intentions, allowing us to communicate with one another in a seamless manner. AI advances will also be pivotal in simulating human brain function. Progress in sensing and processing technology will make it possible to relate brain function to human behaviour at a level where AI self-awareness and emotions may be replicated and observed more realistically. Quantum computing has recently piqued the interest of academic institutions as well as technology giants like Google, IBM, and Microsoft. Although the area is still in its infancy and there are significant obstacles to overcome, the processing power it promises, which may be useful in the field of AI, is far beyond our wildest dreams.

Figure 9: Example failures of AI

CHAPTER 5
CONCLUSION AND RECOMMENDATION
There are lessons to be learned from AI's past accomplishments and mistakes. To keep AI advancing, a sensible and cooperative interplay between application-specific activities and visionary research concepts is vital. Alongside AI's unparalleled popularity, there are concerns regarding the technology's impact on society. To ensure that society as a whole benefits from AI's development and that its possible negative consequences are reduced from the start, a clear plan must be created that considers the accompanying ethical and legal issues. Such worries should not stifle AI's development, but rather encourage the creation of a deliberate foundation on which future AI may flourish. Most importantly, it is crucial to distinguish science fiction from actual reality. AI has the potential to shape our society's future - our lives, our living situations, and our economy - with continued funding and judicious investment. The following recommendations are important to the scientific community, business, government organizations, and policymakers in the United Kingdom:
• Robotics and artificial intelligence are becoming increasingly essential to the UK economy and its future development. We must be open to and fully prepared for the changes they will bring to our society, as well as the impact they will have on the employment structure and the skills base. Stronger national engagement is required to ensure that the general public has a clear and accurate understanding of present and future developments in robots and AI.
• The UK needs a strong robotics and AI research and development base, particularly in areas where we already have a critical mass and an international lead. Sustained investment in robots and AI would secure the future growth of the UK's research base, and funding would be required to support significant Clusters/Centres of Excellence that are globally leading and weighted toward activities with greater societal and economic value.
• For practical deployment and responsible innovation of robotics and AI, it is
critical to address legal, regulatory, and ethical issues; more effort should be put
into assessing the economic impact and understanding how to maximise the
benefits of these technologies while mitigating negative effects.

• The government must give tangible help to workers by adapting their skills and helping businesses develop new technology-based opportunities. Maintaining the UK's competitiveness requires digital skills training and re-education of the existing workforce.
• In some areas of RAS and AI, the United Kingdom has a strong track record. Sustained investment in robotics and artificial intelligence is vital to the future growth and worldwide leadership of the UK's research base. It is also important to invest in and train the next generation to be robotics and AI specialists with a strong STEM foundation, using new technological capabilities effectively.

REFERENCES
A. Esteva, B. Kuprel, R. A. Novoa, J. Ko, S. M. Swetter, H. M. Blau, et al.,
"Dermatologist-level classification of skin cancer with deep neural networks,"
Nature, vol. 542, pp. 115-118, Feb. 2017.
A. J. Gonzalez and V. Barr, "Validation and verification of intelligent systems-what are
they and how are they different?," Journal of Experimental & Theoretical
Artificial Intelligence, vol. 12, pp. 407-420, 2000.
A. Young and M. Yung, "Deniable password snatching: On the possibility of evasive electronic espionage," in Proceedings of the 1997 IEEE Symposium on Security and Privacy, 1997, pp. 224-235.
Abresch, J., Hanson, A., Heron, S. J., & Rheeling, P. J. (2008). What the Future Holds:
Trends in GIS and Academic Libraries. In Integrating Geographic Information
Systems into Library Services: A Guide for Academic Libraries (pp. 267-295).
IGI Global.

Aldosari, S. A. M. (2020). The future of higher education in the light of artificial intelligence transformations. International Journal of Higher Education, 9(3), 145-151.

Atabekov, A., & Yastrebov, O. (2018). Legal status of artificial intelligence across
countries: Legislation on the move. European Research Studies Journal, 21(4),
773-782.

Boyd Cohen, “The 10 Smartest Cities in North America,” Fast Company, November 14,
2013.
Bravo, C. E., Saputelli, L. A., Rivas, F. I., Perez, A. G., Nikolaou, M., Zangl, G., ... &
Nunez, G. (2012, January). State-of-the-art application of artificial intelligence
and trends in the E&P industry: A technology survey. In SPE Intelligent Energy
International. Society of Petroleum Engineers.

Bravo, C. E., Saputelli, L., Rivas, F., Pérez, A. G., Nikolaou, M., Zangl, G., ... & Nunez, G. (2014). State of the art of artificial intelligence and predictive analytics in the E&P industry: a technology survey. SPE Journal, 19(04), 547-563.

Brenner, W., Zarnekow, R., & Wittig, H. (2012). Intelligent software agents: foundations
and applications. Springer Science & Business Media.

Byungura, J. C., Hansson, H., & Kharunaratne, T. (2015, June). User perceptions on
relevance of a learning management system: An evaluation of Behavioural
intention and usage of SciPro system at University of Rwanda. In EDEN
Conference Proceedings (No. 1, pp. 548-562).

C. Bryant and R. Waters, "Worker at Volkswagen plant killed in robot accident," Financial Times, 2015.
C. Z.-W. Qiang, C. M. Rossotto, and K. Kimura, "Economic Impacts of Broadband," The World Bank, 2009.
C.-A. Smarr, A. Prakash, J. M. Beer, T. L. Mitzner, C. C. Kemp, and W. A. Rogers,
"Older adults’ preferences for and acceptance of robot assistance for everyday
living tasks," in Proceedings of the Human Factors and Ergonomics Society
Annual Meeting, 2012, pp. 153-157.
Cameron Kerry and Jack Karsten, “Gauging Investment in Self-Driving Cars,” Brookings
Institution, October 16, 2017.
Chen, X., Chen, J., Cheng, G., & Gong, T. (2020). Topics and trends in artificial
intelligence assisted human brain research. PloS one, 15(4), e0231192.

Console, L., Picardi, C., & Duprè, D. T. (2003). Temporal decision trees: Model-based
diagnosis of dynamic systems on-board. Journal of artificial intelligence
research, 19, 469-512.

D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning internal representations by error propagation," DTIC Document, 1985.
D. Floreano and R. J. Wood, "Science, technology and the future of small autonomous
drones," Nature, vol. 521, pp. 460-466, 2015.
D. Hémous and M. Olsen, "The Rise of the Machines: Automation, Horizontal Innovation
and Income Inequality," 2016.
D. Kirat, G. Vigna, and C. Kruegel, "Barecloud: bare-metal analysis-based evasive malware detection," in 23rd USENIX Security Symposium (USENIX Security 14), 2014, pp. 287-301.
D. Ravi, C. Wong, F. Deligianni, M. Berthelot, J. Andreu-Perez, B. Lo, et al., "Deep
Learning for Health Informatics," IEEE Journal of Biomedical and Health
Informatics, vol. 21, pp. 4-21, Jan. 2017.

Daisuke Wakabayashi, “Self-Driving Uber Car Kills Pedestrian in Arizona, Where
Robots Roam,” New York Times, March 19, 2018.
Duch, W. (2003). What is computational intelligence and what could it become? Computational Intelligence, Methods and Applications Lecture Notes, NTU, Singapore.

E. Broadbent, R. Stafford, and B. MacDonald, "Acceptance of healthcare robots for the older population: review and future directions," International Journal of Social Robotics, vol. 1, pp. 319-330, 2009.
H. Arisumi, S. Miossec, J.-R. Chardonnet, and K. Yokoi, "Dynamic lifting by whole body
motion of humanoid robots," in Intelligent Robots and Systems, 2008. IROS 2008.
IEEE/RSJ International Conference on, 2008, pp. 668-675.
H. Murase and S. K. Nayar, "Visual Learning and Recognition of 3-D Objects from
Appearance," International Journal of Computer Vision, vol. 14, pp. 5-24, Jan
1995.
Haag, ‘Kollaboratives Arbeiten mit Robotern – Visionen und realistische Perspektive’ in Botthof and Hartmann (eds), Zukunft der Arbeit in Industrie 4.0 (2015) 63.
Holzinger, A. (2016). Interactive machine learning for health informatics: when do we
need the human-in-the-loop?. Brain Informatics, 3(2), 119-131.

J. Andreu-Perez, C. C. Poon, R. D. Merrifield, S. T. Wong, and G.-Z. Yang, "Big data for
health," IEEE journal of biomedical and health informatics, vol. 19, pp. 1193-
1208, 2015.
J. Andreu-Perez, D. R. Leff, K. Shetty, A. Darzi, and G.-Z. Yang, "Disparity in Frontal
Lobe Connectivity on a Complex Bimanual Motor Task Aids in Classification of
Operator Skill Level," Brain connectivity, vol. 6, pp. 375-388, 2016.
J. Harrison, K. Izzetoglu, H. Ayaz, B. Willems, S. Hah, U. Ahlstrom, et al., "Cognitive
workload and learning assessment during the implementation of a next-generation
air traffic control technology using functional near-infrared spectroscopy," IEEE
Transactions on Human-Machine Systems, vol. 44, pp. 429-440, 2014.
J. J. Hopfield, "Neural networks and physical systems with emergent collective
computational abilities," Proceedings of the national academy of sciences, vol. 79,
pp. 2554-2558, 1982.
J. McCarthy, Programs with common sense: RLE and MIT Computation Center, 1960.

Johnson, W. L., & Lester, J. C. (2016). Face-to-face interaction with pedagogical agents,
twenty years later. International Journal of Artificial intelligence in education,
26(1), 25-36.
K. Mochizuki, S. Nishide, H. G. Okuno, and T. Ogata, "Developmental human-robot
imitation learning of drawing with a neuro dynamical system," in Systems, Man,
and Cybernetics (SMC), 2013 IEEE International Conference on, 2013, pp. 2336-
2341.
Kevin Desouza, Rashmi Krishnamurthy, and Gregory Dawson, “Learning from Public
Sector Experimentation with Artificial Intelligence,” TechTank (blog), Brookings
Institution, June 23, 2017.
Krischke and Schmidt, ‘Kollege Roboter’ (2015) 38/2015 Focus Magazin 66. See: www.faz.net/aktuell/wirtschaft/fuehrung-und-digitalisierung-mein-chef-der-roboter-14165244.html (last accessed on 8 April 2016).
L. A. Zadeh, "Fuzzy logic—a personal perspective," Fuzzy Sets and Systems, vol. 281, pp. 4-20, 2015.
L. Zhang, M. Jiang, D. Farid, and M. A. Hossain, "Intelligent facial emotion recognition
and semantic-based topic detection for a humanoid robot," Expert Systems with
Applications, vol. 40, pp. 5160-5168, 2013.
M. Asada, "Towards artificial empathy," International Journal of Social Robotics, vol. 7,
pp. 19-33, 2015.
M. E. Virgillito, "Rise of the robots: technology and the threat of a jobless future," Labor
History, vol. 58, pp. 240-242, 2017.
M. T. Chan, R. Gorbet, P. Beesley, and D. Kulic, "Curiosity-Based Learning Algorithm for Distributed Interactive Sculptural Systems," in Intelligent Robots and Systems (IROS), 2015 IEEE/RSJ International Conference on, 2015, pp. 3435-3441.
M. Wooldridge and N. R. Jennings, "Intelligent agents: Theory and practice," The
knowledge engineering review, vol. 10, pp. 115-152, 1995.
Martínez, D. M., & Fernández-Rodríguez, J. C. (2015). Artificial Intelligence applied to
project success: a literature review. IJIMAI, 3(5), 77-84.

Maschke and Werner, ‘Arbeiten 4.0 – Diskurs und Praxis in Betriebsvereinbarungen’ (October 2015) Hans Böckler Stiftung, Report No 14, 9.
https://www.roboticstomorrow.com/story/2021/03/the-ai-impact-for-next-gen-industrial-robots/16400/

McArthur, S. D., Davidson, E. M., Catterson, V. M., Dimeas, A. L., Hatziargyriou, N. D.,
Ponci, F., & Funabashi, T. (2007). Multi-agent systems for power engineering
applications—Part II: Technologies, standards, and tools for building multi-agent
systems. IEEE Transactions on Power Systems, 22(4), 1753-1759.

N. Chen, L. Christensen, K. Gallagher, R. Mate, and G. Rafert, "Global Economic Impacts Associated with Artificial Intelligence," Study, Analysis Group, Boston, MA, February, vol. 25, 2016.
N. Czernich, O. Falck, T. Kretschmer, and L. Woessmann, "Broadband Infrastructure and
Economic Growth," Economic Journal, vol. 121, pp. 505-532, May 2011.
N. Mavridis, "A review of verbal and non-verbal human–robot interactive
communication," Robotics and Autonomous Systems, vol. 63, pp. 22-35, 2015.
N. Spinrad, "Mr Singularity," Nature, vol. 543, p. 582, 2017.
Oke, S. A. (2008). A literature review on artificial intelligence. International journal of
information and management sciences, 19(4), 535-570.

P. Alston, "Lethal robotic technologies: the implications for human rights and
international humanitarian law," JL Inf. & Sci., vol. 21, p. 35, 2011.
P. Viola and M. Jones, "Robust real-time face detection," Eighth IEEE International
Conference on Computer Vision, pp. 747-747, 2001.
P.-Y. Oudeyer, "Socially guided intrinsic motivation for robot learning of motor skills,"
Autonomous Robots, vol. 36, pp. 273-294, 2014.
Peter Holley, “Uber Signs Deal to Buy 24,000 Autonomous Vehicles from Volvo,”
Washington Post, November 20, 2017.
Portions of this section are drawn from Darrell M. West, "Driverless Cars in China, Europe, Japan, Korea, and the United States," Brookings Institution, September 2016.
R. C. O'Reilly and Y. Munakata, Computational explorations in cognitive neuroscience:
Understanding the mind by simulating the brain: MIT press, 2000.
R. Fergus, P. Perona, and A. Zisserman, "Object class recognition by unsupervised scale-
invariant learning," 2003 IEEE Computer Society Conference on Computer
Vision and Pattern Recognition, pp. 264-271, 2003.
Rosati, R. (1999). Reasoning about minimal belief and negation as failure. Journal of
Artificial Intelligence Research, 11, 277-300.

S. J. Russell and P. Norvig, Artificial intelligence: a modern approach (3rd edition):
Prentice Hall, 2009.
S. Lohr, "The age of big data," New York Times, vol. 11, 2012.
S. Ratschan and Z. She, "Safety verification of hybrid systems by constraint propagation
based abstraction refinement," in International Workshop on Hybrid Systems:
Computation and Control, 2005, pp. 573-589.
Singer, J., Gent, I. P., & Smaill, A. (2000). Backbone fragility and the local search cost
peak. Journal of Artificial Intelligence Research, 12, 235-270.

Sinner, A., Leggo, C., Irwin, R. L., Gouzouasis, P., & Grauer, K. (2006). Arts-Based
Educational Research Dissertations: Reviewing the Practices of New
Scholars. Canadian Journal of education, 29(4), 1223-1270.

T. Kruse, A. K. Pandey, R. Alami, and A. Kirsch, "Human-aware robot navigation: A survey," Robotics and Autonomous Systems, vol. 61, pp. 1726-1743, 2013.
T. Leung and J. Malik, "Representing and recognizing the visual appearance of materials using three-dimensional textons," International Journal of Computer Vision, vol. 43, pp. 29-44, 2001.
The Royal Society, "Machine learning: the power and promise of computers that learn by example," 2017.
Taylor, B. J. (Ed.). (2006). Methods and procedures for the verification and validation of
artificial neural networks. Springer Science & Business Media.

Teena Maddox, "66% of US Cities Are Investing in Smart City Technology," TechRepublic, November 6, 2017.
S. Inc, "Artificial Intelligence (AI)," 2016.
Tuomi, I. (2018). The impact of artificial intelligence on learning, teaching, and
education. Luxembourg: Publications Office of the European Union.

Y. Ohmura and Y. Kuniyoshi, "Humanoid robot which can lift a 30kg box by whole body
contact and tactile feedback," in Intelligent Robots and Systems, 2007. IROS
2007. IEEE/RSJ International Conference on, 2007, pp. 1136-1141.
Yuming Ge, Xiaoman Liu, Libo Tang, and Darrell M. West, "Smart Transportation in China and the United States," Center for Technology Innovation, Brookings Institution, December 2017.

Z. Chen, X. Jia, A. Riedel, and M. Zhang, "A bio-inspired swimming robot," in Robotics
and Automation (ICRA), 2014 IEEE International Conference on, 2014, pp. 2564-
2564.
Z. Kappassov, J.-A. Corrales, and V. Perdereau, "Tactile sensing in dexterous robot hands
—Review," Robotics and Autonomous Systems, vol. 74, pp. 195-220, 2015.
Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic
review of research on artificial intelligence applications in higher education–
where are the educators?. International Journal of Educational Technology in
Higher Education, 16(1), 1-27.

Zebulum, R. S., Pacheco, M. A., & Vellasco, M. M. B. (2018). Evolutionary electronics: automatic design of electronic circuits and systems by genetic algorithms. CRC Press.

Zhang, Z., Long, K., Wang, J., & Dressler, F. (2013). On swarm intelligence inspired
self-organized networking: its bionic mechanisms, designing principles and
optimization approaches. IEEE Communications Surveys & Tutorials, 16(1), 513-
537.
