Artificial Intelligence: Chinmayi V., Navya K.


Chinmayi V., Navya K.

Computer Science Engineering,


Bhoj Reddy Engineering College for Women.
Email ID: navya_reddyk25@yahoo.com
Email ID: chinmayi.meera@gmail.com
ARTIFICIAL INTELLIGENCE
Abstract:

Artificial intelligence (AI) is the intelligence of machines and the branch of


computer science that aims to create it. Textbooks define the field as "the study and
design of intelligent agents". The field was founded on the claim that a central property
of humans, intelligence—the sapience of Homo sapiens—can be so precisely described
that it can be simulated by a machine. This raises philosophical issues about the nature of
the mind and limits of scientific hubris, issues which have been addressed by myth,
fiction and philosophy since antiquity. Artificial intelligence has been the subject of
optimism, but has also suffered setbacks and, today, has become an essential part of the
technology industry, providing the heavy lifting for many of the most difficult problems
in computer science.
Mechanical or "formal" reasoning has been developed by philosophers and
mathematicians since antiquity. The study of logic led directly to the invention of the
programmable digital electronic computer, based on the work of mathematician Alan
Turing and others.
The general problem of simulating (or creating) intelligence has been broken
down into a number of specific sub-problems. These consist of particular traits or
capabilities that researchers would like an intelligent system to display. The traits that
have received the most attention include deduction and reasoning, problem solving,
learning, and motion and manipulation.
Artificial intelligence has been used in a wide range of fields including medical
diagnosis, stock trading, robot control, law, scientific discovery and toys. However, many
AI applications are not perceived as AI: "A lot of cutting edge AI has filtered into general
applications, often without being called AI because once something becomes useful
enough and common enough it's not labeled AI anymore."
Introduction:

Artificial intelligence (AI) is the intelligence of machines and the branch of computer science that aims to create it. Textbooks define the field as "the study and design of intelligent agents". The field was founded on the claim that a central property of humans, intelligence—the sapience of Homo sapiens—can be so precisely described that it can be simulated by a machine. This raises philosophical issues about the nature of the mind and limits of scientific hubris, issues which have been addressed by myth, fiction and philosophy since antiquity. Artificial intelligence has been the subject of optimism, but has also suffered setbacks and, today, has become an essential part of the technology industry, providing the heavy lifting for many of the most difficult problems in computer science.

Thinking machines and artificial beings appear in Greek myths, such as Talos of Crete, the golden robots of Hephaestus and Pygmalion's Galatea. Human likenesses believed to have intelligence were built in every major civilization: animated statues were seen in Egypt and Greece, and humanoid automatons were built by Yan Shi, Hero of Alexandria, Al-Jazari and Wolfgang von Kempelen.

Mechanical or "formal" reasoning has been developed by philosophers and mathematicians since antiquity. The study of logic led directly to the invention of the programmable digital electronic computer, based on the work of mathematician Alan Turing and others. Turing's theory of computation suggested that a machine, by shuffling symbols as simple as "0" and "1", could simulate any conceivable act of mathematical deduction. This, along with recent discoveries in neurology, information theory and cybernetics, inspired a small group of researchers to begin to seriously consider the possibility of building an electronic brain.

The field of AI research was founded at a conference on the campus of Dartmouth College in the summer of 1956. The attendees, including John McCarthy, Marvin Minsky, Allen Newell and Herbert Simon, became the leaders of AI research for many decades. They and their students wrote programs that were, to most people, simply astonishing: computers were solving word problems in algebra, proving logical theorems and speaking English. By the middle of the 1960s, research in the U.S. was heavily funded by the Department of Defense and laboratories had been established around the world. AI's founders were profoundly optimistic about the future of the new field: Herbert Simon predicted that "machines will be capable, within twenty years, of doing any work a man can do" and Marvin Minsky agreed, writing that "within a generation ... the problem of creating 'artificial intelligence' will substantially be solved".

In the 1990s and early 21st century, AI achieved its greatest successes, albeit somewhat behind the scenes. Artificial intelligence is used for logistics, data mining, medical diagnosis and many other areas throughout the technology industry. The success was due to several factors: the incredible power of computers today (see Moore's law), a greater emphasis on solving specific subproblems, the creation of new ties between AI and other fields working on similar problems, and above all a new commitment by researchers to solid mathematical methods and rigorous scientific standards.

Problems

The general problem of simulating (or creating) intelligence has been broken down into a number of specific sub-problems. These consist of particular traits or capabilities that researchers would like an intelligent system to display. The traits described below have received the most attention.

Deduction, reasoning, problem solving

Early AI researchers developed algorithms that imitated the step-by-step reasoning that humans were often assumed to use when they solve puzzles, play board games or make logical deductions. By the late 1980s and '90s, AI research had also developed highly successful methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.

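The use of probability for reasoning with uncertain or incomplete information can be made concrete with Bayes' rule. The snippet below is a minimal sketch; the hypothesis and all the test numbers are invented purely for illustration.

```python
# Bayes' rule: revise a belief after observing uncertain evidence.
# All numbers below are invented purely for illustration.

def posterior(prior, likelihood, false_positive_rate):
    """P(hypothesis | evidence), by Bayes' rule."""
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

# Hypothesis: a patient has a rare condition (prior belief 1%).
# Evidence: a test that detects it 99% of the time but also fires
# on 5% of healthy patients.
p = posterior(prior=0.01, likelihood=0.99, false_positive_rate=0.05)
print(round(p, 3))  # prints 0.167: the belief rises from 1% to ~17%
```

Even with a very accurate test, the posterior stays modest because the hypothesis was improbable to begin with; weighing evidence against prior belief in this way is exactly what uncertain reasoning has to capture.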
For difficult problems, most of these algorithms can require enormous computational resources — most experience a "combinatorial explosion": the amount of memory or computer time required becomes astronomical when the problem goes beyond a certain size. The search for more efficient problem solving algorithms is a high priority for AI research.

Human beings solve most of their problems using fast, intuitive judgments rather than the conscious, step-by-step deduction that early AI research was able to model. AI has made some progress at imitating this kind of "sub-symbolic" problem solving: embodied agent approaches emphasize the importance of sensorimotor skills to higher reasoning; neural net research attempts to simulate the structures inside human and animal brains that give rise to this skill.

Knowledge representation

Knowledge representation and knowledge engineering are central to AI research. Many of the problems machines are expected to solve will require extensive knowledge about the world. Among the things that AI needs to represent are: objects, properties, categories and relations between objects; situations, events, states and time; causes and effects; knowledge about knowledge (what we know about what other people know); and many other, less well researched domains. A complete representation of "what exists" is an ontology (borrowing a word from traditional philosophy), of which the most general are called upper ontologies.

Among the most difficult problems in knowledge representation are:

1. Default reasoning and the qualification problem
2. The breadth of commonsense knowledge
3. The subsymbolic form of some commonsense knowledge

Planning

Intelligent agents must be able to set goals and achieve them. They need a way to visualize the future (they must have a representation of the state of the world and be able to make predictions about how their actions will change it) and be able to make choices that maximize the utility (or "value") of the available choices.

Multi-agent planning uses the cooperation and competition of many agents to achieve a given goal. Emergent behavior such as this is used by evolutionary algorithms and swarm intelligence.

Learning

Machine learning has been central to AI research from the beginning. Unsupervised learning is the ability to find patterns in a stream of input. Supervised learning includes both classification and numerical regression. Classification is used to determine what category something belongs in, after seeing a number of examples of things from several categories. Regression takes a set of numerical input/output examples and attempts to discover a continuous function that would generate the outputs from the inputs. In reinforcement learning the agent is rewarded for good responses and punished for bad ones. These can be analyzed in terms of decision theory, using concepts like utility. The mathematical analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory.

Natural language processing

Natural language processing gives machines the ability to read and understand the languages that humans speak. Many researchers hope that a sufficiently powerful natural language processing system would be able to acquire knowledge on its own, by reading the existing text available over the internet. Some straightforward applications of natural language processing include information retrieval (or text mining) and machine translation.

Motion and manipulation

The field of robotics is closely related to AI. Intelligence is required for robots to be able to handle such tasks as object manipulation and navigation, with sub-problems of localization (knowing where you are), mapping (learning what is around you) and motion planning (figuring out how to get there).
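The motion-planning sub-problem ("figuring out how to get there") can be sketched as a search over a grid map. The breadth-first search below is a minimal illustration under the assumption of a known occupancy grid, not a production planner.

```python
from collections import deque

# Minimal motion-planning sketch: breadth-first search on an
# occupancy grid (0 = free cell, 1 = obstacle). Assumes the map
# is already known, i.e. localization and mapping are solved.

def plan(grid, start, goal):
    """Return a shortest path of cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:      # walk the path back to the start
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for step in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = step
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and step not in came_from:
                came_from[step] = cell
                frontier.append(step)
    return None  # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(plan(grid, (0, 0), (2, 0)))
# prints [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]
```

Real planners work in configuration space rather than a toy grid and use informed search such as A*, but the structure — expand states, track where each came from, read the path back — is the same.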


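The classification task described above under Learning — assigning a category after seeing labeled examples — can likewise be illustrated with a one-nearest-neighbour rule. The tiny data set below is invented for illustration.

```python
# One-nearest-neighbour classification: label a new observation with
# the class of the closest labeled example. The data set is a list of
# ((x, y), label) pairs, invented purely for illustration.

def classify(data_set, observation):
    """Return the label of the example nearest to the observation."""
    def dist2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    nearest = min(data_set, key=lambda pair: dist2(pair[0], observation))
    return nearest[1]

examples = [((1.0, 1.0), "small"), ((1.2, 0.8), "small"),
            ((5.0, 5.2), "large"), ((4.8, 5.0), "large")]
print(classify(examples, (1.1, 0.9)))   # prints small
print(classify(examples, (5.1, 4.9)))   # prints large
```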
Perception

Machine perception is the ability to use input from sensors (such as cameras, microphones, sonar and others more exotic) to deduce aspects of the world. Computer vision is the ability to analyze visual input. A few selected subproblems are speech recognition, facial recognition and object recognition.

Social intelligence

Emotion and social skills play two roles for an intelligent agent. First, it must be able to predict the actions of others, by understanding their motives and emotional states. (This involves elements of game theory, decision theory, as well as the ability to model human emotions and the perceptual skills to detect emotions.) Also, for good human-computer interaction, an intelligent machine also needs to display emotions. At the very least it must appear polite and sensitive to the humans it interacts with. At best, it should have normal emotions itself.

Creativity

A sub-field of AI addresses creativity both theoretically (from a philosophical and psychological perspective) and practically (via specific implementations of systems that generate outputs that can be considered creative). A related area of computational research is Artificial Intuition and Artificial Imagination.

General intelligence

Most researchers hope that their work will eventually be incorporated into a machine with general intelligence (known as strong AI), combining all the skills above and exceeding human abilities at most or all of them. A few believe that anthropomorphic features like artificial consciousness or an artificial brain may be required for such a project.

Many of the problems above are considered AI-complete: to solve one problem, you must solve them all. For example, even a straightforward, specific task like machine translation requires that the machine follow the author's argument (reason), know what is being talked about (knowledge), and faithfully reproduce the author's intention (social intelligence). Machine translation, therefore, is believed to be AI-complete: it may require strong AI to be done as well as humans can do it.

Approaches

There is no established unifying theory or paradigm that guides AI research. Researchers disagree about many issues. A few of the most long standing questions that have remained unanswered are these: should artificial intelligence simulate natural intelligence, by studying psychology or neurology? Or is human biology as irrelevant to AI research as bird biology is to aeronautical engineering? Can intelligent behavior be described using simple, elegant principles (such as logic or optimization)? Or does it necessarily require solving a large number of completely unrelated problems? Can intelligence be reproduced using high-level symbols, similar to words and ideas? Or does it require "sub-symbolic" processing?

Cybernetics and brain simulation

In the 1940s and 1950s, a number of researchers explored the connection between neurology, information theory, and cybernetics. Some of them built machines that used electronic networks to exhibit rudimentary intelligence, such as W. Grey Walter's turtles and the Johns Hopkins Beast. Many of these researchers gathered for meetings of the Teleological Society at Princeton University and the Ratio Club in England. By 1960, this approach was largely abandoned, although elements of it would be revived in the 1980s.

Symbolic

Cognitive simulation

Economist Herbert Simon and Allen Newell studied human problem solving skills and attempted to formalize them, and their work laid the foundations of the field of artificial intelligence, as well as cognitive science, operations research and management science. Their research team used the results of
psychological experiments to develop programs that simulated the techniques that people used to solve problems. This tradition, centered at Carnegie Mellon University, would eventually culminate in the development of the Soar architecture in the middle 80s.

Logic based

Unlike Newell and Simon, John McCarthy felt that machines did not need to simulate human thought, but should instead try to find the essence of abstract reasoning and problem solving, regardless of whether people used the same algorithms. His laboratory at Stanford (SAIL) focused on using formal logic to solve a wide variety of problems, including knowledge representation, planning and learning. Logic was also the focus of the work at the University of Edinburgh and elsewhere in Europe, which led to the development of the programming language Prolog and the science of logic programming.

"Anti-logic" or "scruffy"

Researchers at MIT (such as Marvin Minsky and Seymour Papert) found that solving difficult problems in vision and natural language processing required ad-hoc solutions – they argued that there was no simple and general principle (like logic) that would capture all the aspects of intelligent behavior. Roger Schank described their "anti-logic" approaches as "scruffy" (as opposed to the "neat" paradigms at CMU and Stanford). Commonsense knowledge bases (such as Doug Lenat's Cyc) are an example of "scruffy" AI, since they must be built by hand, one complicated concept at a time.

Knowledge based

When computers with large memories became available around 1970, researchers from all three traditions began to build knowledge into AI applications. This "knowledge revolution" led to the development and deployment of expert systems (introduced by Edward Feigenbaum), the first truly successful form of AI software. The knowledge revolution was also driven by the realization that enormous amounts of knowledge would be required by many simple AI applications.

Sub-symbolic

During the 1960s, symbolic approaches had achieved great success at simulating high-level thinking in small demonstration programs. Approaches based on cybernetics or neural networks were abandoned or pushed into the background. By the 1980s, however, progress in symbolic AI seemed to stall and many believed that symbolic systems would never be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition. A number of researchers began to look into "sub-symbolic" approaches to specific AI problems.

1. Bottom-up, embodied, situated, behavior-based or nouvelle AI
2. Computational Intelligence

Statistical

In the 1990s, AI researchers developed sophisticated mathematical tools to solve specific subproblems. These tools are truly scientific, in the sense that their results are both measurable and verifiable, and they have been responsible for many of AI's recent successes. The shared mathematical language has also permitted a high level of collaboration with more established fields (like mathematics, economics or operations research). Stuart Russell and Peter Norvig describe this movement as nothing less than a "revolution" and "the victory of the neats."

Tools

In the course of 50 years of research, AI has developed a large number of tools to solve the most difficult problems in computer science. A few of the most general of these methods are discussed below.

Search and optimization

Many problems in AI can be solved in theory by intelligently searching through many possible solutions: Reasoning can
be reduced to performing a search. For example, logical proof can be viewed as searching for a path that leads from premises to conclusions, where each step is the application of an inference rule. Planning algorithms search through trees of goals and subgoals, attempting to find a path to a target goal, a process called means-ends analysis. Robotics algorithms for moving limbs and grasping objects use local searches in configuration space. Many learning algorithms use search algorithms based on optimization.

A very different kind of search came to prominence in the 1990s, based on the mathematical theory of optimization. For many problems, it is possible to begin the search with some form of a guess and then refine the guess incrementally until no more refinements can be made. These algorithms can be visualized as blind hill climbing: we begin the search at a random point on the landscape, and then, by jumps or steps, we keep moving our guess uphill, until we reach the top. Other optimization algorithms are simulated annealing, beam search and random optimization.

Logic

Logic is used for knowledge representation and problem solving, but it can be applied to other problems as well. For example, the satplan algorithm uses logic for planning, and inductive logic programming is a method for learning.

Several different forms of logic are used in AI research. Propositional or sentential logic is the logic of statements which can be true or false. First-order logic also allows the use of quantifiers and predicates, and can express facts about objects, their properties, and their relations with each other. Fuzzy logic is a version of first-order logic which allows the truth of a statement to be represented as a value between 0 and 1, rather than simply True (1) or False (0). Fuzzy systems can be used for uncertain reasoning and have been widely used in modern industrial and consumer product control systems. Subjective logic models uncertainty in a different and more explicit manner than fuzzy logic: a given binomial opinion satisfies belief + disbelief + uncertainty = 1 within a Beta distribution. By this method, ignorance can be distinguished from probabilistic statements that an agent makes with high confidence. Default logics, non-monotonic logics and circumscription are forms of logic designed to help with default reasoning and the qualification problem. Several extensions of logic have been designed to handle specific domains of knowledge, such as: description logics; situation calculus, event calculus and fluent calculus (for representing events and time); causal calculus; belief calculus; and modal logics.

Probabilistic methods for uncertain reasoning

Many problems in AI (in reasoning, planning, learning, perception and robotics) require the agent to operate with incomplete or uncertain information. AI researchers have devised a number of powerful tools to solve these problems using methods from probability theory and economics.

A key concept from the science of economics is "utility": a measure of how valuable something is to an intelligent agent. Precise mathematical tools have been developed that analyze how an agent can make choices and plan, using decision theory, decision analysis, and information value theory.

Classifiers and statistical learning methods

The simplest AI applications can be divided into two types: classifiers ("if shiny then diamond") and controllers ("if shiny then pick up"). Controllers do however also classify conditions before inferring actions, and therefore classification forms a central part of many AI systems. Classifiers are functions that use pattern matching to determine a closest match. They can be tuned according to examples, making them very attractive for use in AI. These examples are known as observations or patterns. In supervised learning, each pattern belongs to a certain predefined class. A class can be seen as a decision that has to be made. All the observations combined with their class labels are known as a data set. When a new observation is received, that observation is classified based on previous experience.

A classifier can be trained in various ways; there are many statistical and machine learning approaches. The most widely used classifiers are the neural network, kernel methods such as the support vector machine, k-nearest neighbor algorithm, Gaussian mixture model, naive Bayes classifier, and decision tree. The performance of these classifiers has been compared over a wide range of tasks. Classifier performance depends greatly on the characteristics of the data to be classified. There is no single classifier that works best on all given problems; this is also referred to as the "no free lunch" theorem. Determining a suitable classifier for a given problem is still more an art than a science.

Neural networks

The study of artificial neural networks began in the decade before the field of AI research was founded, in the work of Walter Pitts and Warren McCullough. Other important early researchers were Frank Rosenblatt, who invented the perceptron, and Paul Werbos, who developed the backpropagation algorithm.

The main categories of networks are acyclic or feedforward neural networks (where the signal passes in only one direction) and recurrent neural networks (which allow feedback). Among the most popular feedforward networks are perceptrons, multi-layer perceptrons and radial basis networks. Among recurrent networks, the most famous is the Hopfield net, a form of attractor network, which was first described by John Hopfield in 1982. Neural networks can be applied to the problem of intelligent control (for robotics) or learning, using such techniques as Hebbian learning and competitive learning.

Control theory

Control theory, the grandchild of cybernetics, has many important applications, especially in robotics.

Languages

AI researchers have developed several specialized languages for AI research, including Lisp and Prolog.

Evaluating progress

In 1950, Alan Turing proposed a general procedure to test the intelligence of an agent, now known as the Turing test.
This procedure allows almost all the major problems of artificial intelligence to be tested. However, it is a very difficult challenge and at present all agents fail.

Artificial intelligence can also be evaluated on specific problems such as small problems in chemistry, handwriting recognition and game-playing. Such tests have been termed subject matter expert Turing tests. Smaller problems provide more achievable goals and there are an ever-increasing number of positive results.

The broad classes of outcome for an AI test are:

• Optimal: it is not possible to perform better
• Strong super-human: performs better than all humans
• Super-human: performs better than most humans
• Sub-human: performs worse than most humans

Applications

Artificial intelligence has been used in a wide range of fields including medical diagnosis, robot control, law, scientific discovery and toys. However, many AI applications are not perceived as AI: "A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore." "Many thousands of AI applications are deeply embedded in the infrastructure of every industry." In the late 90s and early 21st century, AI technology became widely used as elements of larger systems, but the field is rarely credited for these successes.

CONCLUSION:

In order to maintain their competitiveness, companies feel compelled to adopt productivity-increasing measures. Yet, they cannot relinquish the flexibility their production cycles need in order to improve their response, and thus, their positioning in the market. To achieve this, companies must combine these two seemingly opposed principles. Thanks to new technological advances, this combination is already a working reality in some companies. It is made possible today by the implementation of computer integrated manufacturing (CIM) and artificial intelligence (AI) techniques, fundamentally by means of expert systems (ES) and robotics. Depending on how these (AI/CIM) techniques contribute to automation, their immediate effects are an increase in productivity and cost reductions. Yet also, the system's flexibility allows for easier adaptation and, as a result, an increased ability to generate value; in other words, competitiveness is improved. The authors have analyzed three studies to identify the possible benefits or advantages, as well as the inconveniences, that this type of technique may bring to companies, specifically in the production field. Although the scope of the studies and their approach differ from one to the other, their joint contribution can be of unquestionable value in order to understand a little better the importance of ES within the production system.

References

• Russell, Stuart J.; Norvig, Peter (2003), Artificial Intelligence: A Modern Approach (2nd ed.), Upper Saddle River, New Jersey: Prentice Hall, ISBN 0-13-790395-2, http://aima.cs.berkeley.edu/
• Kurzweil, Ray (2005), The Singularity Is Near: When Humans Transcend Biology, New York: Viking, ISBN 9780670033843