
HISTORY AND PHILOSOPHY OF ARTIFICIAL INTELLIGENCE
AARISHTI SINGH

A3221519155

BBA.LLB(H)

2019-2024
The idea of inanimate objects coming to life as intelligent beings has been around for a long time. The ancient Greeks had myths about robots, and Chinese and Egyptian engineers built automatons.

The beginnings of modern AI can be traced to classical philosophers' attempts to describe human thinking as a symbolic system. However, the field of AI was not formally founded until 1956, at a conference at Dartmouth College in Hanover, New Hampshire, where the term "artificial intelligence" was coined.

In the early days of AI, computer scientists tried to recreate aspects of the human mind in the computer. This is the kind of intelligence that is the stuff of science fiction: machines that think, more or less, like us. This kind of intelligence is called, unsurprisingly, intelligibility. A computer with intelligibility can be used to explore how we reason, learn, judge, perceive, and execute mental actions. Early research on intelligibility focused on modeling parts of the real world and the mind (from the realm of cognitive science) in the computer. It is remarkable when you consider that these experiments took place nearly sixty years ago.
Early models of intelligence focused on deductive reasoning to arrive at conclusions. One of the earliest and best-known AI programs of this kind was the Logic Theorist, written in 1956 to mimic the problem-solving skills of a human being. The Logic Theorist quickly proved 38 of the first 52 theorems in chapter two of the Principia Mathematica, actually improving one theorem in the process. For the first time, it was clearly demonstrated that a machine could perform tasks that, until then, were considered to require intelligence and creativity.

Soon research turned toward a different style of thinking: inductive reasoning. Inductive reasoning is what a scientist uses when examining data and attempting to come up with a hypothesis to explain it. To study inductive reasoning, researchers created a cognitive model based on the scientists working in a NASA laboratory, helping them to identify organic molecules using their knowledge of chemistry. The Dendral program was the first real example of the second feature of AI, instrumentality: a set of techniques or algorithms to accomplish an inductive reasoning task, in this case molecule identification. Dendral was also distinctive because it included the first knowledge base, a set of if/then rules that captured the knowledge of the scientists, to use alongside the cognitive model. This form of knowledge would later be called an expert system. Having both kinds of "intelligence" available in a single program allowed computer scientists to ask, "What makes certain scientists so much better than others? Do they have superior cognitive skills, or greater knowledge?" By the late 1960s the answer was clear. The performance of Dendral was almost completely a function of the amount and quality of knowledge obtained from the experts; the cognitive model was only weakly associated with improvements in performance. This realization led to a major paradigm shift in the AI community. Knowledge engineering emerged as a discipline to model specific domains of human expertise using expert systems, and the expert systems so created often exceeded the performance of any single human expert. This remarkable success sparked great enthusiasm for expert systems within the AI community, the military, industry, investors, and the popular press.

As expert systems became commercially successful, researchers turned their attention to techniques for modeling these systems and making them more flexible across problem domains. It was during this period that object-oriented design and hierarchical ontologies were developed by the AI community and adopted by other parts of the computer community. Today hierarchical ontologies are at the heart of knowledge graphs, which have seen a resurgence in recent years.

As researchers settled on a form of knowledge representation known as "production rules," a form of first-order predicate logic, they found that the systems could learn automatically; that is, the systems could write or rewrite the rules themselves to improve performance based on additional data. Dendral was modified and given the ability to learn the rules of mass spectrometry from the empirical data of experiments.
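
To make the idea concrete, here is a minimal sketch of forward-chaining production rules in Python. The facts and rule names are invented for illustration; a real system such as Dendral encoded far richer chemistry knowledge.

    # A minimal forward-chaining production-rule sketch (illustrative only).
    # Each rule is an if/then pair: when all "if" conditions are present in
    # the fact set, the "then" conclusion is asserted. The facts and rules
    # below are hypothetical, not real mass-spectrometry knowledge.
    rules = [
        ({"peak_at_43", "peak_at_58"}, "acetone_fragment_present"),
        ({"acetone_fragment_present"}, "candidate_is_a_ketone"),
    ]
    facts = {"peak_at_43", "peak_at_58"}

    changed = True
    while changed:  # keep firing rules until no new fact can be derived
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)  # the "then" part: assert the new fact
                changed = True

    print(sorted(facts))
    # ['acetone_fragment_present', 'candidate_is_a_ketone',
    #  'peak_at_43', 'peak_at_58']

Learning, in this picture, amounts to a program adding or rewriting entries in the rules list based on new data.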

As good as these expert systems were, they did have limitations. They were usually restricted to a particular problem domain, and could not distinguish among multiple plausible alternatives or make use of knowledge about structure or statistical correlation. To address some of these problems, researchers added certainty factors: numerical values that indicated how likely a particular fact is to be true.
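
As a concrete illustration, MYCIN-style systems combined two positive certainty factors supporting the same conclusion with the formula CF = CF1 + CF2 x (1 - CF1); the specific rule values below are invented.

    # Combining two positive certainty factors (MYCIN-style formula).
    # The specific CF values are hypothetical, chosen for illustration.
    def combine_cf(cf1, cf2):
        """Combine two positive certainty factors, each in [0, 1]."""
        return cf1 + cf2 * (1 - cf1)

    cf_rule_a = 0.6  # hypothetical: rule A supports the conclusion with CF 0.6
    cf_rule_b = 0.5  # hypothetical: rule B supports the same conclusion with CF 0.5
    print(round(combine_cf(cf_rule_a, cf_rule_b), 3))  # 0.8 -- stronger than either rule alone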

The start of the second paradigm shift in AI occurred when researchers realised that certainty factors could be wrapped into statistical models. Statistics and Bayesian inference could be used to model domain expertise from empirical data. From this point forward, AI would be increasingly dominated by machine learning.
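
A small worked example of that Bayesian reasoning, with invented numbers: starting from a prior probability and the likelihood of observing some evidence, Bayes' rule yields the updated probability.

    # Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E).
    # All numbers below are hypothetical, chosen only to illustrate the update.
    p_h = 0.01              # prior: the hypothesis holds in 1% of cases
    p_e_given_h = 0.90      # evidence appears in 90% of cases where H holds
    p_e_given_not_h = 0.05  # evidence appears in 5% of cases where H does not

    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)  # total probability of E
    p_h_given_e = p_e_given_h * p_h / p_e
    print(round(p_h_given_e, 3))  # 0.154 -- the evidence lifts 1% to about 15%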

PHILOSOPHY OF AI
The object of research in artificial intelligence (AI) is to discover how to program a computer to perform the remarkable functions that make up human intelligence. The founding ambition of this area was to develop an innovative machine that thinks like a human being, and this idea is essentially what came to be called the philosophy of artificial intelligence. Different philosophers have taken different views of artificial intelligence.

Russell and Norvig held that AI is essentially the study of intelligent agents that receive percepts from the environment and perform actions. Every such agent implements a function that maps percept sequences to actions, and different approaches to AI cover different ways of representing these functions.
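
A minimal sketch of that agent abstraction in Python. The thermostat agent and its temperature thresholds are invented for illustration; the point is only that the agent is literally a function from the percept sequence seen so far to an action.

    # An agent as a function from percept sequences to actions
    # (Russell and Norvig's abstraction). The thermostat agent and its
    # thresholds are hypothetical, chosen only to illustrate the idea.
    from typing import List

    def thermostat_agent(percepts: List[float]) -> str:
        """Map the sequence of temperature readings seen so far to an action."""
        latest = percepts[-1]
        if latest < 18.0:
            return "heat_on"
        if latest > 22.0:
            return "heat_off"
        return "no_op"

    history = [17.2, 17.8, 19.5, 23.1]
    for t in range(len(history)):
        print(thermostat_agent(history[: t + 1]))
    # heat_on, heat_on, no_op, heat_off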

According to Haugeland, AI is the exciting new effort to make computers think: machines with minds, in the full and literal sense. For Bellman it was the automation of activities that we associate with human thinking, activities such as decision-making, problem solving, learning, or reasoning. Thus they essentially described a machine that is fully bound up with human thinking; that is to say, such machines do think.

Overall, artificially intelligent machines can be categorised into four classes:

• Systems that think like humans
• Systems that act like humans
• Systems that think rationally
• Systems that act rationally

Acting Humanly: The Turing Test Approach. The Turing test, named after Alan Turing, was designed to provide a satisfactory operational definition of intelligence. Turing defined intelligent behavior as the ability to achieve human-level performance in all cognitive tasks, sufficient to fool an interrogator. The test shows that machines can interact with humans the way humans interact within their environment.

Thinking Humanly: The Cognitive Modeling Approach. The interdisciplinary field of cognitive science brings together computer models from AI and experimental techniques from psychology in an attempt to construct precise and testable theories of the workings of the human mind. If we say that a given program thinks like a human, we must have some way of determining how humans think; for that, we need to get inside the actual workings of the human mind. One has to determine how humans think in order to judge whether or not a program thinks like a human.

Thinking Rationally: Right thinking concerns the inferential character of every reasoning process. Laws of thought play an important role here because these laws give the right account of syllogistic inference; the classic example is the syllogism "all men are mortal; Socrates is a man; therefore Socrates is mortal." Logicians traditionally recognize three Laws of Thought: the law of Identity, the law of Contradiction, and the law of Excluded Middle. These Laws of Thought apply across different contexts.

The Chinese room argument: The idea that intelligence is the same as intelligent behavior has been challenged by some. The best-known counter-argument is John Searle's Chinese room thought experiment. Searle describes a thought experiment in which a person who does not understand Chinese is locked in a room. Outside the room is a person who slips notes written in Chinese into the room through a slot. The person inside the room is given an enormous manual in which she can find detailed instructions for responding to the notes she receives from outside.

Searle argued that even if the person outside the room gets the impression that he is in a conversation with another Chinese-speaking person, the person inside the room does not understand Chinese. Likewise, his argument continues, even though a machine behaves in an intelligent manner, it does not follow that it is intelligent. The word "intelligent" can be replaced by the word "conscious" and a similar argument can be made.

Needless to say, the philosophy of AI today involves far more than the topics mentioned above, and, inevitably, the philosophy of AI tomorrow will include new debates and issues we cannot foresee now. Because machines, inevitably, will get smarter and smarter (regardless of just how smart they get), the philosophy of AI, pure and simple, is a growth industry. With every human feat that machines match, the "big" questions will only attract more attention.

