
Agent: In artificial intelligence, an intelligent agent (IA) is an autonomous entity that acts upon an environment, directing its activity towards achieving goals (i.e. it is an agent), perceiving through sensors and acting through actuators.

Artificial intelligence (AI) refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. The term may also be applied to any machine that exhibits traits associated with a human mind, such as learning and problem-solving.

Agent function: a function that maps a given percept sequence to an action. A percept sequence is the history of everything the intelligent agent has perceived. Agent program: a concrete implementation, or execution, of the agent function.
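
To make the distinction concrete, here is a minimal sketch in Python (a hypothetical vacuum-world agent, not part of the original notes): the agent function is defined over the whole percept sequence, while the agent program is the code that actually runs, receiving one percept at a time.

# Minimal sketch: agent function vs. agent program.
# The vacuum-world percepts and actions below are illustrative assumptions.

def agent_function(percept_sequence):
    # The mathematical ideal: maps the entire percept history to an action.
    location, status = percept_sequence[-1]
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

class AgentProgram:
    # The concrete implementation: called with one percept at a time.
    def __init__(self):
        self.history = []

    def __call__(self, percept):
        self.history.append(percept)
        return agent_function(self.history)

program = AgentProgram()
print(program(("A", "Dirty")))   # -> Suck
print(program(("A", "Clean")))   # -> Right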

State space search is a process used in the field of computer science, including artificial intelligence (AI), in which successive configurations or states of an instance are considered, with the intention of finding a goal state with a desired property.

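
As a small illustration (the state graph below is hypothetical, chosen only to show the idea), a breadth-first search considers successive states reachable from the start until it finds one with the desired goal property:

from collections import deque

# Hypothetical state space: each state maps to its successor states.
SUCCESSORS = {
    "S": ["A", "B"],
    "A": ["C"],
    "B": ["C", "G"],
    "C": ["G"],
    "G": [],
}

def breadth_first_search(start, is_goal):
    # Expand states level by level until a goal state is found.
    frontier = deque([[start]])          # queue of paths, not just states
    explored = set()
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if is_goal(state):
            return path
        if state in explored:
            continue
        explored.add(state)
        for nxt in SUCCESSORS[state]:
            frontier.append(path + [nxt])
    return None

print(breadth_first_search("S", lambda s: s == "G"))  # -> ['S', 'B', 'G']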

Turing tackled the problem of machine intelligence: can computers think? He investigated and considered the possibility of artificial intelligence for many years.

Objections:

• Some of the objections Turing considered, such as the theoretical objection and the theological objection, still carry some weight today, since they hold that a machine's intelligence is not real thinking but merely programmed behaviour.

The commonly used objections are:

• Consciousness objection

• This objection holds that machines cannot feel emotions in the same way as humans.

• Lady Lovelace's objection

• This objection holds that machines can only do what they are programmed to do.

The other objections are:

• The chimpanzee’s objection:

• According to the first objection, the test is too conservative. Few would deny that chimpanzees can think, yet no chimpanzee can pass the Turing test. If thinking animals can fail, then presumably a thinking computer can also fail the Turing test.
• The sense organs objection:

• The test focuses on the computer's ability to make verbal responses. It does not test whether the computer responds to objects that are seen and touched, as a human does.

• Simulation objection:

• Suppose a computer passes the Turing test. How can we say that it thinks? Success in the test shows only that it has simulated thinking.

• The black box objection:

• A black box is a device whose inner workings are allowed to remain a mystery. In the test, the computer is treated as a black box: the judgment of whether it thinks or not is based on outward behaviour.

Predictions:

The probability of fooling the interrogator depends on how skilled the interrogator is:

• In 2002, one entrant fooled one judge in the Loebner Prize competition.

• It is hard to imagine what that judge was thinking, even after looking at the transcript.

• Other examples include chatbots and online agents fooling humans.

For example, the Julia chatbot; see Lenny Foner's account at foner.www.media.mit.edu/people/foner/Julia/.

• Given that success varies more with the skill of the interrogator than with the program, the chance today is nearly 10%.

• Within 50 years, the entertainment industry will have made sufficient investments in artificial actors to create credible impersonators.

Based on such performance, a computer would then have almost a 90% chance of clearing a five-minute Turing test.

An agent that senses only partial information about the state cannot be perfectly
rational.
False. Perfect rationality refers to the ability to make good decisions given the sensor
information received.
There exist task environments in which no pure reflex agent can behave rationally.
True. A pure reflex agent ignores previous percepts, so cannot obtain an optimal state
estimate in a partially observable environment. For example, correspondence chess is
played by sending moves; if the other player's move is the current percept, a reflex
agent could not keep track of the board state and would have to respond to, say, "a4" in
the same way regardless of the position in which it was played.
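
A brief sketch of this point (illustrative only; the move tables and reply policy are made up): a pure reflex agent must map the current percept directly to an action, while a model-based agent can accumulate the move history and respond differently to "a4" depending on the position.

# Illustrative contrast for correspondence chess, where each percept
# is just the opponent's latest move.

class PureReflexAgent:
    def act(self, percept):
        # Same percept, same reply, regardless of the position.
        return {"a4": "e5"}.get(percept, "resign")

class ModelBasedAgent:
    def __init__(self):
        self.moves = []          # internal state: the game so far

    def act(self, percept):
        self.moves.append(percept)
        # The reply may depend on the whole reconstructed position;
        # this placeholder policy just varies with the move number.
        return "e5" if len(self.moves) == 1 else "Nf6"

reflex, model = PureReflexAgent(), ModelBasedAgent()
print(reflex.act("a4"), model.act("a4"))   # both can answer the first move
print(reflex.act("a4"), model.act("a4"))   # only the model-based agent adapts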
There exists a task environment in which every agent is rational.
True. For example, in an environment with a single state, such that all actions have the same reward, it doesn't matter which action is taken. More generally, any environment that is reward-invariant under permutation of the actions will satisfy this property.
The input to an agent program is the same as the input to the agent function.
False. The agent function, notionally speaking, takes as input the entire percept
sequence up to that point, whereas the agent program takes the current percept only.
Every agent function is implementable by some program/machine combination.
False. For example, the environment may contain Turing machines and input tapes and
the agent's job is to solve the halting problem; there is an agent function that specifies
the right answers, but no agent program can implement it. Another example would be
an agent function that requires solving intractable problem instances of arbitrary size in
constant time.
a. Formulate the problem precisely, making only those distinctions necessary to ensure a valid
solution. Draw a diagram of the complete state space.
b. Implement and solve the problem optimally using an appropriate search algorithm. Is it a good
idea to check for repeated states?
c. Why do you think people have a hard time solving this puzzle, given that the state space is so
simple?

3.9
a.
b. It is a good idea to check for repeated states, because without such a check the search can loop back through the initial state and revisit configurations indefinitely. But since the search space is so small, we can still use an optimal search.
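
In AIMA this exercise number corresponds to the missionaries-and-cannibals puzzle; assuming that is the puzzle intended here, a compact breadth-first solver with the repeated-state check from part (b) might look like this (illustrative sketch only):

from collections import deque

def valid(m, c):
    # On each bank, missionaries (if any) must not be outnumbered.
    return (m == 0 or m >= c) and (3 - m == 0 or 3 - m >= 3 - c)

def successors(state):
    m, c, boat_left = state             # missionaries/cannibals on left bank
    for dm, dc in [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]:
        m2 = m - dm if boat_left else m + dm
        c2 = c - dc if boat_left else c + dc
        if 0 <= m2 <= 3 and 0 <= c2 <= 3 and valid(m2, c2):
            yield (m2, c2, not boat_left)

def solve(start=(3, 3, True), goal=(0, 0, False)):
    frontier, explored = deque([[start]]), set()
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        if path[-1] in explored:        # the repeated-state check from (b)
            continue
        explored.add(path[-1])
        frontier.extend(path + [s] for s in successors(path[-1]))

print(solve())   # optimal solution: 11 crossings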

(a) Class scheduling: There is a fixed number of professors and classrooms, a list of classes to be offered, and a list of possible time slots for classes. Each professor has a set of classes that he or she can teach. Answer: The four quantities in this problem are: teachers, subjects, classrooms and time slots. We can use two matrices of variables, Tij and Sij. Tij represents the teacher in classroom i at time j; Sij represents the subject being taught in classroom i at time j. The domain of each Tij variable is the set of teachers, and the domain of each Sij variable is the set of subjects. Let D(t) denote the set of subjects that teacher t can teach. The constraints are: Tij ≠ Tkj for k ≠ i, which enforces that no teacher is assigned to two classes that take place at the same time; and, between every Sij and Tij, a constraint denoted Cij(t, s) which ensures that if teacher t is assigned to Tij, then Sij is assigned a value from D(t). An example of the constraint C is C(Tij, Sij) = {(Dechter, 6a), (Dechter, 171), (Dechter, 175a), (Smyth, 171), (Smyth, 278), (Irani, 6a), . . .}. In general, C(Tij, Sij) = {(t, s) | teacher t can teach subject s}.
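
As a hedged sketch of how these constraints could be checked in code (the teacher names and subject numbers are copied from the example above; everything else is an illustrative assumption):

# Hypothetical sketch of the two constraint families described above.
# T[i][j]: teacher in classroom i at time j; S[i][j]: subject taught there.

CAN_TEACH = {                       # D(t): subjects teacher t can teach
    "Dechter": {"6a", "171", "175a"},
    "Smyth": {"171", "278"},
    "Irani": {"6a"},
}

def consistent(T, S):
    classrooms, slots = len(T), len(T[0])
    for j in range(slots):
        teachers_now = [T[i][j] for i in range(classrooms) if T[i][j]]
        if len(teachers_now) != len(set(teachers_now)):
            return False            # Tij ≠ Tkj: no teacher twice at time j
        for i in range(classrooms):
            if T[i][j] and S[i][j] not in CAN_TEACH[T[i][j]]:
                return False        # Cij: subject must come from D(t)
    return True

# Two classrooms, one time slot:
T = [["Dechter"], ["Smyth"]]
S = [["171"], ["278"]]
print(consistent(T, S))             # -> True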
