
Name: Abdulrahman Bajsauir

ID: 62130320
Q1: For each of the following assertions, say whether it is true or false and support your
answer with examples or counterexamples where appropriate.

A. An agent that senses only partial information about the state cannot be perfectly
rational.
False. Perfect rationality means maximizing expected performance given the
percept sequence received so far, not omniscience about the true state of the
world. An agent with partial information can still choose the best action
relative to what it knows. A self-driving car with limited sensor range, for
instance, can still drive rationally by obeying traffic rules and avoiding the
obstacles it can detect, even though it cannot see the entire road ahead.

B. There exist task environments in which no pure reflex agent can behave
rationally.

True. A pure reflex agent selects its action from the current percept alone,
with no memory of past percepts. In any environment where the rational action
depends on the percept history, such an agent must fail. For example, a vacuum
agent with a dirt sensor but no location sensor receives the same percept in
both squares; after cleaning one square, the correct next move depends on where
it has already been, which a pure reflex agent cannot remember.
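A minimal sketch of a pure reflex agent (a hypothetical two-square vacuum
world, not from the question itself): the agent maps the current percept
directly to an action with no memory of past percepts.

```python
def reflex_vacuum_agent(percept):
    """percept is a (location, status) pair, e.g. ('A', 'Dirty')."""
    location, status = percept
    if status == 'Dirty':
        return 'Suck'           # react to dirt in the current square
    # otherwise move to the other square; no history is consulted
    return 'Right' if location == 'A' else 'Left'

print(reflex_vacuum_agent(('A', 'Dirty')))  # Suck
print(reflex_vacuum_agent(('A', 'Clean')))  # Right
```

If the percept omitted the location, both squares would look identical to this
agent, and no fixed rule could always pick the rational move.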

C. There exists a task environment in which every agent is rational.

True. Consider a task environment in which the performance measure awards every
agent the same score regardless of its actions, for example, one in which all
outcomes are rewarded equally. No agent can do better or worse than any other,
so every agent, including a random or inert one, maximizes expected performance
and is therefore rational.
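A sketch of this edge case (a hypothetical constant-reward environment): every
step yields the same fixed reward no matter what the agent does, so any agent
achieves the maximal score and is rational by definition.

```python
import random

def run_episode(agent, steps=10):
    """Run an agent in an environment that ignores its actions entirely."""
    score = 0
    for _ in range(steps):
        agent('percept')   # the chosen action has no effect on the score
        score += 1         # every step yields the same fixed reward
    return score

random_agent = lambda percept: random.choice(['left', 'right'])
lazy_agent = lambda percept: 'noop'

print(run_episode(random_agent))  # 10
print(run_episode(lazy_agent))    # 10 -- both achieve the optimum
```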
Q2: For each of the following activities, give a PEAS description of the task and
environment type.

Playing Soccer

● Performance: Win the game, score goals, exhibit teamwork, and maintain fair
play.
● Environment: Soccer field with boundaries, goals, ball, teammates, opponents,
referee, and spectators.
● Actuators: Legs for running, kicking, and dribbling; arms for throwing/catching;
voice for communication.
● Sensors: Eyes for perceiving field, ball, teammates, and opponents; ears for
instructions/calls; skin for ball/opponent contact.

Shopping for Used AI Books Online

● Performance: Find and purchase desired AI books at a reasonable price.


● Environment: Online marketplace with search filters, book listings, prices, seller
ratings, and payment options.
● Actuators: Mouse/touchpad for navigation, clicking listings, adding to cart,
checking out; keyboard for typing search queries and payment details.
● Sensors: Eyes for reading descriptions, prices, and ratings; ears for audio
descriptions/reviews.

Practicing Tennis Against a Wall

● Performance: Improve tennis skills, accuracy, and consistency in hitting the ball.
● Environment: Tennis court with wall, racket, and ball.
● Actuators: Tennis racket for hitting the ball against the wall.
● Sensors: Eyes for tracking ball trajectory; ears for ball hitting the wall sound; skin
for racket-ball impact.

Performing a High Jump

● Performance: Clear the highest possible height in the high jump event.
● Environment: Athletics track with high jump pit, landing mat, competitors, and
officials.
● Actuators: Legs for running, jumping, and clearing the bar.
● Sensors: Eyes for perceiving bar height and position; ears for instructions/signals;
skin for bar/mat contact.

Bidding on an Item at an Auction

● Performance: Win the auction at the lowest possible price (within budget).
● Environment: Auction platform with listings, bidding history, current bids, and
time remaining.
● Actuators: Mouse/touchpad for placing and increasing bids.
● Sensors: Eyes for reading item descriptions, current bids, and time; ears for
auctioneer announcements/bid increments.
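A PEAS description can be captured as a simple record; a hypothetical sketch
for the soccer task above (field names and entries are illustrative):

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    performance: list   # Performance measure
    environment: list   # Environment
    actuators: list     # Actuators
    sensors: list       # Sensors

soccer = PEAS(
    performance=['win the game', 'score goals', 'teamwork', 'fair play'],
    environment=['field', 'ball', 'teammates', 'opponents', 'referee'],
    actuators=['legs', 'arms', 'voice'],
    sensors=['eyes', 'ears', 'touch'],
)
print(soccer.performance[0])  # win the game
```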

Q3: Define in your own words the following terms: agent, agent function, agent
program,
reflex agent, model-based agent, goal-based agent, utility-based agent, and learning
agent.
1. Agent: An entity that can perceive its environment and take actions that affect it.
2. Agent Function: A mapping from the agent's complete percept sequence to an
action.
3. Agent Program: The specific implementation of the agent function that
determines its behavior.
4. Reflex Agent: An agent that chooses actions based solely on the current
perception.
5. Model-Based Agent: An agent that maintains an internal model of the
environment to guide its actions.
6. Goal-Based Agent: An agent that operates by setting and actively pursuing
specific goals.
7. Utility-Based Agent: An agent that makes decisions based on the anticipated
value or desirability of different outcomes.
8. Learning Agent: An agent that can improve its performance over time by learning
from experience.
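The difference between a reflex agent and a model-based agent can be
illustrated with a hypothetical two-square vacuum world (names are
illustrative): the model-based agent keeps internal state built from the
percept history, which a reflex agent lacks.

```python
class ModelBasedAgent:
    def __init__(self):
        self.seen_clean = set()   # internal model: squares known to be clean

    def act(self, percept):
        location, status = percept
        if status == 'Dirty':
            return 'Suck'
        self.seen_clean.add(location)     # update the model from the percept
        if self.seen_clean >= {'A', 'B'}:
            return 'NoOp'                 # the model says all squares are clean
        return 'Right' if location == 'A' else 'Left'

agent = ModelBasedAgent()
print(agent.act(('A', 'Clean')))  # Right
print(agent.act(('B', 'Clean')))  # NoOp -- the model recalls A was clean
```

A pure reflex agent given the same percepts would keep shuttling between the
squares forever, because it has no way to remember that both are clean.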
