
Subject: Robotics and Intelligent Systems

Sessional Paper Solution

Que 1
A) List the categories of sensory perception that can be incorporated in robots. Explain any four of them in brief.
Ans:- 1) Light Sensor
2) Proximity Sensor
3) Sound Sensor
4) Temperature Sensor
5) Acceleration Sensor
1) Light Sensor
A light sensor is a transducer used for detecting light; it creates a voltage difference equivalent to the light intensity falling on it.
The two main light sensors used in robots are photovoltaic cells and photoresistors. Other kinds of light sensors, such as phototransistors and phototubes, are rarely used.
The types of light sensors used in robotics are:
Photoresistor - A type of resistor used for detecting light. In a photoresistor, resistance varies with the change in light intensity; the light falling on a photoresistor is inversely proportional to its resistance. A photoresistor is also commonly called a Light Dependent Resistor (LDR).
Photovoltaic cells - Photovoltaic cells are energy-conversion devices used to convert solar radiation into electrical energy. They are used when planning to build a solar robot. Individually, photovoltaic cells are considered an energy source, but an implementation combined with capacitors and transistors can convert them into a sensor.
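As an illustrative sketch, an LDR is typically read through a voltage divider, so the measured voltage tracks light intensity. The component values and the 10 kΩ fixed resistor below are assumptions for illustration, not from the text:

```python
# Voltage-divider model commonly used to read an LDR with an ADC.
# Vout = Vcc * R_fixed / (R_fixed + R_ldr); as light increases, the LDR
# resistance drops, so Vout rises. Component values are illustrative.

def ldr_divider_voltage(r_ldr_ohms, r_fixed_ohms=10_000, vcc=5.0):
    """Output voltage of a fixed-resistor/LDR divider (LDR on the high side)."""
    return vcc * r_fixed_ohms / (r_fixed_ohms + r_ldr_ohms)

# Bright light: LDR resistance is low, so the output voltage is high.
bright = ldr_divider_voltage(1_000)
# Darkness: LDR resistance is high, so the output voltage is low.
dark = ldr_divider_voltage(1_000_000)
print(bright > dark)  # True
```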
2) Proximity Sensor
A proximity sensor can detect the presence of a nearby object without any physical contact. Its working is simple: a transmitter emits electromagnetic radiation, and a receiver receives and analyzes the return signal for interruptions. The amount of radiation the receiver picks up from the surroundings can therefore be used to detect the presence of a nearby object.
The types of proximity sensors used in robotics are:
Infrared (IR) transceivers - In an IR sensor, an LED transmits a beam of IR light; if it finds an obstacle, the light is reflected back and captured by an IR receiver.
Ultrasonic sensor - In ultrasonic sensors, high-frequency sound waves are generated by a transmitter, and the received echo pulse indicates an object interruption. In general, ultrasonic sensors are used for distance measurement in robotic systems.
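The distance calculation behind ultrasonic ranging can be sketched as follows; the 343 m/s figure assumes air at roughly 20 °C:

```python
# Distance from an ultrasonic sensor's echo time. The transmitted pulse
# travels to the object and back, so the one-way distance is half the
# round-trip time multiplied by the speed of sound (~343 m/s at 20 C).

SPEED_OF_SOUND_M_PER_S = 343.0

def echo_time_to_distance_m(echo_time_s):
    """Convert a round-trip echo time in seconds to a one-way distance in metres."""
    return SPEED_OF_SOUND_M_PER_S * echo_time_s / 2.0

# A round trip of 2/343 s corresponds to a one-way distance of 1 m.
print(echo_time_to_distance_m(2.0 / 343.0))  # 1.0
```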
3) Sound Sensor
Sound sensors are generally microphones used to detect sound and return a voltage equivalent to the sound level. Using a sound sensor, a simple robot can be designed to navigate based on the sound it receives.
Implementing sound sensors is not as easy as light sensors, because they generate a very small voltage difference, which must be amplified to produce a measurable voltage change.
4) Temperature Sensor
Temperature sensors are used for sensing changes in the temperature of the surroundings. They are based on the principle that a change in temperature produces a change in voltage difference; this voltage change provides the equivalent temperature value of the surroundings.
A few commonly used temperature sensor ICs are TMP35, TMP37, LM34, LM35, etc.

B) Explain any four types of robots with an example.


Ans :- 1) Mobile Robots
Mobile robots are able to move from one location to another using locomotion. A mobile robot is an automatic machine capable of navigating an uncontrolled environment without any requirement of physical or electromechanical guidance devices. Mobile robots are of two types:
(a) Rolling robots - Rolling robots require wheels to move around. They can search easily and quickly, but they are only useful in flat areas.
(b) Walking robots - Robots with legs are usually used where the terrain is rocky. Most walking robots have at least 4 legs.
2) Industrial Robots
Industrial robots perform the same tasks repeatedly without ever moving. These robots work in industries where there is a requirement to perform dull and repetitive tasks suitable for a robot.
An industrial robot never gets tired; it will perform its work day and night without ever complaining.
3) Autonomous Robots
Autonomous robots are self-supported. They use a program that provides them the opportunity to decide which action to perform depending on their surroundings. Using artificial intelligence, these robots often learn new behaviors. They start with a short routine and adapt this routine to be more successful in the task they perform; the most successful routine is repeated.
4) Remote-Controlled Robots
Remote-controlled robots are used for performing complicated and undetermined tasks that autonomous robots cannot perform due to uncertainty of operation. Complicated tasks are best performed by human beings with real brainpower, so a person can guide a robot using a remote. Using remote-controlled operation, humans can perform dangerous tasks without being at the spot where the tasks are performed.

Que 2
A) State and explain the laws of robotics.
Ans :- The word robot was first introduced to the public by the Czech writer Karel Capek in his play Rossum's Universal Robots (R.U.R.), published in 1920. The play begins with a factory that makes artificial people known as robots.
The word "robotics" was coined by the Russian-born American scientist Isaac Asimov in the 1940s.
The three laws of robotics:
Isaac Asimov proposed his three "Laws of Robotics", and he later added a "zeroth law":
● Zeroth Law - A robot may not injure humanity, or, through inaction, allow humanity to come to harm.
● First Law - A robot may not injure a human being, or, through inaction, allow a human being to come to harm, unless this would violate a higher-order law.
● Second Law - A robot must obey the orders given to it by human beings, except where such orders would conflict with a higher-order law.
● Third Law - A robot must protect its own existence as long as such protection does not conflict with a higher-order law.

B) Explain PEAS in artificial intelligence with a suitable example.


Ans :- We know that there are different types of agents in AI. The PEAS system is used to categorize similar agents together. The PEAS system delivers the performance measure with respect to the environment, actuators, and sensors of the respective agent. Most of the highest-performing agents are rational agents.
Rational agent: The rational agent considers all possibilities and chooses to perform the most efficient action. For example, it chooses the shortest path with low cost for high efficiency. PEAS stands for Performance measure, Environment, Actuators, Sensors.
1. Performance measure: The performance measure is the unit used to define the success of an agent. Performance varies between agents based on their different percepts.
2. Environment: The environment is the surroundings of an agent at every instant. It keeps changing with time if the agent is in motion. There are 5 major types of environments:
○ Fully observable & partially observable
○ Episodic & sequential
○ Static & dynamic
○ Discrete & continuous
○ Deterministic & stochastic
3. Actuators: An actuator is the part of the agent that delivers the output of an action to the environment.
4. Sensors: Sensors are the receptive parts of an agent that take in the input for the agent.
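As a sketch, here is the PEAS description of an automated taxi, a standard textbook example; the dictionary layout is just one convenient way to record the four parts, not a prescribed format:

```python
# PEAS description of an automated taxi (a standard textbook example).
peas_taxi = {
    "performance_measure": ["safety", "legal driving", "passenger comfort", "profit"],
    "environment": ["roads", "traffic", "pedestrians", "customers"],
    "actuators": ["steering", "accelerator", "brake", "horn", "display"],
    "sensors": ["cameras", "GPS", "speedometer", "odometer", "sonar"],
}

def describe(agent_name, peas):
    """Render a PEAS dictionary as a single summary line."""
    return f"{agent_name}: " + "; ".join(f"{k}={v}" for k, v in peas.items())

print(describe("taxi", peas_taxi))
```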

Que 3
A) Describe the minimax algorithm with a suitable example.
Ans :- The mini-max algorithm is a recursive or backtracking algorithm used in decision-making and game theory. It provides an optimal move for the player, assuming that the opponent is also playing optimally.
● The mini-max algorithm uses recursion to search through the game tree.
● The min-max algorithm is mostly used for game playing in AI, such as chess, checkers, tic-tac-toe, Go, and various two-player games. This algorithm computes the minimax decision for the current state.
● In this algorithm, two players play the game; one is called MAX and the other is called MIN.
● Both players fight it out, as the opponent player gets the minimum benefit while they get the maximum benefit.
● Both players of the game are opponents of each other: MAX selects the maximized value and MIN selects the minimized value.
● The minimax algorithm performs a depth-first search for the exploration of the complete game tree.
● The minimax algorithm proceeds all the way down to the terminal nodes of the tree, then backtracks up the tree via the recursion.
Pseudo-code for the minimax algorithm:

function minimax(node, depth, maximizingPlayer) is
    if depth == 0 or node is a terminal node then
        return static evaluation of node

    if maximizingPlayer then            // for Maximizer player
        maxEva = -infinity
        for each child of node do
            eva = minimax(child, depth-1, false)
            maxEva = max(maxEva, eva)   // gives maximum of the values
        return maxEva
    else                                // for Minimizer player
        minEva = +infinity
        for each child of node do
            eva = minimax(child, depth-1, true)
            minEva = min(minEva, eva)   // gives minimum of the values
        return minEva

Initial call:
minimax(node, 3, true)
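The pseudo-code can be turned into a runnable sketch. Here a node is either a leaf value or a list of children (an assumption made for illustration), and it is applied to the example tree used in the worked steps below, whose leaves are -1, 4, 2, 6, -3, -5, 0, 7:

```python
# Runnable sketch of the minimax pseudo-code. A node is either a leaf
# (a number) or a list of child nodes.

import math

def minimax(node, depth, maximizing_player):
    # Terminal test: depth exhausted or leaf reached.
    if depth == 0 or not isinstance(node, list):
        return node                      # static evaluation of the leaf
    if maximizing_player:                # Maximizer: largest child value
        max_eva = -math.inf
        for child in node:
            max_eva = max(max_eva, minimax(child, depth - 1, False))
        return max_eva
    else:                                # Minimizer: smallest child value
        min_eva = math.inf
        for child in node:
            min_eva = min(min_eva, minimax(child, depth - 1, True))
        return min_eva

# Game tree from the example: A -> (B, C), B -> (D, E), C -> (F, G).
tree = [[[-1, 4], [2, 6]], [[-3, -5], [0, 7]]]
print(minimax(tree, 3, True))  # 4
```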
Working of the Min-Max algorithm:
● The working of the minimax algorithm can be easily described using an example. Below we take an example game tree representing a two-player game.
● In this example, there are two players: one is called Maximizer and the other is called Minimizer.
● Maximizer will try to get the maximum possible score, and Minimizer will try to get the minimum possible score.
● This algorithm applies DFS, so in this game tree we have to go all the way down through the leaves to reach the terminal nodes.
● At the terminal nodes, the terminal values are given, so we compare those values and backtrack up the tree until the initial state is reached. Following are the main steps involved in solving the two-player game tree:
Step 1: In the first step, the algorithm generates the entire game tree and applies the utility function to get the utility values for the terminal states. In the tree diagram, let A be the initial state of the tree. Suppose the Maximizer takes the first turn, with a worst-case initial value of -∞, and the Minimizer takes the next turn, with a worst-case initial value of +∞.

Step 2: Now, we first find the utility values for the Maximizer. Its initial value is -∞, so we compare each terminal-state value with the Maximizer's initial value and determine the higher node values. It finds the maximum among them all.
● For node D: max(-1, 4) = 4
● For node E: max(2, 6) = 6
● For node F: max(-3, -5) = -3
● For node G: max(0, 7) = 7

Step 3: In the next step, it is the Minimizer's turn, so it compares all node values with +∞ and finds the third-layer node values.
● For node B: min(4, 6) = 4
● For node C: min(-3, 7) = -3

Step 4: Now it is the Maximizer's turn, and it will again choose the maximum of all node values and find the maximum value for the root node. In this game tree there are only 4 layers, so we reach the root node immediately, but in real games there will be more than 4 layers.
● For node A: max(4, -3) = 4

That is the complete workflow of the minimax two-player game.
Properties of the Mini-Max algorithm:
● Complete - The Min-Max algorithm is complete. It will definitely find a solution (if one exists) in a finite search tree.
● Optimal - The Min-Max algorithm is optimal if both opponents play optimally.
● Time complexity - As it performs a DFS of the game tree, the time complexity of the Min-Max algorithm is O(b^m), where b is the branching factor of the game tree and m is the maximum depth of the tree.
● Space complexity - The space complexity of the Mini-Max algorithm is also similar to DFS: O(b·m).

B) Prove that i) A ↔ B ≡ (A ∧ B) ∨ (¬A ∧ ¬B) and ii) ¬A ∨ B ≡ A → B.

Ans :- i) First, we create a truth table with columns for A, B, A ↔ B, A ∧ B, ¬A, ¬B, ¬A ∧ ¬B, and (A ∧ B) ∨ (¬A ∧ ¬B).

A | B | A ↔ B | A ∧ B | ¬A | ¬B | ¬A ∧ ¬B | (A ∧ B) ∨ (¬A ∧ ¬B)
T | T |   T   |   T   | F  | F  |    F    |          T
T | F |   F   |   F   | F  | T  |    F    |          F
F | T |   F   |   F   | T  | F  |    F    |          F
F | F |   T   |   F   | T  | T  |    T    |          T

In the truth table, we first fill in the columns for A, B, A ∧ B, ¬A, and ¬B according to the truth values of A and B. Next, we fill in the column for ¬A ∧ ¬B by taking the conjunction of the ¬A and ¬B columns, and finally the disjunction (A ∧ B) ∨ (¬A ∧ ¬B). Since this last column matches the A ↔ B column in every row, the two formulas are logically equivalent.

ii) ¬A ∨ B ≡ A → B
To show that ¬A ∨ B is logically equivalent to A → B, we can use a truth table:

A | B | ¬A | ¬A ∨ B | A → B
T | T | F  |   T    |   T
T | F | F  |   F    |   F
F | T | T  |   T    |   T
F | F | T  |   T    |   T

In the truth table, we first fill in the columns for A and B according to their truth values, then the column for ¬A by negating A, and then ¬A ∨ B as the disjunction of ¬A and B. Since the ¬A ∨ B column matches the A → B column in every row, the two formulas are logically equivalent.
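Both equivalences can also be checked mechanically by enumerating all four truth assignments, as in this short sketch:

```python
# Brute-force check of both equivalences over all four truth assignments.
from itertools import product

def iff(a, b):          # A <-> B
    return a == b

def implies(a, b):      # A -> B
    return (not a) or b

for a, b in product([True, False], repeat=2):
    # i) A <-> B  ==  (A and B) or (not A and not B)
    assert iff(a, b) == ((a and b) or ((not a) and (not b)))
    # ii) (not A) or B  ==  A -> B
    assert ((not a) or b) == implies(a, b)
print("both equivalences hold")
```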

Que 4
A) Describe α-β pruning with a suitable algorithm.
Ans :- ● Alpha-beta pruning is a modified version of the minimax algorithm. It is an optimization technique for the minimax algorithm.
● As we have seen in the minimax search algorithm, the number of game states it has to examine is exponential in the depth of the tree. We cannot eliminate the exponent, but we can cut it in half. There is a technique by which we can compute the correct minimax decision without checking every node of the game tree, and this technique is called pruning. It involves two threshold parameters, alpha and beta, for future expansion, so it is called alpha-beta pruning. It is also known as the Alpha-Beta algorithm.
● Alpha-beta pruning can be applied at any depth of a tree, and sometimes it prunes not only tree leaves but entire sub-trees.
● The two parameters can be defined as:
a. Alpha: The best (highest-value) choice found so far at any point along the path of the Maximizer. The initial value of alpha is -∞.
b. Beta: The best (lowest-value) choice found so far at any point along the path of the Minimizer. The initial value of beta is +∞.
● Alpha-beta pruning applied to a standard minimax algorithm returns the same move as the standard algorithm does, but it removes the nodes that do not really affect the final decision yet make the algorithm slow. By pruning these nodes, it makes the algorithm fast.
Pseudo-code for alpha-beta pruning:

function minimax(node, depth, alpha, beta, maximizingPlayer) is
    if depth == 0 or node is a terminal node then
        return static evaluation of node

    if maximizingPlayer then            // for Maximizer player
        maxEva = -infinity
        for each child of node do
            eva = minimax(child, depth-1, alpha, beta, false)
            maxEva = max(maxEva, eva)
            alpha = max(alpha, maxEva)
            if beta <= alpha then
                break
        return maxEva
    else                                // for Minimizer player
        minEva = +infinity
        for each child of node do
            eva = minimax(child, depth-1, alpha, beta, true)
            minEva = min(minEva, eva)
            beta = min(beta, eva)
            if beta <= alpha then
                break
        return minEva
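Here is a runnable sketch of the same procedure, applied to the example tree from the steps below. The `visited` list, and the placeholder values 9 and 8 for the leaves the example never reveals, are assumptions added for illustration so the effect of pruning is visible:

```python
# Runnable sketch of alpha-beta pruning. visited records every leaf that
# is actually evaluated, so pruned leaves never appear in it.

import math

def alphabeta(node, depth, alpha, beta, maximizing_player, visited):
    if depth == 0 or not isinstance(node, list):
        visited.append(node)
        return node
    if maximizing_player:
        max_eva = -math.inf
        for child in node:
            max_eva = max(max_eva, alphabeta(child, depth - 1, alpha, beta, False, visited))
            alpha = max(alpha, max_eva)
            if beta <= alpha:            # Minimizer already has a better option: prune
                break
        return max_eva
    else:
        min_eva = math.inf
        for child in node:
            min_eva = min(min_eva, alphabeta(child, depth - 1, alpha, beta, True, visited))
            beta = min(beta, min_eva)
            if beta <= alpha:            # Maximizer already has a better option: prune
                break
        return min_eva

# A -> (B, C), B -> (D, E), C -> (F, G); D = (2, 3), E = (5, ?), F = (0, 1), G = (?, ?).
tree = [[[2, 3], [5, 9]], [[0, 1], [8, 8]]]
visited = []
print(alphabeta(tree, 3, -math.inf, math.inf, True, visited))  # 3
print(visited)  # [2, 3, 5, 0, 1] -- E's second leaf and all of G are pruned
```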
Step 1: In the first step, the Max player starts the first move from node A, where α = -∞ and β = +∞. These values of alpha and beta are passed down to node B, where again α = -∞ and β = +∞, and node B passes the same values to its child D.

Step 2: At node D, the value of α will be calculated, as it is Max's turn. The value of α is compared first with 2 and then with 3, and max(2, 3) = 3 becomes the value of α at node D; the node value will also be 3.
Step 3: Now the algorithm backtracks to node B, where the value of β changes, as this is Min's turn. β = +∞ is compared with the available successor node value, i.e. min(∞, 3) = 3; hence at node B now α = -∞ and β = 3. In the next step, the algorithm traverses the next successor of node B, which is node E, and the values α = -∞ and β = 3 are passed along.

Step 4: At node E, Max takes its turn, and the value of alpha changes. The current value of alpha is compared with 5, so max(-∞, 5) = 5; hence at node E, α = 5 and β = 3, where α >= β, so the right successor of E is pruned and the algorithm does not traverse it. The value at node E will be 5.

Step 5: In the next step, the algorithm again backtracks the tree, from node B to node A. At node A, the value of alpha is updated; the maximum available value is 3, as max(-∞, 3) = 3, and β = +∞. These two values are now passed to the right successor of A, which is node C.
At node C, α = 3 and β = +∞, and the same values are passed on to node F.
Step 6: At node F, the value of α is again compared, first with the left child, which is 0 (max(3, 0) = 3), and then with the right child, which is 1 (max(3, 1) = 3). α remains 3, but the node value of F becomes 1.
Step 7: Node F returns node value 1 to node C. At C, α = 3 and β = +∞; here the value of beta is updated by comparing with 1, so min(∞, 1) = 1. Now at C, α = 3 and β = 1, which again satisfies the condition α >= β, so the next child of C, which is G, is pruned, and the algorithm does not compute the entire sub-tree of G.

Step 8: C now returns the value 1 to A, where the best value for A is max(3, 1) = 3. The final game tree shows which nodes were computed and which were never computed. Hence the optimal value for the Maximizer is 3 for this example.

B) Explain knowledge representation using a suitable diagram.


Ans :- Knowledge representation and reasoning (KR, KRR) is the part of Artificial Intelligence concerned with how AI agents think and how thinking contributes to their intelligent behavior.
● It is responsible for representing information about the real world so that a computer can understand it and can utilize this knowledge to solve complex real-world problems, such as diagnosing a medical condition or communicating with humans in natural language.
● It also describes how we can represent knowledge in artificial intelligence. Knowledge representation is not just storing data in a database; it also enables an intelligent machine to learn from that knowledge and experience so that it can behave intelligently like a human.
Following are the kinds of knowledge which need to be represented in AI systems:
● Object: All the facts about objects in our world domain. E.g., guitars have strings; trumpets are brass instruments.
● Events: Events are the actions which occur in our world.
● Performance: It describes behavior which involves knowledge about how to do things.
● Meta-knowledge: It is knowledge about what we know.
● Facts: Facts are the truths about the real world and what we represent.
● Knowledge base: The central component of knowledge-based agents is the knowledge base, represented as KB. The knowledge base is a group of sentences (here, "sentence" is used as a technical term and is not identical to a sentence in the English language).
Knowledge: Knowledge is awareness or familiarity gained by experience of facts, data, and situations.
Following are the types of knowledge in artificial intelligence:


Que 5
A) Explain Backward State Space Planning (BSSP) in detail.
Ans :- BSSP behaves similarly to backward state-space search. In this, we move from the target (goal) state back towards sub-goals, tracing the previous action needed to achieve that goal. This process is called regression (going back to the previous goal or sub-goal). These sub-goals must also be checked for consistency, and the chosen action must be relevant.
● Disadvantage: Not a sound algorithm (inconsistencies can sometimes be found).
● Advantage: Small branching factor (much smaller than FSSP).
So for an efficient planning system, we need to combine the features of FSSP and BSSP, which gives rise to goal stack planning, discussed in the next article.

B) Demonstrate planning with propositional logic in detail.


Ans :- The approach we take in this section is based on testing the satisfiability of a logical sentence rather than on proving a theorem. We will be finding models of propositional sentences that look like this:
initial state ∧ all possible action descriptions ∧ goal
The sentence will contain proposition symbols corresponding to every possible action occurrence; a model that satisfies the sentence will assign true to the actions that are part of a correct plan and false to the others. An assignment that corresponds to an incorrect plan will not be a model, because it will be inconsistent with the assertion that the goal is true. If the planning problem is unsolvable, the sentence will be unsatisfiable.
Describing planning problems in propositional logic
The process we follow to translate STRIPS problems into propositional logic is a textbook example (so to speak) of the knowledge representation cycle: we begin with what seems to be a reasonable set of axioms, we find that these axioms allow spurious unintended models, and we write more axioms. Let us begin with a very simple air transport problem. In the initial state (time 0), plane P1 is at SFO and plane P2 is at JFK. The goal is to have P1 at JFK and P2 at SFO; that is, the planes are to change places. First, we need distinct proposition symbols for assertions about each time step. We use superscripts to denote the time step. Thus, the initial state is written as At(P1, SFO)^0 ∧ At(P2, JFK)^0. (Remember that At(P1, SFO)^0 is an atomic symbol.) Because propositional logic has no closed-world assumption, we must also specify the propositions that are not true in the initial state. If some propositions are unknown in the initial state, they can be left unspecified (the open-world assumption). In this example we specify:

¬At(P1, JFK)^0 ∧ ¬At(P2, SFO)^0

The goal itself must be associated with a particular time step. Since we do not know a priori how many steps it takes to achieve the goal, we can try asserting that the goal is true in the initial state, time T = 0. That is, we assert At(P1, JFK)^0 ∧ At(P2, SFO)^0. If that fails, we try again with T = 1, and so on until we reach the minimum feasible plan length. For each value of T, the knowledge base will include only sentences covering the time steps from 0 up to T. To ensure termination, an arbitrary upper limit, Tmax, is imposed.
The SATPLAN algorithm is as follows:

function SATPLAN(problem, Tmax) returns solution or failure
    inputs: problem, a planning problem
            Tmax, an upper limit for plan length
    for T = 0 to Tmax do
        cnf, mapping ← TRANSLATE-TO-SAT(problem, T)
        assignment ← SAT-SOLVER(cnf)
        if assignment is not null then
            return EXTRACT-SOLUTION(assignment, mapping)
    return failure
The next issue is how to encode action descriptions in propositional logic. The most straightforward approach is to have one proposition symbol for each action occurrence; for example, Fly(P1, SFO, JFK)^0 is true if plane P1 flies from SFO to JFK at time 0. For example, we have

At(P1, JFK)^1 ⇔ (At(P1, JFK)^0 ∧ ¬(Fly(P1, JFK, SFO)^0 ∧ At(P1, JFK)^0)) ∨ (Fly(P1, SFO, JFK)^0 ∧ At(P1, SFO)^0)

That is, plane P1 will be at JFK at time 1 if it was at JFK at time 0 and did not fly away, or it was at SFO at time 0 and flew to JFK. We need one such axiom for each plane, airport, and time step. Moreover, each additional airport adds another way to travel to or from a given airport and hence adds more disjuncts to the right-hand side of each axiom.
With these axioms in place, we can run the satisfiability algorithm to find a plan. There ought to be a plan that achieves the goal at time T = 1, namely, the plan in which the two planes swap places. Now, suppose the KB is
initial state ∧ successor-state axioms ∧ goal^1,
which asserts that the goal is true at time T = 1. You can check that the assignment in which
Fly(P1, SFO, JFK)^0 and Fly(P2, JFK, SFO)^0
are true and all other action symbols are false is a model of the KB.
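This model-finding view can be sketched in code for the two-plane problem. As an illustrative assumption, a brute-force enumeration over the four action symbols stands in for a real SAT solver, and the successor-state axiom is evaluated directly as a Python function rather than compiled to CNF:

```python
# Tiny sketch of the SATPLAN idea for the two-plane swap problem. The only
# decision variables at time 0 are the four possible Fly actions; the
# successor-state axiom from the text computes each At(...)^1 fluent.

from itertools import product

PLANES, AIRPORTS = ["P1", "P2"], ["SFO", "JFK"]
ACTIONS = [(p, a, b) for p in PLANES for a in AIRPORTS for b in AIRPORTS if a != b]

def satisfies_goal(fly):
    """fly maps each (plane, origin, destination) action at time 0 to True/False."""
    at0 = {("P1", "SFO"): True, ("P2", "JFK"): True,
           ("P1", "JFK"): False, ("P2", "SFO"): False}
    at1 = {}
    for p in PLANES:
        for x in AIRPORTS:
            other = "JFK" if x == "SFO" else "SFO"
            # At(p, x)^1 <=> (At(p, x)^0 and did not fly away) or (flew in)
            stayed = at0[(p, x)] and not (fly[(p, x, other)] and at0[(p, x)])
            arrived = fly[(p, other, x)] and at0[(p, other)]
            at1[(p, x)] = stayed or arrived
    # Goal at T = 1: the planes have swapped places.
    return at1[("P1", "JFK")] and at1[("P2", "SFO")]

# "SAT solve" by enumerating all assignments to the action symbols.
models = [dict(zip(ACTIONS, bits))
          for bits in product([False, True], repeat=len(ACTIONS))
          if satisfies_goal(dict(zip(ACTIONS, bits)))]
plan = min(models, key=lambda m: sum(m.values()))   # fewest actions
print(sorted(a for a, used in plan.items() if used))
# [('P1', 'SFO', 'JFK'), ('P2', 'JFK', 'SFO')]
```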

Que 6
A) Describe properties of the backward state-space search algorithm.
Ans :- BSSP behaves similarly to backward state-space search. In this, we move from the target (goal) state back towards sub-goals, tracing the previous action needed to achieve that goal. This process is called regression (going back to the previous goal or sub-goal). These sub-goals must also be checked for consistency, and the chosen action must be relevant.
● Disadvantage: Not a sound algorithm (inconsistencies can sometimes be found).
● Advantage: Small branching factor (much smaller than FSSP).
So for an efficient planning system, we need to combine the features of FSSP and BSSP, which gives rise to goal stack planning, discussed in the next article.

B) What is GraphPlan? Explain its termination.

Ans :- ● Planning graphs are an efficient way to create a representation of a planning problem that can be used to:
○ achieve better heuristic estimates
○ directly construct plans
● Planning graphs only work for propositional problems.
● Planning graphs consist of a sequence of levels that correspond to time steps in the plan.
○ Level 0 is the initial state.
○ Each level consists of a set of literals and a set of actions that represent what might be possible at that step in the plan.
○ "Might be" is the key to efficiency: the graph records only a restricted subset of the possible negative interactions among actions.
● Each level consists of:
○ Literals - all those that could be true at that time step, depending upon the actions executed at preceding time steps.
○ Actions - all those actions that could have their preconditions satisfied at that time step, depending on which of the literals actually hold.
Termination: construction of a planning graph always terminates, because literals and actions only accumulate from level to level and their numbers are finite. Once two consecutive levels are identical, the graph is said to have "levelled off", and further expansion adds nothing new.

Que 7
A) What do you mean by uncertainty? State various causes of uncertainty.
Ans:- Uncertainty:
Till now, we have learned knowledge representation using first-order logic and propositional logic with certainty, which means we were sure about the predicates. With this knowledge representation, we might write A → B, which means if A is true then B is true. But consider a situation where we are not sure whether A is true or not: then we cannot express this statement. This situation is called uncertainty.
So to represent uncertain knowledge, where we are not sure about the predicates, we need uncertain reasoning or probabilistic reasoning.
Causes of uncertainty:
Following are some leading causes of uncertainty in the real world:
Information obtained from unreliable sources
Experimental errors
Equipment faults
Temperature variations
Climate change

B) What do you mean by probabilistic reasoning? State and explain Bayes' theorem using probabilistic reasoning.
Ans :- Probabilistic reasoning is a way of knowledge representation where we apply the concept of probability to indicate the uncertainty in knowledge. In probabilistic reasoning, we combine probability theory with logic to handle the uncertainty.
We use probability in probabilistic reasoning because it provides a way to handle the uncertainty that results from someone's laziness or ignorance.
Bayes' theorem
Bayes' theorem is also known as Bayes' rule, Bayes' law, or Bayesian reasoning. It determines the probability of an event with uncertain knowledge.
In probability theory, it relates the conditional probabilities and marginal probabilities of two random events.
Bayes' theorem is named after the British mathematician Thomas Bayes. Bayesian inference is an application of Bayes' theorem, which is fundamental to Bayesian statistics.
Example: If cancer corresponds to one's age, then by using Bayes' theorem we can determine the probability of cancer more accurately with the help of age.
Bayes' theorem can be derived using the product rule and the conditional probability of event A with known event B:
From the product rule we can write:
P(A ∧ B) = P(A|B) P(B)
Similarly, for the probability of event B with known event A:
P(A ∧ B) = P(B|A) P(A)
Equating the right-hand sides of both equations, we get:
P(A|B) = P(B|A) P(A) / P(B)    ...(a)
The above equation (a) is called Bayes' rule or Bayes' theorem. This equation is the basis of most modern AI systems for probabilistic inference.
It shows the simple relationship between joint and conditional probabilities.
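A quick numerical sketch of the rule, using probabilities invented purely for illustration:

```python
# Numerical check of Bayes' rule on a cancer/age style example.
# The probability values below are made up for illustration.

def bayes(p_b_given_a, p_a, p_b):
    """P(A|B) = P(B|A) * P(A) / P(B)."""
    return p_b_given_a * p_a / p_b

# Suppose P(cancer) = 0.01, P(old age) = 0.3, P(old age | cancer) = 0.6.
p_cancer_given_old = bayes(0.6, 0.01, 0.3)
print(p_cancer_given_old)  # 0.02
```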

Que 8
A) Explain Kalman Filters with suitable examples.
Ans :- A Kalman Filter is an algorithm that takes data inputs from multiple sources and estimates unknown variables, despite a potentially high level of signal noise. Often used in navigation and control technology, the Kalman Filter has the advantage of being able to predict unknown values more accurately than if individual predictions were made using single methods of measurement.
WORKING:-
Kalman Filters use a two-step process for estimating unknown variables. The algorithm first predicts the current state variables along with their uncertainties. Then it updates the estimates using a weighted average, wherein more weight is given to estimates with lower uncertainty (higher certainty). Because the filter takes in information from multiple sources, both the current measurement and the predicted state, it can provide a level of accuracy higher than if estimates were made from only one of those sources.

Kalman Filter and Machine Learning

One of the most common uses for the Kalman Filter is in navigation and positioning technology. Imagine a car with a GPS transmitter traveling down a mountain road. A Kalman Filter can be applied to take in the GPS data from the car; however, GPS devices are not always entirely accurate. So the Kalman Filter can also take in speed and heading data to adjust the estimated rate of change of the car's position over time. Naturally, given the laws of physics, the level of variable uncertainty is lower when the car is traveling faster, and vice versa. All of this information is used to predict where the car will be, and then the process is repeated with updated information as the car travels down the road. Because the Kalman Filter is recursive, it does not need the entirety of the car's position and speed data, only the last known position and speed. The underlying model of updating information is similar to that of a Hidden Markov model.
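The predict-update cycle can be sketched for the simplest one-dimensional case. All variable names and noise levels here are illustrative assumptions, not from any particular library:

```python
# A minimal one-dimensional Kalman filter, assuming a constant hidden
# value observed through noisy measurements.

def kalman_1d(measurements, process_var=1e-5, meas_var=0.5,
              init_estimate=0.0, init_var=1.0):
    estimate, var = init_estimate, init_var
    estimates = []
    for z in measurements:
        # Predict: uncertainty grows by the process noise.
        var += process_var
        # Update: the Kalman gain weights the measurement by relative certainty.
        gain = var / (var + meas_var)
        estimate += gain * (z - estimate)
        var *= (1 - gain)
        estimates.append(estimate)
    return estimates

# Noisy readings of a true value of 10; the estimate moves toward 10.
readings = [9.8, 10.4, 10.1, 9.7, 10.2, 10.0]
print(kalman_1d(readings)[-1])
```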

B) Explain different statistical learning methods in AI.


Ans :- Machine learning can be subdivided into three main types:
Supervised learning:
Supervised learning is a type of machine learning in which the machine learns from known
datasets (a set of training examples) and then predicts the output. A supervised learning agent
needs to find the function that matches a given sample set.
Supervised learning can further be classified into two categories of algorithms:
1) Classification
2) Regression
Reinforcement learning:
Reinforcement learning is a type of learning in which an AI agent is trained by giving it some
commands, and on each action the agent gets a reward as feedback. Using these feedbacks, the
agent improves its performance. Reward feedback can be positive or negative, which means that
for each good action the agent receives a positive reward, while for a wrong action it gets a
negative reward. Reinforcement learning is of two types:
1) Positive reinforcement learning
2) Negative reinforcement learning
Unsupervised learning:
Unsupervised learning is learning without supervision or training. In
unsupervised learning, the algorithms are trained with data which is neither labeled nor
classified, and the agent needs to learn from patterns without
corresponding output values.
Unsupervised learning can be classified into two categories of algorithms:
1) Clustering
2) Association
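The supervised classification case above can be made concrete with a toy example. This sketch implements a 1-nearest-neighbor classifier from scratch; the training points and labels are invented for illustration.

```python
# Toy supervised learning: a 1-nearest-neighbor classifier.
# The machine "learns" from a known, labeled dataset and then
# predicts the label of a new, unseen point.

def predict_1nn(train, labels, query):
    """Return the label of the training point closest to `query`."""
    dists = [sum((a - b) ** 2 for a, b in zip(x, query)) for x in train]
    return labels[dists.index(min(dists))]

# Known dataset (training examples) with labels -- invented values.
train = [(1.0, 1.0), (1.2, 0.8), (8.0, 9.0), (9.0, 8.5)]
labels = ["small", "small", "large", "large"]

print(predict_1nn(train, labels, (1.1, 0.9)))  # -> small
print(predict_1nn(train, labels, (8.5, 9.2)))  # -> large
```

A clustering algorithm for the unsupervised case would instead receive only the points, with no labels, and group them by similarity.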

B) Define robotic perception. Explain in detail.


Ans :- Robotic perception
Perception is the process of acquiring, selecting, interpreting, and then organizing the sensory
information that is captured from the real world. For example, human beings have sensory
receptors for touch, taste, smell, sight, and hearing, and the information received from these
receptors is transmitted to the brain, which organizes it.
According to the received information, action is taken by interacting with the environment to
manipulate and navigate objects. Perception and action are very important concepts in the
field of robotics. The following figure shows a complete autonomous robot.
Fig: Autonomous robot (perception-cognition-action loop interacting with the physical world)


There is one important difference between an artificial intelligence program and a robot: the
AI program performs in a computer-simulated environment, while the robot performs in the
physical world. For example:
In chess, an AI program can make a move by searching different nodes,
but it has no facility to touch or sense the physical world.
A chess-playing robot, however, can make a move and grasp the pieces by
interacting with the physical world.
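The perception-cognition-action cycle in the figure above can be sketched as a minimal sense-think-act loop. The sensor reading, the stopping rule, and the step size here are all invented for illustration.

```python
# Minimal sense-think-act loop for the perception-cognition-action cycle.
# The "world" is a toy model: a single distance-to-obstacle value.

def sense(world):
    """Perception: read the distance to an obstacle from the world."""
    return world["obstacle_dist"]

def think(distance, threshold=1.0):
    """Cognition: decide an action from the percept."""
    return "stop" if distance <= threshold else "forward"

def act(world, action):
    """Action: move the robot, which changes the physical world."""
    if action == "forward":
        world["obstacle_dist"] -= 0.5

world = {"obstacle_dist": 3.0}
log = []
for _ in range(10):                # repeated perceive-decide-act cycles
    action = think(sense(world))
    log.append(action)
    if action == "stop":
        break
    act(world, action)
```

Unlike a pure AI program, a real robot closes this loop through physical sensors and actuators, so each `act` call actually changes the world that the next `sense` call observes.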

B) State and explain the ethics of artificial intelligence in robotics.


Ans:- Artificial Intelligence ethics, or AI ethics, comprise a set of values, principles, and
techniques which employ widely accepted standards of right and wrong to guide moral
conduct in the development and deployment of Artificial Intelligence technologies. Robot
ethics, also known as roboethics or machine ethics, is concerned with what rules should be
applied to ensure the ethical behavior of robots, as well as how to
design ethical robots. Roboethics deals with concerns and moral dilemmas such as
whether robots will pose a threat to humans in the long run, or whether using some robots,
such as killer robots in wars, can become problematic for humanity.
Roboticists must guarantee that autonomous systems can exhibit ethically acceptable
behavior in situations where robots, AI systems, and other autonomous systems such as self-
driving vehicles interact with humans.
Artificial Intelligence, automation, and AI ethics
Artificial Intelligence (AI) and automation are dramatically changing and influencing our
society. Applying principles of AI ethics to the design and implementation of algorithmic or
intelligent systems and AI projects in the public sector is paramount. AI ethics will guarantee
that the development and deployment of Artificial Intelligence are ethical, safe, and, above
all, responsible.
The new interconnected digital world powered by 5G technology is delivering great potential
and rapid gains in the power of Artificial Intelligence to better society. Innovation and
implementation of AI are already making an impact on improving services from healthcare,
education, and transportation to the food supply chain, energy, and environmental
management plans, to mention just a few.
With the rapid advancements in computing power and access to vast amounts of big data,
Artificial Intelligence and Machine Learning systems will continue to improve and evolve. Just
a few years into the future, AI systems will be able to process and use data not only with
even more speed but also with more accuracy.

Que 10
A) Explain how planning of uncertain movements in AI works.
Ans:- Markov Decision Process (MDP)
 An MDP is a general approach to planning under uncertainty.
 It requires a model of the environment.
 It uses a discretized state space.
 It requires explicitly defining transition probabilities between states.
 We can use dynamic programming to solve the MDP.
Stochastic Motion Roadmap (SMR)
 Combines a roadmap representation of configuration space with the theory of MDPs.
 Maximizes the probability of success.
 Uses sampling to learn the configuration space (represented as states).
 Learns the stochastic motion model (represented as state transition probabilities).
 Discretizes the state space.
 Discretizes actions.
Stochastic Motion Roadmap phases
 Learning Phase
 Select a random sample of discrete states.
 Sample the robot's motion model to build the Stochastic Motion Roadmap (SMR).
 Calculate transition probabilities for each action.
 Query Phase
 Specify the initial and goal states.
 The roadmap is used to find a feasible path.
 Possibly optimize some criterion such as minimum length.
Building the roadmap
 Maximizing the probability of success is an MDP, and the value function has the form of the Bellman equation: V(s) = max_a sum_s' P(s' | s, a) V(s'),
 where V(goal) = 1 and V(obstacle) = 0, and P(s' | s, a) are the learned transition probabilities.
 It can be solved optimally using infinite-horizon dynamic programming.
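The dynamic programming step above can be sketched with value iteration on a tiny example. The three-state roadmap, its actions, and its transition probabilities are invented for illustration; a real SMR would learn these by sampling the robot's motion model.

```python
# Value iteration for "maximize probability of success", as in the
# Stochastic Motion Roadmap. The toy roadmap below is invented:
# T[state][action] = list of (next_state, probability) pairs.
T = {
    "start": {"safe":  [("mid", 0.9), ("fail", 0.1)],
              "risky": [("goal", 0.5), ("fail", 0.5)]},
    "mid":   {"safe":  [("goal", 0.8), ("fail", 0.2)]},
}

def success_probability(T, goal="goal", fail="fail", iters=100):
    """Bellman backup V(s) = max_a sum_s' P(s'|s,a) V(s'), V(goal)=1."""
    V = {s: 0.0 for s in T}
    V[goal], V[fail] = 1.0, 0.0          # absorbing goal/obstacle states
    for _ in range(iters):               # repeated sweeps until convergence
        for s in T:
            V[s] = max(sum(p * V[s2] for s2, p in outs)
                       for outs in T[s].values())
    return V

V = success_probability(T)
# From "start", the safe route succeeds with 0.9 * 0.8 = 0.72,
# which beats the risky shortcut's 0.5, so the MDP prefers it.
```

This is the sense in which the roadmap query is an MDP solve: the value of each sampled state is its probability of eventually reaching the goal under the best action.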

B) State and explain risks of artificial intelligence in robotics.


Ans :-
1. Job Automation
Experts agree that job automation is the most immediate risk of AI applications. According to
a 2019 study by the Brookings Institution, automation threatens about 25 percent of American
jobs. The study found that automation would impact low-wage earners, especially those in
food service, office management, and administration. Jobs with repetitive tasks are the most
vulnerable, but as machine learning algorithms become more sophisticated, jobs requiring
degrees could be more at risk as well.
2. Fairness and Bias Concerns
One perceived advantage of AI is that algorithms can make fair decisions, unencumbered by
human bias. But an AI system's decisions are only as good as the data it is trained on. If a
particular population is underrepresented in the data used to train a machine learning model,
the model's output could be unfairly discriminatory towards that population. Facial
recognition technologies are the latest applications to come under scrutiny, but there have
already been historic cases of bias in the last few years.
3. Accidents and Physical Safety Considerations
If left unchecked, it is possible for AI's imperfections to cause physical harm. Consider
self-driving cars, an AI application that is beginning to take hold in today's automobile
market. If a self-driving car malfunctions and goes off course, it poses an immediate risk to
the passenger, other drivers on the road, and pedestrians.
4. Malicious Use of AI
AI researchers have managed to do a lot of good with the technology's applications. But in
the wrong hands, AI systems can be used for malicious or even dangerous purposes. In a
2018 report titled "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and
Mitigation," experts and researchers found that malicious use of AI technology could threaten
our digital, physical, and political security.

 Digital Security: Machine learning algorithms could conceivably be used to automate
vulnerability identification, which hackers could then exploit. And while autonomous
software has been used to hack vulnerabilities for quite some time, experts worry that
more sophisticated hacking algorithms will be able to exploit vulnerabilities faster
and do more damage.
 Physical Security: Autonomous weapons systems are another commonly cited AI
risk. Machines programmed to destroy or kill are a frightening prospect, and a
potential AI arms race between nations is even worse. In the United States, the
Defense Innovation Board has established guidelines around the ethical development of
autonomous weapons, but governments around the world are still deciding how to
regulate such machines.
 Political Security: Machine learning technology could be leveraged to automate
hyper-personalized disinformation campaigns in key districts during an election. In
another scenario, some researchers think that Natural Language Processing (NLP)
technology could be used to create a fraudulent recording of a politician making
inflammatory statements, tanking their campaign.
