Robo
Que 1
A) List the categories of sensory perception that can be incorporated in robots. Explain any four of them in brief.
Ans:- 1) Light Sensor
2) Proximity Sensor
3) Sound Sensor
4) Temperature Sensor
5) Acceleration Sensor
1) Light Sensor
A light sensor is a transducer used for detecting light; it creates a voltage difference equivalent to the light intensity falling on it. The two main light sensors used in robots are photovoltaic cells and photoresistors. Other kinds of light sensors, such as phototransistors and phototubes, are rarely used.
The types of light sensors used in robotics are:
Photoresistor - A type of resistor used for detecting light. In a photoresistor, resistance varies with the change in light intensity: the light falling on it is inversely proportional to its resistance. A photoresistor is also called a Light Dependent Resistor (LDR).
Photovoltaic Cells - Photovoltaic cells are energy-conversion devices used to convert solar radiation into electrical energy, and are used when building a solar robot. Individually, a photovoltaic cell is considered an energy source, but combined with capacitors and transistors it can be used as a sensor.
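As a sketch of how an LDR is typically read in practice, the following Python snippet converts a raw ADC reading into the LDR's resistance using the voltage-divider equation. The wiring, supply voltage, fixed resistor value, and 10-bit ADC are assumptions for illustration, not details from the text.

```python
# Sketch: estimating an LDR's resistance from a voltage-divider ADC reading.
# Assumed wiring: Vcc -- R_FIXED -- (ADC pin) -- LDR -- GND, with a 10-bit ADC.

VCC = 5.0         # supply voltage (V), assumed
R_FIXED = 10_000  # fixed divider resistor (ohms), assumed
ADC_MAX = 1023    # 10-bit ADC full scale

def ldr_resistance(adc_reading: int) -> float:
    """Return the LDR resistance in ohms for a raw ADC reading."""
    v_out = adc_reading / ADC_MAX * VCC     # voltage across the LDR
    return R_FIXED * v_out / (VCC - v_out)  # divider equation solved for R_ldr

# Brighter light -> lower LDR resistance -> lower voltage across the LDR.
print(ldr_resistance(512))  # near mid-scale the LDR is roughly equal to R_FIXED
```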
2) Proximity Sensor
A proximity sensor can detect the presence of a nearby object without any physical contact. The working of a proximity sensor is simple: a transmitter emits electromagnetic radiation, and a receiver analyzes the returned signal for interruptions. The amount of radiation the receiver picks up from the surroundings can therefore be used to detect the presence of a nearby object.
The types of proximity sensors used in robotics are:
Infrared (IR) Transceivers - In an IR sensor, an LED transmits a beam of IR light; if it meets an obstacle, the light is reflected back and captured by an IR receiver.
Ultrasonic Sensor - In ultrasonic sensors, high-frequency sound waves are generated by a transmitter, and the received echo pulse indicates an object interruption. In general, ultrasonic sensors are used for distance measurement in robotic systems.
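Since ultrasonic sensors measure distance from the echo's round-trip time, the conversion is a one-line formula: the pulse travels to the obstacle and back, so the one-way distance is half the round trip. A minimal sketch, assuming sound travels at about 343 m/s in air at room temperature:

```python
# Sketch: converting an ultrasonic echo time into distance.

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C (assumed)

def echo_to_distance(echo_seconds: float) -> float:
    """Distance to the obstacle in metres for a measured round-trip echo time."""
    return SPEED_OF_SOUND * echo_seconds / 2  # halve the round trip

# A 2 ms round-trip echo corresponds to about 0.343 m.
print(echo_to_distance(0.002))
```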
3) Sound Sensor
Sound sensors are generally microphones used to detect sound and return a voltage equivalent to the sound level. Using a sound sensor, a simple robot can be designed to navigate based on the sound it receives. Implementing sound sensors is not as easy as light sensors, because they generate a very small voltage difference, which must be amplified to produce a measurable voltage change.
4) Temperature Sensor
Temperature sensors are used for sensing changes in the temperature of the surroundings. They are based on the principle that the voltage difference across the device changes with temperature; this change in voltage provides the equivalent temperature value of the surroundings. A few commonly used temperature-sensor ICs are the TMP35, TMP37, LM34, LM35, etc.
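For instance, the LM35 listed above outputs 10 mV per degree Celsius (0 V at 0 °C), so converting its output voltage to a temperature is a single scaling step. A minimal sketch; the function name is illustrative:

```python
# Sketch: reading an LM35 temperature sensor.
# The LM35's scale factor is 10 mV/°C, so 1 V corresponds to 100 °C.

def lm35_celsius(voltage: float) -> float:
    """Temperature in °C for an LM35 output voltage given in volts."""
    return voltage * 100.0  # 10 mV/°C  ->  multiply volts by 100

print(lm35_celsius(0.25))  # 0.25 V -> 25.0 °C
```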
Que 2
A) State and explain the laws of robotics.
Ans :- The word robot was first introduced to the public by the Czech writer Karel Capek in his play Rossum's Universal Robots (R.U.R.), published in 1920. The play begins with a factory that makes artificial people known as robots.
The word "Robotics" was coined, accidentally, by the Russian-born American scientist Isaac Asimov in the 1940s.
The three laws of Robotics:
Isaac Asimov proposed his three "Laws of Robotics", and later added a "zeroth law":
● Zeroth Law - A robot may not injure humanity, or, through inaction, allow humanity to come to harm.
● First Law - A robot may not injure a human being, or, through inaction, allow a human being to come to harm, unless this would violate a higher-order law.
● Second Law - A robot must obey the orders given to it by human beings, except where such orders would conflict with a higher-order law.
● Third Law - A robot may protect its own existence as long as such protection does not conflict with a higher-order law.
Que 3
A) Describe the minimax algorithm with a suitable example.
Ans :- The mini-max algorithm is a recursive or backtracking algorithm used in decision-making and game theory. It provides an optimal move for the player, assuming that the opponent is also playing optimally.
● The mini-max algorithm uses recursion to search through the game tree.
● The min-max algorithm is mostly used for game playing in AI, such as chess, checkers, tic-tac-toe, Go, and various other two-player games. The algorithm computes the minimax decision for the current state.
● In this algorithm two players play the game; one is called MAX and the other is called MIN.
● Each player plays so that the opponent gets the minimum benefit while they themselves get the maximum benefit.
● The two players are opponents of each other: MAX will select the maximized value and MIN will select the minimized value.
● The minimax algorithm performs a depth-first search for the exploration of the complete game tree.
● The minimax algorithm proceeds all the way down to the terminal nodes of the tree, then backtracks up the tree as the recursion unwinds.
Pseudo-code for the minimax algorithm:

function minimax(node, depth, maximizingPlayer) is
    if depth == 0 or node is a terminal node then
        return static evaluation of node

    if maximizingPlayer then                // for the Maximizer player
        maxEva = -infinity
        for each child of node do
            eva = minimax(child, depth-1, false)
            maxEva = max(maxEva, eva)       // maximum of the values
        return maxEva

    else                                    // for the Minimizer player
        minEva = +infinity
        for each child of node do
            eva = minimax(child, depth-1, true)
            minEva = min(minEva, eva)       // minimum of the values
        return minEva

Initial call:
minimax(node, 3, true)
Working of the Min-Max Algorithm:
● The working of the minimax algorithm can be easily described using an example. Below we take an example of a game tree representing a two-player game.
● In this example there are two players, one called Maximizer and the other called Minimizer.
● Maximizer will try to get the maximum possible score, and Minimizer will try to get the minimum possible score.
● This algorithm applies DFS, so in this game tree we have to go all the way down to the leaves to reach the terminal nodes.
● At the terminal nodes the terminal values are given, so we compare those values and backtrack up the tree until the initial state is reached. The main steps involved in solving the two-player game tree are:
Step 1: In the first step, the algorithm generates the entire game tree and applies the utility function to get the utility values for the terminal states. In the tree diagram, let A be the initial state of the tree. Suppose the maximizer takes the first turn, with worst-case initial value -∞, and the minimizer takes the next turn, with worst-case initial value +∞.
Step 2: Now we first find the utility values for the Maximizer. Its initial value is -∞, so each terminal value is compared with the Maximizer's current value, and the higher node values are determined. It finds the maximum among them all:
● For node D: max(-1, -∞) = -1, then max(-1, 4) = 4
● For node E: max(2, -∞) = 2, then max(2, 6) = 6
● For node F: max(-3, -∞) = -3, then max(-3, -5) = -3
● For node G: max(0, -∞) = 0, then max(0, 7) = 7
Step 3: In the next step it is the minimizer's turn, so it will compare each node's value with +∞ and find the 3rd-layer node values:
● For node B: min(4, 6) = 4
● For node C: min(-3, 7) = -3
Step 4: Now it is the Maximizer's turn, and it will again choose the maximum of all node values and find the value for the root node. In this game tree there are only 4 layers, so we reach the root node immediately, but in real games there will be more than 4 layers.
● For node A: max(4, -3) = 4
That was the complete workflow of the minimax two-player game.
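The workflow above can be reproduced with a small runnable Python version of the pseudocode; the nested lists below encode the example tree's terminal values (-1, 4), (2, 6), (-3, -5) and (0, 7) under nodes D, E, F and G:

```python
# Runnable sketch of the minimax pseudocode, applied to the worked example.

def minimax(node, depth, maximizing_player):
    if depth == 0 or not isinstance(node, list):  # plain number = terminal node
        return node
    if maximizing_player:
        return max(minimax(child, depth - 1, False) for child in node)
    return min(minimax(child, depth - 1, True) for child in node)

# A -> (B, C) -> (D, E, F, G) -> terminal values
tree = [[[-1, 4], [2, 6]], [[-3, -5], [0, 7]]]
print(minimax(tree, 3, True))  # root value: 4
```

Running the initial call minimax(tree, 3, True) returns 4, matching the value found for node A in Step 4.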
Properties of the Mini-Max algorithm:
● Complete - The min-max algorithm is complete: it will definitely find a solution (if one exists) in a finite search tree.
● Optimal - The min-max algorithm is optimal if both opponents are playing optimally.
● Time complexity - As it performs DFS over the game tree, the time complexity of the min-max algorithm is O(b^m), where b is the branching factor of the tree and m is its maximum depth.
Ans :- First, we need to create a truth table with columns for A, B, A ∧ B, ¬A, ¬B, ¬A ∧ ¬B, and (A ∧ B) ∨ (¬A ∧ ¬B):

A   B   A ∧ B   ¬A   ¬B   ¬A ∧ ¬B   (A ∧ B) ∨ (¬A ∧ ¬B)
T   T     T      F    F      F               T
T   F     F      F    T      F               F
F   T     F      T    F      F               F
F   F     F      T    T      T               T

In the truth table, we first fill in the columns for A, B, A ∧ B, ¬A, and ¬B according to the truth values of A and B. Next, we fill in the column for ¬A ∧ ¬B by taking the logical conjunction of the ¬A and ¬B columns. Finally, the column for (A ∧ B) ∨ (¬A ∧ ¬B) is the disjunction of the A ∧ B and ¬A ∧ ¬B columns.
ii) A̅ ∨ B ≡ A → B
To show that A̅ ∨ B is logically equivalent to A → B, we can use a truth table:

A   B   A̅   A̅ ∨ B   A → B
T   T   F     T       T
T   F   F     F       F
F   T   T     T       T
F   F   T     T       T

In the truth table, we first fill in the columns for A and B according to their truth values. Next, we fill in the column for A̅ by taking the negation of A. The columns for A̅ ∨ B and A → B agree in every row, so the two formulas are logically equivalent.
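Both equivalences can also be checked mechanically by enumerating all four truth assignments, for example in Python:

```python
# Verify both truth-table results by brute force over all assignments.
from itertools import product

for a, b in product([True, False], repeat=2):
    # i)  (A ∧ B) ∨ (¬A ∧ ¬B) is true exactly when A and B agree
    assert ((a and b) or (not a and not b)) == (a == b)
    # ii) A̅ ∨ B  ≡  A → B, where A → B is false only when A is true and B is false
    a_implies_b = not (a and not b)
    assert ((not a) or b) == a_implies_b

print("both equivalences hold for all four assignments")
```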
Que 4
A) Describe α-β pruning with a suitable algorithm.
Ans :- ● Alpha-beta pruning is a modified version of the minimax algorithm. It is an optimization technique for the minimax algorithm.
● As we have seen in the minimax search algorithm, the number of game states it has to examine is exponential in the depth of the tree. We cannot eliminate the exponent, but we can roughly cut it in half. Hence there is a technique by which we can compute the correct minimax decision without checking each node of the game tree, and this technique is called pruning. It involves two threshold parameters, alpha and beta, for future expansion, so it is called alpha-beta pruning. It is also called the Alpha-Beta algorithm.
● Alpha-beta pruning can be applied at any depth of a tree, and sometimes it prunes not only the tree leaves but entire sub-trees.
● The two parameters can be defined as:
A. Alpha: the best (highest-value) choice we have found so far at any point along the path of the Maximizer. The initial value of alpha is -∞.
B. Beta: the best (lowest-value) choice we have found so far at any point along the path of the Minimizer. The initial value of beta is +∞.
● Alpha-beta pruning returns the same move as the standard minimax algorithm does, but it removes all the nodes that do not really affect the final decision yet make the algorithm slow. Hence, by pruning these nodes, it makes the algorithm fast.
Pseudo-code for alpha-beta pruning:

function minimax(node, depth, alpha, beta, maximizingPlayer) is
    if depth == 0 or node is a terminal node then
        return static evaluation of node

    if maximizingPlayer then
        maxEva = -infinity
        for each child of node do
            eva = minimax(child, depth-1, alpha, beta, false)
            maxEva = max(maxEva, eva)
            alpha = max(alpha, maxEva)
            if beta <= alpha then
                break                       // beta cut-off
        return maxEva

    else
        minEva = +infinity
        for each child of node do
            eva = minimax(child, depth-1, alpha, beta, true)
            minEva = min(minEva, eva)
            beta = min(beta, minEva)
            if beta <= alpha then
                break                       // alpha cut-off
        return minEva
Step 1: At the first step, the Max player starts traversing from node A, where α = -∞ and β = +∞; these values are passed down to the successor nodes.
Step 2: At node D, the value of α will be calculated, as it is Max's turn. The value of α is compared first with 2 and then with 3, and max(2, 3) = 3 will be the value of α at node D; the node value will also be 3.
Step 3: Now the algorithm backtracks to node B, where the value of β will change, as this is Min's turn. Now β = +∞ is compared with the available subsequent node value, i.e. min(∞, 3) = 3; hence at node B now α = -∞ and β = 3. In the next step, the algorithm traverses the next successor of node B, which is node E, and the values α = -∞ and β = 3 are passed along.
Step 4: At node E, Max takes its turn, and the value of alpha changes. The current value of alpha is compared with 5, so max(-∞, 5) = 5; hence at node E, α = 5 and β = 3, where α >= β, so the right successor of E is pruned and the algorithm will not traverse it. The value at node E will be 5.
Step 5: At the next step, the algorithm again backtracks the tree, from node B to node A. At node A, the value of alpha is changed to the maximum available value, 3, as max(-∞, 3) = 3, with β = +∞. These two values are now passed to the right successor of A, which is node C. At node C, α = 3 and β = +∞, and the same values are passed on to node F.
Step 6: At node F, the value of α is again compared with the left child, which is 0, and max(3, 0) = 3, and then with the right child, which is 1, and max(3, 1) = 3. α remains 3, but the node value of F becomes 1.
Step 7: Node F returns the node value 1 to node C. At C, α = 3 and β = +∞; here the value of beta will be changed: it is compared with 1, so min(∞, 1) = 1. Now at C, α = 3 and β = 1, and again the condition α >= β is satisfied, so the next child of C, which is G, is pruned, and the algorithm does not compute the entire sub-tree G.
Step 8: C now returns the value 1 to A. Here the best value for A is max(3, 1) = 3. The final game tree shows the nodes that were computed and the nodes that were never computed. Hence the optimal value for the Maximizer is 3 for this example.
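The pruning steps above can be reproduced with a small Python sketch. The terminal values known from the example are 2, 3, 5, 0 and 1; the values 9, 7 and 5 used below for the pruned branches are placeholders, since pruned values never affect the result:

```python
# Runnable alpha-beta sketch on the worked example's tree.

def alphabeta(node, depth, alpha, beta, maximizing_player):
    if depth == 0 or not isinstance(node, list):  # plain number = terminal node
        return node
    if maximizing_player:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:   # beta cut-off: Min will never allow this branch
                break
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:       # alpha cut-off
            break
    return value

# D = (2, 3), E = (5, <pruned>), F = (0, 1), G = (<pruned>, <pruned>)
tree = [[[2, 3], [5, 9]], [[0, 1], [7, 5]]]
print(alphabeta(tree, 3, float("-inf"), float("inf"), True))  # optimal value: 3
```

E's right child and the whole of sub-tree G are cut off exactly as described in Steps 4 and 7, and the root still gets the minimax value 3.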
Que 5
A) Explain Backward State Space Planning (BSSP) in detail.
Ans :- BSSP behaves similarly to backward state-space search. In this, we move from the target state g to a sub-goal g', tracing the previous action needed to achieve that goal. This process is called regression (going back to the previous goal or sub-goal). These sub-goals should also be checked for consistency, and the chosen action should be relevant.
● Disadvantage: not a sound algorithm (sometimes an inconsistency can be found)
● Advantage: small branching factor (much smaller than FSSP)
So, for an efficient planning system, we need to combine the features of FSSP and BSSP, which gives rise to goal stack planning, discussed later.
The goal itself must be associated with a particular time step. Since we do not know a priori how many steps it takes to achieve the goal, we can try asserting that the goal is true in the initial state, at time T = 0. That is, we assert At(P1, JFK)^0 ∧ At(P2, SFO)^0. If that fails, we try again with T = 1, and so on, until we reach the minimum feasible plan length. For each value of T, the knowledge base will include only sentences covering the time steps from 0 up to T. To ensure termination, an arbitrary upper limit, Tmax, is imposed.
The SATPLAN algorithm is as follows:

function SATPLAN(problem, Tmax) returns solution or failure
    inputs: problem, a planning problem
            Tmax, an upper limit for plan length
    for T = 0 to Tmax do
        cnf, mapping ← TRANSLATE-TO-SAT(problem, T)
        assignment ← SAT-SOLVER(cnf)
        if assignment is not null then
            return EXTRACT-SOLUTION(assignment, mapping)
    return failure
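The loop above can be sketched in Python. TRANSLATE-TO-SAT, SAT-SOLVER and EXTRACT-SOLUTION are stubbed with illustrative stand-ins here (a real system would encode the problem in CNF and call an actual SAT solver); the stubs simply pretend that the two-plane swap problem becomes satisfiable at T = 1:

```python
# Sketch of the SATPLAN control loop with stubbed subroutines.

def translate_to_sat(problem, t):
    """Stub: encode the planning problem up to time step t as CNF."""
    return ("cnf", t), {"horizon": t}

def sat_solver(cnf):
    """Stub: report a satisfying assignment once the horizon reaches 1."""
    _, t = cnf
    if t >= 1:
        return {"Fly(P1,SFO,JFK)0": True, "Fly(P2,JFK,SFO)0": True}
    return None  # unsatisfiable at this horizon

def extract_solution(assignment, mapping):
    """Stub: read the true action symbols out of the assignment."""
    return [action for action, value in assignment.items() if value]

def satplan(problem, t_max):
    for t in range(t_max + 1):
        cnf, mapping = translate_to_sat(problem, t)
        assignment = sat_solver(cnf)
        if assignment is not None:
            return extract_solution(assignment, mapping)
    return "failure"

print(satplan("swap-planes", t_max=5))
```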
The next issue is how to encode action descriptions in propositional logic. The most straightforward approach is to have one proposition symbol for each action occurrence; for example, Fly(P1, SFO, JFK)^0 is true if plane P1 flies from SFO to JFK at time 0. For example, we have the successor-state axiom

At(P1, JFK)^1 ⇔ (At(P1, JFK)^0 ∧ ¬(Fly(P1, JFK, SFO)^0 ∧ At(P1, JFK)^0)) ∨ (Fly(P1, SFO, JFK)^0 ∧ At(P1, SFO)^0)

and one such axiom is needed for each plane, airport, and time step. Moreover, each additional airport adds another way to travel to or from a given airport and hence adds more disjuncts to the right-hand side of each axiom.
With these axioms in place, we can run the satisfiability algorithm to find a plan. There ought to be a plan that achieves the goal at time T = 1, namely the plan in which the two planes swap places. Now suppose the KB is

initial state ∧ successor-state axioms ∧ goal^1,

which asserts that the goal is true at time T = 1. You can check that the assignment in which Fly(P1, SFO, JFK)^0 and Fly(P2, JFK, SFO)^0 are true, and all other action symbols are false, is a model of the KB.
Que 6
A) Describe the properties of the backward state-space search algorithm.
Ans :- BSSP behaves similarly to backward state-space search. In this, we move from the target state g to a sub-goal g', tracing the previous action needed to achieve that goal. This process is called regression (going back to the previous goal or sub-goal). These sub-goals should also be checked for consistency, and the chosen action should be relevant.
● Disadvantage: not a sound algorithm (sometimes an inconsistency can be found)
● Advantage: small branching factor (much smaller than FSSP)
So, for an efficient planning system, we need to combine the features of FSSP and BSSP, which gives rise to goal stack planning, discussed later.
Que 7
A) What do you mean by uncertainty? State various causes of uncertainty.
Ans:- Uncertainty:
Till now, we have learned knowledge representation using first-order logic and propositional logic with certainty, which means we were sure about the predicates. With this knowledge representation, we might write A → B, which means if A is true then B is true; but consider a situation where we are not sure whether A is true or not. Then we cannot express this statement, and this situation is called uncertainty.
So, to represent uncertain knowledge, where we are not sure about the predicates, we need uncertain reasoning or probabilistic reasoning.
Causes of uncertainty:
Following are some leading causes of uncertainty in the real world:
● Information from unreliable sources
● Experimental errors
● Equipment faults
● Temperature variation
● Climate change
Que 8
A) Explain Kalman Filters with suitable examples.
Ans :- A Kalman Filter is an algorithm that takes data inputs from multiple sources
and estimates unknown variables, despite a potentially high level of signal noise. Often used
in navigation and control technology, the Kalman Filter has the advantage of being able to
predict unknown values more accurately than if individual predictions are made using
singular methods of measurement.
WORKING:-
Kalman filters use a two-step process for estimating unknown variables. The algorithm first predicts the current state variables and estimates their uncertainties. Then it updates the estimates using a weighted average, in which more weight is given to estimates with lower uncertainty. Because the filter combines information from multiple sources, both the current measurement and the predicted state, it can provide a level of accuracy higher than if estimates were made from only one of those sources.
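A minimal one-dimensional sketch of this predict/update cycle, estimating a (nearly) constant value from noisy readings. All parameter values here are illustrative; the gain k weights the measurement more when the measurement noise r is small relative to the current estimate uncertainty p:

```python
# Minimal 1-D Kalman filter: predict, then update with a gain-weighted average.

def kalman_1d(measurements, r=4.0, q=0.01, x0=0.0, p0=100.0):
    """Estimate a constant value from noisy measurements.

    r: measurement noise variance, q: process noise variance,
    x0/p0: initial estimate and its uncertainty (all illustrative).
    """
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q               # predict: uncertainty grows by process noise
        k = p / (p + r)         # gain: balances estimate vs. measurement noise
        x = x + k * (z - x)     # update: move the estimate toward the reading
        p = (1 - k) * p         # update: uncertainty shrinks after measuring
        estimates.append(x)
    return estimates

# Noisy readings of a true value of 10:
est = kalman_1d([10.2, 9.7, 10.1, 9.9, 10.3, 9.8])
print(est[-1])  # converges toward 10
```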
Que 10
A) Explain how planning of uncertain movements works in AI.
Ans:- Markov Decision Process (MDP)
● An MDP is a general approach to handling uncertainty.
● It requires a model of the environment and a discretized state space.
● It requires explicitly defining transition probabilities between states.
● We can use dynamic programming to solve the MDP.
Stochastic Motion Roadmap (SMR)
● Combines a roadmap representation of configuration space with the theory of MDPs.
● Maximizes the probability of success.
● Uses sampling to learn the configuration space (represented as states) and the stochastic motion model (represented as state transition probabilities).
● Discretizes the state space and the actions.
Stochastic Motion Roadmap phases:
Learning Phase
● Select a random sample of discrete states.
● Sample the robot's motion model to build the Stochastic Motion Roadmap (SMR).
● Calculate transition probabilities for each action.
Query Phase
● Specify the initial and goal states.
● The roadmap is used to find a feasible path, possibly optimizing some criterion such as minimum length.
Building the roadmap: maximizing the probability of success
This is an MDP, and the objective has the form of the Bellman equation:

J(s) = max_a Σ_{s'} P(s' | s, a) · J(s')

where J(s) is the probability of reaching the goal from state s, and J(goal) = 1. It can be solved optimally using infinite-horizon dynamic programming.
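The Bellman recursion for the probability of success can be sketched with infinite-horizon value iteration. The tiny three-state model below (start, risky, goal) and its transition probabilities are illustrative only, not from the text:

```python
# Value iteration for success probability: J(s) = max_a sum P(s'|s,a) * J(s'),
# with J(goal) = 1 and J(fail) = 0 held fixed.

# transitions[state][action] = list of (next_state, probability)
transitions = {
    "start": {"via_risky": [("risky", 1.0)],
              "direct":    [("goal", 0.6), ("fail", 0.4)]},
    "risky": {"go":        [("goal", 0.9), ("fail", 0.1)]},
}

def value_iteration(transitions, goal="goal", iters=100):
    states = set(transitions) | {goal, "fail"}
    j = {s: 0.0 for s in states}
    j[goal] = 1.0  # success probability is 1 at the goal itself
    for _ in range(iters):
        for s, actions in transitions.items():
            j[s] = max(sum(p * j[s2] for s2, p in succ)
                       for succ in actions.values())
    return j

j = value_iteration(transitions)
print(j["start"])  # best policy routes via "risky": success probability 0.9
```

Here the "direct" action succeeds with probability 0.6, while going via the intermediate state succeeds with probability 0.9, so value iteration picks the latter.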