
Lab Manual

Submitted By:

Muhammad Waleed Sabir

Roll No:
586403

Class:
BS (Computer Science)

Semester:
6th

Session:
2020 – 2024

Course Title:
Artificial Intelligence

Punjab College Jaranwala


(Affiliated with)

Government College University Faisalabad

Contents
Artificial Neural Networks
Applications of Artificial Neural Networks
Difference between BFS and DFS
Heuristics
Types of heuristics
Limitation of heuristics
Hill Climbing Algorithm
A* Search Algorithm
Decision Tree
Tic-Tac-Toe
K-Nearest Neighbor (KNN) Algorithm
Applications of the KNN Algorithm
Advantages of the KNN Algorithm
Disadvantages of the KNN Algorithm

Artificial Neural Networks: -

Artificial Neural Networks contain artificial neurons, called units. These units are arranged in a series of layers that together constitute the whole Artificial Neural Network in a system. In the majority of neural networks, units in one layer are connected to units in the next, and each of these connections has a weight that determines the influence of one unit on another. As data passes from one unit to the next, the neural network learns more and more about the data, eventually producing a result at the output layer.
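The structure described above can be sketched in plain Python. This is a minimal, hypothetical forward pass of a two-layer network; the weights, biases, and inputs are made-up values chosen only to illustrate how weighted connections between units produce an output.

```python
import math

def sigmoid(x):
    """A common activation function squashing the weighted sum into (0, 1)."""
    return 1 / (1 + math.exp(-x))

def layer(inputs, weights, biases):
    """Each unit sums its weighted inputs, adds a bias, and applies the activation."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Hypothetical values: 2 inputs -> 2 hidden units -> 1 output unit
inputs = [0.5, -0.2]
hidden = layer(inputs, weights=[[0.1, 0.4], [-0.3, 0.8]], biases=[0.0, 0.1])
output = layer(hidden, weights=[[0.7, -0.5]], biases=[0.2])
print(output)
```

In training, the weights would be adjusted from data (e.g., by backpropagation); this sketch only shows how a fixed set of weights turns inputs into an output.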

Applications of Artificial Neural Networks: -

1. Social Media:
Artificial Neural Networks are used heavily in social media. For example, consider the 'People you may know' feature on Facebook, which suggests people you might know in real life so that you can send them friend requests. This effect is achieved by Artificial Neural Networks that analyze your profile, your interests, your current friends, their friends, and various other factors to work out the people you might potentially know. Another common application of machine learning in social media is facial recognition.
2. Marketing and Sales:
When you log on to e-commerce sites such as Amazon and Flipkart, they recommend products to buy based on your previous browsing history. Similarly, if you love pasta, food-delivery apps such as Zomato and Swiggy will tend to recommend pasta dishes to you.
3. Healthcare:
Artificial Neural Networks are used in oncology to train algorithms that can identify cancerous tissue at the microscopic level with the same accuracy as trained physicians. Various rare diseases that manifest in physical characteristics can also be identified in their early stages by applying facial analysis to patient photos.
4. Personal Assistants:
You have surely heard of Siri, Alexa, Cortana, etc., depending on the phone you have. These personal assistants are an example of speech recognition; they use Natural Language Processing to interact with users and formulate responses accordingly.

Difference between BFS and DFS: -

1. Breadth-First Search:
BFS, Breadth-First Search, is a vertex-based technique for finding the shortest path in a graph. It uses a queue data structure, which follows first-in, first-out order. In BFS, one vertex at a time is visited and marked, and then its adjacent vertices are visited and stored in the queue.

Example:
Input:
A
/\
B C
/ /\
D E F
Output:

A, B, C, D, E, F

2. Depth-First Search:

DFS, Depth-First Search, is an edge-based technique. It uses a stack data structure and works in two stages: first, visited vertices are pushed onto the stack; second, when there are no unvisited adjacent vertices left, visited vertices are popped off the stack.
Example:
Input:
A
/\
B D
/ /\
C E F

Output:
A, B, C, D, E, F
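The two traversals above can be sketched in Python. The adjacency lists below encode the two example trees (with hypothetical variable names); BFS uses a FIFO queue and DFS uses a LIFO stack, as described in the text.

```python
from collections import deque

# Adjacency lists for the two example trees shown above
bfs_tree = {'A': ['B', 'C'], 'B': ['D'], 'C': ['E', 'F'],
            'D': [], 'E': [], 'F': []}
dfs_tree = {'A': ['B', 'D'], 'B': ['C'], 'D': ['E', 'F'],
            'C': [], 'E': [], 'F': []}

def bfs(graph, start):
    """Visit vertices level by level using a first-in, first-out queue."""
    visited, queue, order = {start}, deque([start]), []
    while queue:
        node = queue.popleft()
        order.append(node)
        for nbr in graph[node]:
            if nbr not in visited:
                visited.add(nbr)
                queue.append(nbr)
    return order

def dfs(graph, start):
    """Visit vertices as deep as possible first using a last-in, first-out stack."""
    visited, stack, order = set(), [start], []
    while stack:
        node = stack.pop()
        if node not in visited:
            visited.add(node)
            order.append(node)
            # Push in reverse so the leftmost child is visited first
            for nbr in reversed(graph[node]):
                if nbr not in visited:
                    stack.append(nbr)
    return order

print(bfs(bfs_tree, 'A'))  # ['A', 'B', 'C', 'D', 'E', 'F']
print(dfs(dfs_tree, 'A'))  # ['A', 'B', 'C', 'D', 'E', 'F']
```

Both traversals happen to produce the same order here only because each example tree was chosen to make its own traversal read alphabetically.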

Heuristics: -
A heuristic is a technique used to solve a problem faster than classic methods. Heuristics are used to find an approximate solution to a problem when classical methods fail to find an exact one in reasonable time. They are problem-solving techniques that result in practical and quick solutions.

Types of heuristics:
There are various types of heuristics, including the availability heuristic, the affect heuristic, and the representative heuristic. Each type plays a role in decision-making. Two of them, the availability heuristic and the representative heuristic, are discussed below.

1. Availability heuristic:
The availability heuristic is the judgment people make about the likelihood of an event based on information that comes quickly to mind. When making decisions, people typically rely on past knowledge or experience of an event. It allows a person to judge a situation based on examples of similar situations that come to mind.

2. Representative heuristic:
It occurs when we evaluate an event's probability on the basis of its similarity to another event.
Example:
We can understand the representative heuristic through product packaging, as consumers tend to associate a product's quality with its external packaging. If a company packages its products in a way that reminds you of a high-quality, well-known product, consumers will assume that product has the same quality as the branded one.

Limitation of heuristics:
Along with their benefits, heuristics also have some limitations.
Although heuristics speed up our decision-making and help us solve problems, they can also introduce errors: just because something has worked accurately in the past does not mean it will work again.
It is also hard to find alternative solutions or ideas if we always rely on existing solutions or heuristics.

Hill Climbing Algorithm: -

It is a technique for optimizing mathematical problems. Hill climbing is widely used when a good heuristic is available.

It is a local search algorithm that continuously moves in the direction of increasing elevation/value to find the mountain's peak, i.e., the best solution to the problem. It terminates when it reaches a peak where no neighbor has a higher value. The Traveling Salesman Problem is one of the most widely discussed examples of the hill climbing algorithm, in which we need to minimize the distance traveled by the salesman.

Step 1: Evaluate the initial state. If it is the goal state, then return success and stop.

Step 2: Loop until a solution is found or there is no new operator left to apply.

Step 3: Select and apply an operator to the current state.

Step 4: Check the new state: if it is the goal state, return success; if it is better than the current state, make it the current state and continue the loop.

Step 5: Exit.
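The loop above can be sketched as a simple hill climber. The objective function, starting point, and neighbor function below are hypothetical, chosen only to show the "move to a better neighbor until none exists" behavior.

```python
def hill_climb(f, start, neighbors):
    """Simple hill climbing: repeatedly move to the best neighbor
    until no neighbor improves on the current state (a peak)."""
    current = start
    while True:
        best = max(neighbors(current), key=f)
        if f(best) <= f(current):   # no neighbor is higher: peak reached
            return current
        current = best

# Hypothetical objective: maximize f(x) = -(x - 3)^2 over the integers.
# The single peak is at x = 3.
f = lambda x: -(x - 3) ** 2
result = hill_climb(f, start=0, neighbors=lambda x: [x - 1, x + 1])
print(result)  # 3
```

Note that on a function with several peaks this sketch can stop at a local maximum, which is the classic weakness of plain hill climbing.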

A* Search Algorithm:
A* search is the most commonly known form of best-first search. It uses the heuristic function h(n) and the cost to reach node n from the start state, g(n). It combines features of UCS and greedy best-first search, which lets it solve the problem efficiently.

Algorithm of A* search:
Step 1: Place the starting node in the OPEN list.

Step 2: Check whether the OPEN list is empty. If it is, then return failure and stop.

Step 3: Select the node n from the OPEN list which has the smallest value of the evaluation function (g + h). If node n is the goal node, then return success and stop; otherwise, go to Step 4.

Step 4: Expand node n, generate all of its successors, and put n into the CLOSED list. For each successor n', check whether n' is already in the OPEN or CLOSED list. If not, then compute the evaluation function for n' and place it into the OPEN list.
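The steps above can be sketched with a priority queue as the OPEN list. The weighted graph and heuristic values below are hypothetical, made up only to demonstrate the f = g + h ordering; h is chosen to be admissible for this graph.

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search: always expand the OPEN-list node with the smallest f = g + h."""
    open_list = [(h[start], 0, start, [start])]   # entries: (f, g, node, path)
    closed = set()
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return path, g
        if node in closed:
            continue
        closed.add(node)                           # put n into the CLOSED list
        for nbr, cost in graph[node]:
            if nbr not in closed:
                heapq.heappush(open_list,
                               (g + cost + h[nbr], g + cost, nbr, path + [nbr]))
    return None, float('inf')

# Hypothetical weighted graph: edges as (neighbor, cost) pairs
graph = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 5)],
         'B': [('G', 1)], 'G': []}
# Hypothetical admissible heuristic estimates to the goal G
h = {'S': 4, 'A': 3, 'B': 1, 'G': 0}

path, cost = a_star(graph, h, 'S', 'G')
print(path, cost)  # ['S', 'A', 'B', 'G'] 4
```

Here S -> A -> B -> G (cost 1 + 2 + 1 = 4) beats the direct S -> B -> G route (cost 4 + 1 = 5), which the f = g + h ordering discovers without expanding every node.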

Decision Tree: -
A decision tree is one of the most powerful tools of supervised learning algorithms used
for both classification and regression tasks. It builds a flowchart-like tree structure where each
internal node denotes a test on an attribute, each branch represents an outcome of the test,
and each leaf node (terminal node) holds a class label.

Decision Tree Terminologies:


Root Node: It is the topmost node in the tree, which represents the complete dataset. It is
the starting point of the decision-making process.
Decision/Internal Node: A node that represents a test on an input feature. Branches from internal nodes connect them to leaf nodes or other internal nodes.
Leaf/Terminal Node: A node without any child nodes that indicates a class label or a
numerical value.
Splitting: The process of splitting a node into two or more sub-nodes using a split criterion
and a selected feature.

Branch/Sub-Tree: A subsection of the decision tree that starts at an internal node and ends at leaf nodes.
Parent Node: The node that divides into one or more child nodes.
Child Node: The nodes that emerge when a parent node is split.
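The terminology above can be made concrete with a toy tree represented as nested dicts. The feature names, branch values, and labels are hypothetical (the classic "play outside?" style example), chosen only to show how classification walks from the root node to a leaf.

```python
# A toy decision tree: dicts are internal nodes, strings are leaf labels.
tree = {
    'feature': 'outlook',            # root node: test on an attribute
    'branches': {
        'sunny': {'feature': 'humidity',             # internal node
                  'branches': {'high': 'no', 'normal': 'yes'}},
        'overcast': 'yes',           # leaf node: class label
        'rain': {'feature': 'windy',                 # internal node
                 'branches': {True: 'no', False: 'yes'}},
    },
}

def classify(node, sample):
    """Walk from the root to a leaf, following the branch that matches
    the sample's value for each internal node's tested feature."""
    while isinstance(node, dict):
        node = node['branches'][sample[node['feature']]]
    return node   # the leaf holds the class label

print(classify(tree, {'outlook': 'sunny', 'humidity': 'normal'}))  # yes
print(classify(tree, {'outlook': 'rain', 'windy': True}))          # no
```

In practice the tree structure itself is learned from data by choosing splits with a criterion such as information gain or Gini impurity; this sketch only shows the prediction step.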

Tic-Tac-Toe: -

A Tic-Tac-Toe board is given after some moves are played. Find out if the given board is valid,
i.e., is it possible to reach this board position after some moves or not.

Example 1:

Input:

board[] = {'X', 'X', 'O',
           'O', 'O', 'X',
           'X', 'O', 'X'};

Output: Valid

Explanation: This is a valid board.

Example 2:

Input:

board[] = {'O', 'X', 'X',
           'O', 'X', 'X',
           'O', 'O', 'X'};

Output: Invalid

Explanation: Both X and O cannot win.
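A validity check along these lines can be sketched in Python. It assumes the usual convention that X always moves first, so the counts must satisfy x == o or x == o + 1, at most one player can have a winning line, and the winner's move must have been the last one played.

```python
def wins(b, p):
    """True if player p has completed a row, column, or diagonal on board b."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals
    return any(b[i] == b[j] == b[k] == p for i, j, k in lines)

def is_valid(board):
    """Check whether a 3x3 board position is reachable (X moves first)."""
    x, o = board.count('X'), board.count('O')
    if not (x == o or x == o + 1):        # X moves first, so x - o is 0 or 1
        return False
    if wins(board, 'X') and wins(board, 'O'):
        return False                       # both players cannot have won
    if wins(board, 'X') and x != o + 1:
        return False                       # X's win must end on X's move
    if wins(board, 'O') and x != o:
        return False                       # O's win must end on O's move
    return True

print(is_valid(['X', 'X', 'O', 'O', 'O', 'X', 'X', 'O', 'X']))  # True  (Example 1)
print(is_valid(['O', 'X', 'X', 'O', 'X', 'X', 'O', 'O', 'X']))  # False (Example 2)
```

On Example 2 both the third column (X) and the first column (O) are complete lines, which is impossible in a real game, so the board is rejected.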

K-Nearest Neighbor (KNN) Algorithm: -


The K-Nearest Neighbors (KNN) algorithm is a robust and intuitive machine
learning method employed to tackle classification and regression problems. By capitalizing on
the concept of similarity, KNN predicts the label or value of a new data point by considering its
K closest neighbors in the training dataset.
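The idea can be sketched from scratch in a few lines. The 2-D points and labels below are hypothetical toy data; the prediction is simply the majority vote among the k training points closest to the query by Euclidean distance.

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """Predict the label of `query` by majority vote among its k nearest
    training points, measured by Euclidean distance."""
    nearest = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    labels = [label for _, label in nearest]
    return Counter(labels).most_common(1)[0][0]

# Hypothetical training set: (point, label) pairs forming two clusters
train = [((1, 1), 'red'), ((1, 2), 'red'), ((2, 1), 'red'),
         ((6, 6), 'blue'), ((7, 6), 'blue'), ((6, 7), 'blue')]

print(knn_classify(train, (1.5, 1.5)))  # red
print(knn_classify(train, (6.5, 6.5)))  # blue
```

Note that there is no training step at all: all the work happens at query time, which is why KNN is called a lazy algorithm (a point the disadvantages below return to).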

Applications of the KNN Algorithm:

Data Preprocessing – When dealing with a machine learning problem, we first perform the EDA step; if we find that the data contains missing values, multiple imputation methods are available. One such method is the KNN Imputer, which is quite effective and generally used for sophisticated imputation.
Pattern Recognition – KNN algorithms work very well for pattern recognition: if you train a KNN classifier on the MNIST dataset and then evaluate it, you will find that the accuracy is quite high.
Recommendation Engines – The main task performed by a KNN algorithm is to assign a new query point to a pre-existing group created from a huge corpus of data. This is exactly what recommender systems need: assign each user to a particular group and then provide recommendations based on that group's preferences.
Advantages of the KNN Algorithm
Easy to Implement – the complexity of the algorithm is not that high.
Adapts Easily – the KNN algorithm stores all the training data in memory, so whenever a new example or data point is added, the algorithm adjusts itself accordingly and the new example contributes to future predictions as well.
Few Hyperparameters – the only parameters required to train a KNN algorithm are the value of k and the choice of distance metric.

Disadvantages of the KNN Algorithm

Does Not Scale – the KNN algorithm is also considered a lazy algorithm, meaning it requires a lot of computing power and data storage at prediction time. This makes it both time-consuming and resource-intensive.
Curse of Dimensionality – owing to what is known as the peaking phenomenon, the KNN algorithm is affected by the curse of dimensionality: it has a hard time classifying data points properly when the dimensionality is too high.
