AI pastpaper solve by M.Noman Tariq (2)
Machine Learning is a subfield of artificial intelligence (AI) that focuses on the development of
algorithms and models that enable computers to learn and make predictions or decisions without being
explicitly programmed. There are three main types of machine learning:
Supervised Learning:
In supervised learning, the algorithm is trained on a labeled dataset, where each data point is paired
with the correct output or target. The algorithm learns to map input data to the correct output by
finding patterns and relationships in the labeled examples. Common applications include image
classification, spam email detection, and regression problems.
Unsupervised Learning:
Unsupervised learning involves training the algorithm on an unlabeled dataset, where there are no
predefined output labels. The goal is to discover hidden patterns, structure, or clusters in the data.
Common techniques include clustering, dimensionality reduction, and density estimation. Examples
include customer segmentation and anomaly detection.
Reinforcement Learning:
Reinforcement learning is a type of machine learning where an agent learns to make decisions by
interacting with an environment. The agent receives feedback in the form of rewards or penalties based
on its actions, and it aims to maximize its cumulative reward over time. This type of learning is often
used in robotics, game playing, and autonomous systems.
Semi-Supervised Learning:
Semi-supervised learning is a type of machine learning that combines elements of both supervised and
unsupervised learning. In semi-supervised learning, the training dataset contains a combination of
labeled and unlabeled data. The algorithm uses the small amount of labeled data to learn patterns and
then generalizes that knowledge to make predictions on the larger set of unlabeled data.
Deterministic Algorithm:
Can solve the problem in polynomial time, like linear search and binary search. Examples of deterministic algorithms include sorting algorithms like bubble sort, insertion sort, and selection sort, as well as many numerical algorithms.
Non-deterministic Algorithm:
Can't solve the problem in polynomial time, like the 0/1 knapsack problem. Examples of non-deterministic algorithms include probabilistic algorithms like Monte Carlo methods, genetic algorithms, and simulated annealing.
Types of Agents:
Simple Reflex Agents:
These agents make decisions based solely on the current percept or input from their environment. They
follow predefined rules or condition-action pairs and do not consider past actions or future
consequences. Simple reflex agents are limited in their ability to handle dynamic or complex
environments.
Goal-Based Agents:
Goal-based agents have explicit goals or objectives they aim to achieve. They use their internal models
and reasoning capabilities to plan a sequence of actions that will lead to the desired outcome. These
agents can adapt to changing conditions and are more flexible than reflex agents.
Utility-Based Agents:
Utility-based agents make decisions by considering not only their goals but also the expected utility or
value associated with different actions and outcomes. They prioritize actions that maximize their
expected utility. This type of agent is commonly used in economics and decision theory.
1. Supervised Learning:
Supervised learning is a machine learning paradigm in which algorithms learn patterns and relationships
from a labeled dataset. In supervised learning, the dataset used for training consists of input data paired
with corresponding output labels. The primary goal is for the algorithm to learn a mapping function that
can accurately predict or classify new, unseen data based on its input.
For example, in image classification, the algorithm is trained on a dataset of images where each image is
associated with a label (e.g., cat or dog). The algorithm learns to recognize features and patterns in the
images that are indicative of the labels. Once trained, it can classify new, unlabeled images into the
appropriate categories. Supervised learning is widely used in various applications, including natural
language processing for sentiment analysis, speech recognition, and recommendation systems.
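As a concrete sketch of this idea, a classifier can be fit on a tiny labeled dataset and then asked to label unseen inputs. The animal features, the numeric values, and the choice of a nearest-neighbour classifier below are illustrative assumptions, not part of the original question:

```python
# Minimal supervised-learning sketch: fit a classifier on labeled points
# and predict labels for new, unseen inputs. The dataset is hypothetical.
from sklearn.neighbors import KNeighborsClassifier

# Labeled training data: feature = [weight_kg, ear_length_cm], label = "cat"/"dog"
X_train = [[4.0, 7.0], [5.0, 6.5], [25.0, 12.0], [30.0, 13.0]]
y_train = ["cat", "cat", "dog", "dog"]

model = KNeighborsClassifier(n_neighbors=1)
model.fit(X_train, y_train)

# Predict labels for unseen examples
print(model.predict([[4.5, 6.8], [28.0, 12.5]]))
```

The learned mapping from inputs to labels is exactly the "mapping function" described above.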
2. Unsupervised Learning:
Unsupervised learning is a machine learning approach where algorithms analyze unlabeled data to
discover inherent structures, patterns, or relationships within the data. Unlike supervised learning, there
are no predefined output labels in unsupervised learning. Instead, the algorithm's objective is to find
meaningful groupings or representations within the data.
Clustering:
Clustering algorithms group similar data points together based on their features.
Dimensionality Reduction:
Dimensionality reduction techniques aim to reduce the complexity of data by transforming it into a
lower-dimensional representation while preserving essential information.
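Clustering can be sketched with k-means on a handful of made-up 2-D points; the data and the choice of k=2 are assumptions for illustration:

```python
# Minimal unsupervised-learning sketch: k-means groups unlabeled points
# into clusters based only on their features. The points are hypothetical.
import numpy as np
from sklearn.cluster import KMeans

data = np.array([[1.0, 1.0], [1.2, 0.8], [0.9, 1.1],   # one tight group
                 [8.0, 8.0], [8.3, 7.9], [7.8, 8.2]])  # another tight group

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print(kmeans.labels_)  # points in the same group share a cluster label
```

No labels were supplied; the algorithm discovered the two groupings from the data alone.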
3. Reinforcement Learning:
Reinforcement learning is a machine learning paradigm where an agent interacts with an environment
and learns to make a sequence of decisions to maximize a cumulative reward. In this learning process,
the agent receives feedback from the environment in the form of rewards or penalties based on its
actions. The primary goal is for the agent to discover an optimal policy (a strategy) that leads to the
highest long-term reward.
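The reward-driven loop can be sketched with tabular Q-learning on a tiny made-up environment; the corridor layout, reward, and hyperparameters below are all illustrative assumptions:

```python
# Minimal reinforcement-learning sketch: tabular Q-learning on a hypothetical
# corridor of 4 states. Stepping right from state 2 into goal state 3 yields
# reward 1; every other step yields 0.
import random

n_states, actions = 4, [-1, +1]        # actions: step left, step right
Q = [[0.0, 0.0] for _ in range(n_states)]
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

random.seed(0)
for _ in range(200):                   # training episodes
    s = 0
    while s != 3:                      # act until the goal is reached
        if random.random() < epsilon:
            a = random.randrange(2)    # explore
        else:
            a = max((0, 1), key=lambda i: Q[s][i])  # exploit current policy
        s2 = min(max(s + actions[a], 0), n_states - 1)
        r = 1.0 if s2 == 3 else 0.0
        # Q-learning update toward reward plus discounted future value
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# The learned greedy policy should prefer "right" (action index 1) everywhere.
print([max((0, 1), key=lambda i: Q[s][i]) for s in range(3)])
```

The agent starts knowing nothing and, purely from reward feedback, converges on the policy that maximizes its cumulative reward.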
Frames:
Frames are structures used to organize information about objects. Think of frames as templates that help AI systems make sense of the world. Each frame has slots, which are like labeled boxes where you store the values of an object's attributes.
Step 2: Else, select the most significant difference and reduce it by doing the following steps until success or failure occurs:
i) Select a new operator O which is applicable to the current difference, and if there is no such operator, then signal failure.
ii) Attempt to apply O by generating descriptions of two states: O-Start, a state in which O's preconditions are satisfied, and O-Result, the state that would result if O were applied in O-Start.
iii) If
(FIRST-PART <----- MEA (CURRENT, O-Start))
and
(LAST-PART <----- MEA (O-Result, GOAL))
are successful, then signal success and return the result of combining FIRST-PART, O, and LAST-PART.
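The recursive structure above can be sketched in code. The facts, the two operators, and the depth bound below are hypothetical simplifications of full Means-Ends Analysis:

```python
# Simplified Means-Ends Analysis sketch. States are sets of facts; each
# operator has preconditions, an add-list, and a delete-list (all hypothetical).
OPERATORS = {
    "pick-up": {"pre": {"at-object"}, "add": {"holding"}, "del": set()},
    "walk":    {"pre": set(), "add": {"at-object"}, "del": set()},
}

def apply_op(state, op):
    return (state - op["del"]) | op["add"]

def mea(current, goal, depth=8):
    """Return a list of operator names turning current into goal, or None."""
    if goal <= current:                       # no differences: success
        return []
    if depth == 0:
        return None
    for name, op in OPERATORS.items():
        if not op["add"] & (goal - current):  # O must reduce some difference
            continue
        first = mea(current, op["pre"], depth - 1)   # FIRST-PART: reach O's preconditions
        if first is None:
            continue
        o_start = current
        for n in first:
            o_start = apply_op(o_start, OPERATORS[n])
        o_result = apply_op(o_start, op)      # O-Result: state after applying O
        last = mea(o_result, goal, depth - 1) # LAST-PART: finish from O-Result
        if last is not None:
            return first + [name] + last      # combine FIRST-PART, O, LAST-PART
    return None

print(mea({"at-home"}, {"holding"}))
```

Here the goal "holding" forces "pick-up", whose precondition "at-object" is in turn satisfied by "walk", giving the plan ['walk', 'pick-up'].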
Find the values of h1, h2, o1, o2, E1, E2, and the total error of forward propagation in neural networks.
(The worked figures computing h1 and h2, o1 and o2, and E1 and E2 are missing here.)
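Since the original network diagram and weight values are missing, the sketch below uses hypothetical weights, biases, and targets for a 2-2-2 network; it shows the mechanics the question asks for: weighted sums, sigmoid activations, per-output squared errors, and the total error.

```python
# Forward propagation through a 2-2-2 network. All numeric values
# (inputs, weights, biases, targets) are hypothetical stand-ins.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

i1, i2 = 0.05, 0.10                       # inputs
w1, w2, w3, w4 = 0.15, 0.20, 0.25, 0.30  # input -> hidden weights
w5, w6, w7, w8 = 0.40, 0.45, 0.50, 0.55  # hidden -> output weights
b1, b2 = 0.35, 0.60                      # hidden and output biases
t1, t2 = 0.01, 0.99                      # target outputs

h1 = sigmoid(w1 * i1 + w2 * i2 + b1)     # hidden activations
h2 = sigmoid(w3 * i1 + w4 * i2 + b1)
o1 = sigmoid(w5 * h1 + w6 * h2 + b2)     # output activations
o2 = sigmoid(w7 * h1 + w8 * h2 + b2)

E1 = 0.5 * (t1 - o1) ** 2                # squared error per output
E2 = 0.5 * (t2 - o2) ** 2
print(h1, h2, o1, o2, E1, E2, E1 + E2)   # total error = E1 + E2
```

With these particular values, h1 ≈ 0.5933, o1 ≈ 0.7514, and the total error E1 + E2 ≈ 0.2984; with the paper's actual weights the same steps apply.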
Best First Search uses a priority queue (min-heap) to keep track of the nodes to be explored. Nodes are
dequeued based on their heuristic or evaluation function values. The node with the lowest heuristic
value is explored first.
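A minimal sketch of this priority-queue behaviour, using a hypothetical graph and heuristic values:

```python
# Greedy best-first search with a min-heap frontier. The graph and the
# heuristic table h are hypothetical.
import heapq

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
h = {"A": 3, "B": 2, "C": 1, "D": 0}      # estimated cost to goal D

def best_first_search(start, goal):
    frontier = [(h[start], start)]        # priority queue keyed on h(n)
    visited, order = set(), []
    while frontier:
        _, node = heapq.heappop(frontier) # lowest heuristic value first
        if node in visited:
            continue
        visited.add(node)
        order.append(node)
        if node == goal:
            return order
        for nbr in graph[node]:
            if nbr not in visited:
                heapq.heappush(frontier, (h[nbr], nbr))
    return order

print(best_first_search("A", "D"))
```

Note that C (h=1) is dequeued before B (h=2), exactly the behaviour described above.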
Depth First Search uses a stack (either explicitly implemented or the call stack in a recursive
implementation) to keep track of nodes. It explores a branch of the graph as deeply as possible before
backtracking.
Breadth First Search uses a queue to keep track of nodes. It explores all the nodes at the current level
before moving on to the next level. This ensures that nodes at a shallower depth are explored before
deeper ones, leading to a breadth-first traversal.
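The queue-based traversal can be sketched as follows, on a hypothetical graph:

```python
# Breadth-first traversal using a FIFO queue; the graph is hypothetical.
from collections import deque

graph = {1: [2, 3], 2: [4, 5], 3: [6], 4: [], 5: [], 6: []}

def bfs(start):
    order, visited = [], {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()        # FIFO: shallower nodes come out first
        order.append(node)
        for nbr in graph[node]:
            if nbr not in visited:
                visited.add(nbr)
                queue.append(nbr)
    return order

print(bfs(1))  # level by level: [1, 2, 3, 4, 5, 6]
```

The whole of level 1 (nodes 2 and 3) is visited before any node at level 2, as described above.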
Step 3: Stemming
Step 4: Lemmatization
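The difference between the two steps can be illustrated with toy rules; both the suffix list and the lemma dictionary below are hypothetical simplifications of real stemmers and lemmatizers:

```python
# Stemming: crude suffix stripping that may yield non-words.
def stem(word):
    for suffix in ("ing", "ed", "ies", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[:-len(suffix)]
    return word

# Lemmatization: dictionary lookup that returns a valid base form.
LEMMAS = {"studies": "study", "ran": "run", "better": "good"}

def lemmatize(word):
    return LEMMAS.get(word, word)       # fall back to the word itself

print(stem("studies"), lemmatize("studies"))   # "stud" vs "study"
print(stem("running"), lemmatize("ran"))       # "runn" vs "run"
```

The sketch shows why lemmatization is preferred when valid dictionary words are required, while stemming is cheaper but cruder.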
Basis of Comparison: Graph vs Tree
Loop formation: In a graph, a cycle can be formed. In a tree, there will not be any cycle.
Run DFS on the tree make it yourself and update the data structure
accordingly. Also show the final output after traversal.
4. While the stack is not empty:
   a. Pop a node from the stack and process it (e.g., print its value or update it as needed).
   b. Push any unvisited child nodes onto the stack, starting with the right child (if present) and then the left child (if present). This ensures that the left child is processed before the right child, as DFS follows the LIFO (Last-In-First-Out) order.
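The steps above can be sketched as follows; the tree itself is hypothetical, stored here as (value, left, right) tuples:

```python
# Iterative DFS (preorder) on a small binary tree of (value, left, right) tuples.
def dfs_preorder(root):
    order, stack = [], [root]
    while stack:                           # step 4: loop until the stack is empty
        value, left, right = stack.pop()   # 4a: pop and process a node
        order.append(value)
        if right is not None:              # 4b: push the right child first...
            stack.append(right)
        if left is not None:               # ...then the left, so left is processed first
            stack.append(left)
    return order

#        1
#       / \
#      2   3
#     / \
#    4   5
tree = (1, (2, (4, None, None), (5, None, None)), (3, None, None))
print(dfs_preorder(tree))  # preorder: [1, 2, 4, 5, 3]
```

The left subtree (2, 4, 5) is fully explored before node 3, matching the LIFO behaviour described above.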
Object Detection:
Computer vision can locate and identify specific objects within an image or video stream. This is crucial
in various scenarios, including surveillance, autonomous vehicles, and robotics.
Facial Recognition:
Facial recognition technology uses computer vision to identify and verify individuals based on their facial
features. It has applications in security, access control, and user authentication.
Autonomous Vehicles:
Self-driving cars rely heavily on computer vision to perceive their surroundings. Cameras and sensors
capture and process visual data to make decisions about navigation and avoiding obstacles.
What are intelligent agents in AI, and where are they used?
An intelligent agent in AI is a software or hardware system that perceives its environment, processes
information, and takes actions to achieve specific goals or objectives. These agents are designed to
mimic certain aspects of human intelligence, such as problem-solving, learning, decision-making, and
adapting to changing circumstances. Intelligent agents can operate autonomously or semi-autonomously
and are a fundamental concept in artificial intelligence and robotics.
Use of AI Agent
Reinforcement Learning:
In reinforcement learning, intelligent agents learn to make decisions by interacting with an environment
and receiving feedback in the form of rewards or punishments. These agents are used in applications like
game playing (e.g., AlphaGo), robotics, and autonomous vehicles.
Expert Systems:
Expert systems are AI applications that use knowledge-based reasoning to solve complex problems in
specific domains. They are used in fields such as healthcare for medical diagnosis, in finance for
investment advice, and in engineering for troubleshooting.
Autonomous Robots:
Autonomous robots, such as self-driving cars and drones, use intelligent agents to navigate their
environments, make decisions, and respond to changing conditions without human intervention.
Recommendation Systems:
Online platforms like Netflix, Amazon, and Spotify use intelligent agents to analyze user data and
recommend products, movies, music, or content that aligns with individual preferences.
Game Playing:
AI agents are employed in playing complex strategy games like chess, Go, and video games, often
competing at or above human skill levels.
Cybersecurity:
Intelligent agents are used to detect and respond to cybersecurity threats in real-time, helping
organizations protect their systems and data.
Industrial Automation:
In manufacturing and industrial settings, intelligent agents control and optimize processes, monitor
equipment health, and improve efficiency.
Financial Trading:
In the financial industry, intelligent agents execute high-frequency trading strategies and analyze market
data to make investment decisions.
Text Tokenization:
Text tokenization is the process of splitting a text into smaller units, typically words or phrases, called
tokens. This step is essential for breaking down text data into manageable pieces for further analysis.
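A minimal tokenization sketch using a regular expression; the sample sentence and the alphabetic-only pattern are illustrative choices:

```python
# Split a sentence into word tokens, dropping punctuation.
import re

text = "Tokenization splits text into tokens, e.g. words."
tokens = re.findall(r"[A-Za-z]+", text)   # keep alphabetic runs only
print(tokens)
```

Real tokenizers handle abbreviations like "e.g." more carefully; here it is split into separate tokens, which illustrates why tokenization rules matter.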
Syntactic Parsing:
Syntactic parsing focuses on analyzing the grammatical structure and relationships between words in a
sentence. It typically produces a parse tree or a syntactic structure that illustrates how words are
connected in a sentence. Parsing helps in understanding the hierarchy and dependencies among words
in a sentence.
Coreference Resolution:
Coreference resolution deals with identifying when different words or phrases in a text refer to the same
entity. It helps maintain context and coherence in text understanding.
The heuristic value of all states is given in the table below, so you will traverse the given graph using the A* algorithm.
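Since the graph and heuristic table for this question are not reproduced here, the sketch below runs A* on a hypothetical graph with made-up edge costs and heuristic values:

```python
# A* search: expand by f(n) = g(n) + h(n). Graph, costs, and h are hypothetical.
import heapq

graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 6)],
         "B": [("G", 2)], "G": []}
h = {"S": 5, "A": 4, "B": 2, "G": 0}   # admissible estimates of cost to goal G

def a_star(start, goal):
    # frontier entries: (f = g + h, g, node, path so far)
    frontier = [(h[start], 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g                 # first goal pop is optimal
        for nbr, cost in graph[node]:
            g2 = g + cost
            if g2 < best_g.get(nbr, float("inf")):
                best_g[nbr] = g2
                heapq.heappush(frontier, (g2 + h[nbr], g2, nbr, path + [nbr]))
    return None, float("inf")

print(a_star("S", "G"))
```

Here A* finds S -> A -> B -> G at cost 5, cheaper than the direct-looking S -> B -> G at cost 6; with the paper's actual table the same procedure applies.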
Improved Interpretability:
Simpler models are often easier to interpret and explain to stakeholders. Selecting relevant attributes
can make it clearer how the model is making predictions.
Simplification of Deployment:
Models with fewer attributes are generally easier to deploy in production systems. This reduces the
complexity of integration and maintenance.
Cost Reduction:
Collecting and processing data can be costly. By selecting only the most important features, you can save
resources and reduce the expenses associated with data collection and storage.
Raw data is the fuel of machine learning algorithms. But just like we
cannot put crude oil into a car and instead we must use gasoline,
machine learning algorithms expect data to be formatted in a certain
way before the training process can begin. In order to prepare the data
for ingestion by machine learning algorithms, the data must be
preprocessed and converted into the right format. Explain following
preprocessing techniques with at-least one example each
1. Binarization
2. Mean removal
3. Scaling
4. Normalization
Data preprocessing is a critical step in preparing raw data for machine learning algorithms. These
techniques help make the data more suitable for model training by addressing issues such as varying
scales, outliers, and non-standard formats. Let's discuss each of the mentioned preprocessing techniques
with examples:
Binarization:
Binarization converts numerical values into binary values (0 or 1) by comparing them against a threshold.
Example:
Suppose you have a dataset of temperature values in Celsius, and you want to convert it into a binary format where temperatures above 25 degrees Celsius are considered hot (1) and temperatures below or equal to 25 degrees Celsius are considered not hot (0).
import numpy as np
temperatures = np.array([18.2, 30.5, 25.0, 27.3, 21.1])  # sample temperatures in Celsius
threshold = 25
hot_or_not = (temperatures > threshold).astype(int)  # 1 if above threshold, else 0
print(hot_or_not)  # [0 1 0 1 0]
In this example, temperatures above 25°C are converted to 1 (hot), and temperatures equal to or below 25°C are converted to 0 (not hot).
Mean Removal:
Mean removal (centering) subtracts the mean of a feature from every value so that the data is centered around zero.
Example:
Let's say you have a dataset of exam scores, and you want to remove the mean score from each student's exam score to make the data centered around zero.
import numpy as np
exam_scores = np.array([72.0, 85.0, 90.0, 60.0, 78.0])  # sample exam scores
mean_score = np.mean(exam_scores)
centered_scores = exam_scores - mean_score  # subtract the mean from every score
print(centered_scores)
In this example, subtracting the mean score from each exam score results in a dataset where the mean is approximately zero.
Scaling:
Scaling involves transforming the numerical values of a feature to fit within a specific range or scale. It
helps prevent features with larger scales from dominating the learning process and ensures that all
features contribute equally to the model.
Example:
Suppose you have a dataset with features on very different scales and want to rescale them to [0, 1].
import numpy as np
from sklearn.preprocessing import MinMaxScaler
data = np.array([[1.0, 200.0], [2.0, 500.0], [4.0, 800.0]])  # sample features on different scales
scaler = MinMaxScaler()
scaled_data = scaler.fit_transform(data)
print(scaled_data)
The Min-Max scaler transforms the features so that they are all within the range [0, 1].
Normalization:
Normalization is the process of scaling individual data points to have a unit norm (usually L2 norm). It is
often used in scenarios where the direction or relative magnitude of data points is more important than
their absolute values.
Example:
Imagine you have a dataset of user reviews, and you want to normalize the word counts of each review
to emphasize the overall word distribution regardless of review length.
import numpy as np
from sklearn.preprocessing import Normalizer
data = np.array([[3.0, 4.0], [10.0, 0.0]])  # sample word-count vectors
normalizer = Normalizer(norm='l2')  # scale each row to unit L2 norm
normalized_data = normalizer.transform(data)
print(normalized_data)
In this example, each review's word-count vector is rescaled to length 1, so reviews of different lengths become directly comparable.