AI Past Paper Solutions by M. Noman Tariq


What is Machine Learning? Explain its types.

Machine Learning is a subfield of artificial intelligence (AI) that focuses on the development of
algorithms and models that enable computers to learn and make predictions or decisions without being
explicitly programmed. The main types of machine learning are:

Supervised Learning:
In supervised learning, the algorithm is trained on a labeled dataset, where each data point is paired
with the correct output or target. The algorithm learns to map input data to the correct output by
finding patterns and relationships in the labeled examples. Common applications include image
classification, spam email detection, and regression problems.

Unsupervised Learning:
Unsupervised learning involves training the algorithm on an unlabeled dataset, where there are no
predefined output labels. The goal is to discover hidden patterns, structure, or clusters in the data.
Common techniques include clustering, dimensionality reduction, and density estimation. Examples
include customer segmentation and anomaly detection.

Reinforcement Learning:
Reinforcement learning is a type of machine learning where an agent learns to make decisions by
interacting with an environment. The agent receives feedback in the form of rewards or penalties based
on its actions, and it aims to maximize its cumulative reward over time. This type of learning is often
used in robotics, game playing, and autonomous systems.

Semi-Supervised Learning:
Semi-supervised learning is a type of machine learning that combines elements of both supervised and
unsupervised learning. In semi-supervised learning, the training dataset contains a combination of
labeled and unlabeled data. The algorithm uses the small amount of labeled data to learn patterns and
then generalizes that knowledge to make predictions on the larger set of unlabeled data.



What is the difference between deterministic and non-deterministic algorithms?
Deterministic Algorithm vs. Non-deterministic Algorithm

1. A deterministic algorithm is one whose behavior is completely determined by its inputs and the sequence of its instructions. A non-deterministic algorithm is one in which the outcome cannot be predicted with certainty, even if the inputs are known.

2. For a particular input, a deterministic algorithm will always give the same output, while a non-deterministic algorithm may give different outputs on different executions.

3. A deterministic algorithm can solve its problem in polynomial time; a non-deterministic one cannot.

4. A deterministic algorithm can determine the next step of execution. A non-deterministic algorithm cannot, because there is more than one path the algorithm can take.

5. In a deterministic algorithm, operations are uniquely defined; in a non-deterministic algorithm, they are not.

6. Deterministic examples: linear search and binary search. Non-deterministic example: the 0/1 knapsack problem.

7. Deterministic algorithms usually have a well-defined worst-case time complexity, whereas the time complexity of non-deterministic algorithms is often described in terms of expected running time.

8. Examples of deterministic algorithms include sorting algorithms such as bubble sort, insertion sort, and selection sort, as well as many numerical algorithms. Examples of non-deterministic algorithms include probabilistic algorithms such as Monte Carlo methods, genetic algorithms, and simulated annealing.



Explain Brute-Force Algorithms
In the brute-force sort technique, the data list is scanned multiple times to find the smallest element in the list. After each pass over the list, the smallest element found is placed at the front of the unsorted portion of the list, and the next pass starts from the next position.

Algorithm
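
A minimal Python sketch of this brute-force (selection-sort-style) technique, assuming a plain list of comparable values (the function name brute_force_sort is illustrative):

def brute_force_sort(data):
    # Scan the unsorted remainder of the list on every pass.
    for i in range(len(data)):
        # Find the index of the smallest element in data[i:].
        smallest = i
        for j in range(i + 1, len(data)):
            if data[j] < data[smallest]:
                smallest = j
        # Move the smallest element to the front of the unsorted portion.
        data[i], data[smallest] = data[smallest], data[i]
    return data

print(brute_force_sort([29, 10, 14, 37, 13]))  # [10, 13, 14, 29, 37]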

What is an agent? Explain its types.


An agent is a term commonly used in various fields, including computer science, economics, and
philosophy, to refer to an entity that acts autonomously or on behalf of another entity to achieve specific
goals or objectives. Agents are characterized by their capacity to perceive their environment, make
decisions, and take actions to accomplish tasks or optimize outcomes.

Types of Agents:
Simple Reflex Agents:
These agents make decisions based solely on the current percept or input from their environment. They
follow predefined rules or condition-action pairs and do not consider past actions or future
consequences. Simple reflex agents are limited in their ability to handle dynamic or complex
environments.

Model-Based Reflex Agents:


These agents build an internal model or representation of their environment and use it to make
decisions. They can consider the history of past perceptions and actions to choose actions that lead to
better outcomes. However, they may still have limitations in handling highly dynamic situations.

Goal-Based Agents:
Goal-based agents have explicit goals or objectives they aim to achieve. They use their internal models
and reasoning capabilities to plan a sequence of actions that will lead to the desired outcome. These
agents can adapt to changing conditions and are more flexible than reflex agents.

Utility-Based Agents:
Utility-based agents make decisions by considering not only their goals but also the expected utility or
value associated with different actions and outcomes. They prioritize actions that maximize their
expected utility. This type of agent is commonly used in economics and decision theory.



Learning Agents:
Learning agents have the ability to improve their performance over time through experience. They can
adapt to new environments or tasks by learning from their interactions. Machine learning and
reinforcement learning algorithms are often used to implement learning agents.

Explain any three types of learning.

1. Supervised Learning:
Supervised learning is a machine learning paradigm in which algorithms learn patterns and relationships
from a labeled dataset. In supervised learning, the dataset used for training consists of input data paired
with corresponding output labels. The primary goal is for the algorithm to learn a mapping function that
can accurately predict or classify new, unseen data based on its input.

For example, in image classification, the algorithm is trained on a dataset of images where each image is
associated with a label (e.g., cat or dog). The algorithm learns to recognize features and patterns in the
images that are indicative of the labels. Once trained, it can classify new, unlabeled images into the
appropriate categories. Supervised learning is widely used in various applications, including natural
language processing for sentiment analysis, speech recognition, and recommendation systems.
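
A minimal supervised-learning sketch, assuming scikit-learn is available; the built-in digits dataset stands in for the image-classification example above:

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Labeled dataset: 8x8 digit images paired with their digit labels.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Learn a mapping from inputs to labels, then classify unseen images.
model = KNeighborsClassifier().fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))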

2. Unsupervised Learning:
Unsupervised learning is a machine learning approach where algorithms analyze unlabeled data to
discover inherent structures, patterns, or relationships within the data. Unlike supervised learning, there
are no predefined output labels in unsupervised learning. Instead, the algorithm's objective is to find
meaningful groupings or representations within the data.

Two common types of unsupervised learning are:

Clustering:

Clustering algorithms group similar data points together based on their features.

Dimensionality Reduction:

Dimensionality reduction techniques aim to reduce the complexity of data by transforming it into a
lower-dimensional representation while preserving essential information.
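
A minimal sketch of both techniques, assuming scikit-learn and NumPy; the two synthetic blobs are an illustrative stand-in for real unlabeled data:

import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# Unlabeled data: two loose blobs of 2-D points.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])

# Clustering: group similar points together without any labels.
labels = KMeans(n_clusters=2, n_init=10).fit_predict(X)

# Dimensionality reduction: project the 2-D points onto one axis.
X_1d = PCA(n_components=1).fit_transform(X)
print(labels[:10], X_1d[:3].ravel())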

3. Reinforcement Learning:
Reinforcement learning is a machine learning paradigm where an agent interacts with an environment
and learns to make a sequence of decisions to maximize a cumulative reward. In this learning process,
the agent receives feedback from the environment in the form of rewards or penalties based on its
actions. The primary goal is for the agent to discover an optimal policy (a strategy) that leads to the
highest long-term reward.
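
A minimal tabular Q-learning sketch on a toy five-state chain, where the agent earns a reward of 1 only at the rightmost state (the environment, learning rates, and rewards are all illustrative assumptions):

import random

n_states, actions = 5, [-1, +1]          # move left or right along a chain
Q = [[0.0, 0.0] for _ in range(n_states)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(500):
    s = 0
    while s != n_states - 1:
        # Epsilon-greedy action selection.
        a = random.randrange(2) if random.random() < epsilon else Q[s].index(max(Q[s]))
        s2 = min(max(s + actions[a], 0), n_states - 1)
        r = 1.0 if s2 == n_states - 1 else 0.0      # reward only at the goal
        # Q-learning update toward reward plus discounted best future value.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

print([q.index(max(q)) for q in Q])  # argmax action per state; states 0-3 learn 1 (right)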

What is Knowledge Representation?

Define objects and frames.


Objects:
Objects are like the building blocks of knowledge in AI. They represent things or entities in the world that
AI systems need to understand. These things can be physical, like a chair or a tree, or abstract, like a
concept or an idea. Objects have characteristics, or attributes, that describe them, and they can be
connected or related to other objects in various ways. For example, a "car" is an object with attributes
like "color," "brand," and "model," and it can be related to a "driver" as a person who operates it.

Frames:
Frames are structures used to organize information about objects. Think of frames as templates that
help AI systems make sense of the world. Each frame has slots, which are like labeled boxes where you

MR.NOMAN.TARIQ@OUTLOOK.COM 0309-6054532. (IF FIND ANY MISTAKE CONTACT ME )


can put specific details or attributes about an object. For example, if you have a "Car" frame, it might
have slots for "color," "brand," and "model." When you fill in these slots with specific information, you
create a complete picture of a particular car. Frames make it easier for AI to store, retrieve, and reason
about information related to objects.
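
A minimal sketch of the "Car" frame as a Python dictionary whose keys act as slots (the slot values below are illustrative):

# A "Car" frame: a template with labeled slots.
car_frame = {"color": None, "brand": None, "model": None, "driver": None}

# Filling the slots produces a description of one particular car.
my_car = dict(car_frame, color="red", brand="Toyota", model="Corolla", driver="Ali")
print(my_car["brand"], my_car["color"])  # Toyota red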

Write the algorithm of means-ends analysis.


Step 1: Compare CURRENT to GOAL. If there are no differences between them, return Success and exit.

Step 2: Otherwise, select the most significant difference and reduce it by doing the following steps until success or failure occurs:

a) Select a new operator O that is applicable to the current difference; if there is no such operator, signal failure.

b) Attempt to apply operator O to CURRENT. Make a description of two states:

i) O-Start, a state in which O's preconditions are satisfied.

ii) O-Result, the state that would result if O were applied in O-Start.

c) If

FIRST-PART ← MEA(CURRENT, O-Start)

and

LAST-PART ← MEA(O-Result, GOAL)

are successful, then signal Success and return the result of combining FIRST-PART, O, and LAST-PART.
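
A minimal recursive sketch of this procedure in Python, assuming operators are given as (name, preconditions, results) tuples over sets of facts; all names here are illustrative, and this simple version has no loop detection:

def mea(current, goal, operators):
    # Step 1: no difference between CURRENT and GOAL means success.
    diff = goal - current
    if not diff:
        return []
    # Step 2: pick an operator O whose results reduce the difference.
    for name, pre, result in operators:
        if diff & result:
            first = mea(current, pre, operators)      # reach O-Start (O's preconditions)
            if first is None:
                continue
            after = current | pre | result            # O-Result: state after applying O
            last = mea(after, goal, operators)        # go from O-Result to GOAL
            if last is not None:
                return first + [name] + last          # FIRST-PART + O + LAST-PART
    return None  # signal failure: no applicable operator

ops = [("paint", frozenset({"have_brush"}), frozenset({"wall_painted"})),
       ("buy_brush", frozenset(), frozenset({"have_brush"}))]
print(mea(frozenset(), frozenset({"wall_painted"}), ops))  # ['buy_brush', 'paint']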

Find the values of h1, h2, o1, o2, E1, E2, and the total error of forward propagation in a neural network.

The worked calculations give h1 and h2 (hidden-layer outputs), o1 and o2 (output-layer outputs), and the per-output errors E1 and E2; the resulting total error is Etotal = E1 + E2 = 0.24908.
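
Since the network's weights and inputs come from a figure not reproduced here, the sketch below uses illustrative values on the usual 2-2-2 sigmoid layout just to show how each quantity is computed:

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Illustrative inputs, weights, biases, and targets (not the exam's figure).
i1, i2 = 0.05, 0.10
w1, w2, w3, w4 = 0.15, 0.20, 0.25, 0.30   # input -> hidden weights
w5, w6, w7, w8 = 0.40, 0.45, 0.50, 0.55   # hidden -> output weights
b1, b2 = 0.35, 0.60
t1, t2 = 0.01, 0.99                        # target outputs

# Hidden layer: weighted sum of the inputs, then sigmoid.
h1 = sigmoid(w1 * i1 + w2 * i2 + b1)
h2 = sigmoid(w3 * i1 + w4 * i2 + b1)

# Output layer: weighted sum of the hidden activations, then sigmoid.
o1 = sigmoid(w5 * h1 + w6 * h2 + b2)
o2 = sigmoid(w7 * h1 + w8 * h2 + b2)

# Squared error per output, and the total error E1 + E2.
E1 = 0.5 * (t1 - o1) ** 2
E2 = 0.5 * (t2 - o2) ** 2
print(h1, h2, o1, o2, E1 + E2)   # Etotal is about 0.2984 for these numbers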



Write the names and evaluation/heuristic functions of two informed search algorithms.
A* Algorithm
A* is a graph-traversal algorithm that finds the shortest path in a weighted graph by considering both the actual cost from the start node, g(n), and a heuristic estimate of the cost to the goal node, h(n). Its evaluation function is f(n) = g(n) + h(n).

Greedy Best-First Search

Greedy Best-First Search is a heuristic search algorithm that selects nodes to explore based solely on their estimated cost to the goal, without considering the path cost; its evaluation function is f(n) = h(n). It prioritizes the nodes that seem closest to the goal.
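
A minimal A* sketch in Python, assuming the graph is an adjacency dict of (neighbor, cost) pairs and h is a heuristic table (all values illustrative):

import heapq

def a_star(graph, h, start, goal):
    # Priority queue ordered by f(n) = g(n) + h(n).
    frontier = [(h[start], 0, start, [start])]
    best_g = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if node in best_g and best_g[node] <= g:
            continue                      # already reached this node more cheaply
        best_g[node] = g
        for nbr, cost in graph[node]:
            heapq.heappush(frontier, (g + cost + h[nbr], g + cost, nbr, path + [nbr]))
    return None, float("inf")

graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)],
         "C": [("D", 1)], "D": []}
h = {"A": 3, "B": 2, "C": 1, "D": 0}      # assumed admissible heuristic
print(a_star(graph, h, "A", "D"))          # (['A', 'B', 'C', 'D'], 3)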



Name the data structures used in best-first search, depth-first search, and breadth-first search.
1. Best-First Search:

Best-First Search uses a priority queue (min-heap) to keep track of the nodes to be explored. Nodes are dequeued based on their heuristic or evaluation-function values; the node with the lowest value is explored first.

2. Depth First Search (DFS):

Depth First Search uses a stack (either explicitly implemented or the call stack in a recursive
implementation) to keep track of nodes. It explores a branch of the graph as deeply as possible before
backtracking.

3. Breadth First Search (BFS):

Breadth First Search uses a queue to keep track of nodes. It explores all the nodes at the current level
before moving on to the next level. This ensures that nodes at a shallower depth are explored before
deeper ones, leading to a breadth-first traversal.
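
In Python, these three structures map naturally onto heapq (priority queue), a plain list used as a stack, and collections.deque (queue); a minimal illustration:

import heapq
from collections import deque

# Best-first search: priority queue keyed by heuristic value.
frontier = []
heapq.heappush(frontier, (2, "B"))
heapq.heappush(frontier, (1, "A"))
print(heapq.heappop(frontier))   # (1, 'A') -- lowest heuristic value first

# Depth-first search: stack (LIFO).
stack = ["A", "B"]
print(stack.pop())               # 'B' -- most recently pushed first

# Breadth-first search: queue (FIFO).
queue = deque(["A", "B"])
print(queue.popleft())           # 'A' -- earliest enqueued first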

Name the steps of Natural Language Processing pipelines.

Step 1: Sentence segmentation

Step 2: Word tokenization

Step 3: Stemming

Step 4: Lemmatization

Step 5: Stop word analysis

Step 6: Dependency parsing

Step 7: Part-of-speech (POS) tagging
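
A minimal sketch of most of these steps using NLTK, assuming its standard resources have been fetched once via nltk.download(...); dependency parsing (Step 6) is omitted since it typically requires a separate library such as spaCy:

import nltk
from nltk.tokenize import sent_tokenize, word_tokenize
from nltk.stem import PorterStemmer, WordNetLemmatizer
from nltk.corpus import stopwords

# Assumes the punkt, wordnet, averaged_perceptron_tagger, and stopwords
# resources have already been downloaded.
text = "The cats are running. They ran fast."

sentences = sent_tokenize(text)                               # Step 1: sentences
words = word_tokenize(sentences[0])                           # Step 2: tokens
stems = [PorterStemmer().stem(w) for w in words]              # Step 3: stemming
lemmas = [WordNetLemmatizer().lemmatize(w) for w in words]    # Step 4: lemmatization
content = [w for w in words
           if w.lower() not in stopwords.words("english")]    # Step 5: stop words
tags = nltk.pos_tag(words)                                    # Step 7: POS tagging
print(sentences, stems, lemmas, content, tags, sep="\n")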

What do you mean by genetic algorithms? What kinds of problems are best suited for genetic algorithms?
Genetic algorithms (GAs) are a type of optimization algorithm inspired by the process of natural
selection and genetics. They are used to find approximate solutions to optimization and search problems
by mimicking the process of evolution.

Kinds of problems best suited for genetic algorithms:

Problems with no direct mathematical solution; complex, multi-modal, or non-linear problems; combinatorial problems; parameter optimization; multi-objective optimization; and search spaces with many variables.
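
A minimal genetic-algorithm sketch in Python for the classic OneMax problem (maximize the number of 1s in a bit string); the population size, rates, and fitness function are illustrative choices:

import random

def fitness(bits):                    # objective: count of 1s (OneMax)
    return sum(bits)

def evolve(pop_size=20, length=16, generations=50):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population as parents.
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)     # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(length)          # mutation: flip one bit
            child[i] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))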



Briefly explain the difference between a graph and a tree, with a diagram.

Basis of Comparison: Graph vs. Tree

Definition: A graph is a non-linear data structure. A tree is also a non-linear data structure.

Structure: A graph is a collection of vertices/nodes and edges. A tree is a collection of nodes and edges.

Cycles and connectivity: A graph can be connected or disconnected, can have cycles or loops, and does not necessarily have a root node. A tree is a type of graph that is connected, acyclic (it has no cycles or loops), and has a single root node.

Edges: In a graph, each node can have any number of edges. A tree with n nodes has exactly n - 1 edges.

Types of edges: Graph edges can be directed or undirected. In a rooted tree, edges are conventionally treated as directed from parent to child.

Root node: There is no unique node called the root in a graph. There is a unique node called the root (parent) node in a tree.

Loop formation: A cycle can be formed in a graph. There will not be any cycle in a tree.

Traversal: For graph traversal, we use Breadth-First Search (BFS) and Depth-First Search (DFS). We traverse a tree using in-order, pre-order, or post-order traversal methods.

Applications: Graphs are used for finding the shortest path in networking. Trees are used for game trees and decision trees.

Node relationships: In a graph, nodes can have any number of connections to other nodes, and there are no strict parent-child relationships. In a tree, each node (except the root node) has one parent node and zero or more child nodes.

Run DFS on a tree (construct it yourself) and update the data structure accordingly. Also show the final output after the traversal.

Depth-First Search Algorithm for Tree Traversal

1. Initialize an empty stack to keep track of nodes to be visited.

2. Start at the root node of the tree.

3. Push the root node onto the stack.

4. While the stack is not empty:

a. Pop a node from the stack and process it (e.g., print its value or update it as needed).

b. Push any unvisited child nodes onto the stack, starting with the right child (if present) and then the left child (if present). This ensures that the left child is processed before the right child, since the stack follows LIFO (Last-In-First-Out) order.
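
A minimal Python sketch of this procedure, assuming the tree is built from nested (value, left, right) tuples:

def dfs(root):
    order = []
    stack = [root]                  # steps 1-3: stack seeded with the root
    while stack:                    # step 4
        node = stack.pop()          # 4a: pop and process
        if node is None:
            continue
        value, left, right = node
        order.append(value)
        stack.append(right)         # 4b: push the right child first...
        stack.append(left)          # ...so the left child is processed first
    return order

#        1
#       / \
#      2   3
#     / \
#    4   5
tree = (1, (2, (4, None, None), (5, None, None)), (3, None, None))
print(dfs(tree))   # final output after traversal: [1, 2, 4, 5, 3]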



Explain the uses of computer vision in artificial intelligence
Computer vision is a field of artificial intelligence (AI) that focuses on enabling computers to interpret
and understand visual information from the world, much like humans do with their eyes and brain. It
involves developing algorithms and models that can analyze and extract meaningful insights from images
and videos. Computer vision has numerous applications across various industries and is an integral part
of AI.

Uses of computer vision in artificial intelligence:


Image Classification:
Computer vision is often used to classify images into predefined categories or labels. This is a
fundamental task in many applications, such as identifying objects in photos, detecting diseases in
medical images, or categorizing products in e-commerce.

Object Detection:
Computer vision can locate and identify specific objects within an image or video stream. This is crucial
in various scenarios, including surveillance, autonomous vehicles, and robotics.

Facial Recognition:
Facial recognition technology uses computer vision to identify and verify individuals based on their facial
features. It has applications in security, access control, and user authentication.

OCR (Optical Character Recognition):


OCR technology is used to convert printed or handwritten text into machine-readable text. It finds
applications in digitizing printed documents, automating data entry, and aiding visually impaired
individuals.



Medical Image Analysis:
Computer vision is used in the analysis of medical images, including X-rays, MRIs, and CT scans. It can
help in disease diagnosis, treatment planning, and monitoring patient health.

Autonomous Vehicles:
Self-driving cars rely heavily on computer vision to perceive their surroundings. Cameras and sensors
capture and process visual data to make decisions about navigation and avoiding obstacles.

Augmented Reality (AR):


AR applications overlay digital information or virtual objects onto the real-world view. Computer vision is
essential for tracking and aligning virtual objects with the user's perspective.

What is an intelligent agent in AI, and where are intelligent agents used?
An intelligent agent in AI is a software or hardware system that perceives its environment, processes
information, and takes actions to achieve specific goals or objectives. These agents are designed to
mimic certain aspects of human intelligence, such as problem-solving, learning, decision-making, and
adapting to changing circumstances. Intelligent agents can operate autonomously or semi-autonomously
and are a fundamental concept in artificial intelligence and robotics.

Uses of AI Agents
Reinforcement Learning:
In reinforcement learning, intelligent agents learn to make decisions by interacting with an environment
and receiving feedback in the form of rewards or punishments. These agents are used in applications like
game playing (e.g., AlphaGo), robotics, and autonomous vehicles.

Expert Systems:
Expert systems are AI applications that use knowledge-based reasoning to solve complex problems in
specific domains. They are used in fields such as healthcare for medical diagnosis, in finance for
investment advice, and in engineering for troubleshooting.

Chatbots and Virtual Assistants:


Chatbots and virtual assistants, like Siri, Alexa, and Google Assistant, are intelligent agents that can
understand and respond to natural language queries, perform tasks, and provide information or services
to users.

Autonomous Robots:
Autonomous robots, such as self-driving cars and drones, use intelligent agents to navigate their
environments, make decisions, and respond to changing conditions without human intervention.

Recommendation Systems:
Online platforms like Netflix, Amazon, and Spotify use intelligent agents to analyze user data and
recommend products, movies, music, or content that aligns with individual preferences.



Data Analysis and Prediction:
In data science and analytics, intelligent agents can be used to analyze large datasets, make predictions,
and identify patterns or anomalies. This is useful in various industries, including finance, healthcare, and
marketing.

Natural Language Processing (NLP):


NLP agents can understand and generate human language. They are used in applications like sentiment
analysis, machine translation, and automated content generation.

Game Playing:
AI agents are employed in playing complex strategy games like chess, Go, and video games, often
competing at or above human skill levels.

Cybersecurity:
Intelligent agents are used to detect and respond to cybersecurity threats in real-time, helping
organizations protect their systems and data.

Industrial Automation:
In manufacturing and industrial settings, intelligent agents control and optimize processes, monitor
equipment health, and improve efficiency.

Financial Trading:
In the financial industry, intelligent agents execute high-frequency trading strategies and analyze market
data to make investment decisions.

Explain what NLP is. What are the various components of NLP?


Natural language processing (NLP) is a branch of artificial intelligence (AI) that enables machines to
understand human language. The main intention of NLP is to build systems that are able to make sense
of text and then automatically execute tasks like spell-check, text translation, topic classification, etc.
Companies today use NLP in artificial intelligence to gain insights from data and automate routine tasks.

Text Tokenization:
Text tokenization is the process of splitting a text into smaller units, typically words or phrases, called
tokens. This step is essential for breaking down text data into manageable pieces for further analysis.

Part-of-Speech Tagging (POS):


Part-of-speech tagging involves labeling each word in a sentence with its grammatical role, such as noun,
verb, adjective, adverb, etc. This information is crucial for understanding the syntax and grammatical
structure of a sentence.

Syntactic Parsing:
Syntactic parsing focuses on analyzing the grammatical structure and relationships between words in a
sentence. It typically produces a parse tree or a syntactic structure that illustrates how words are
connected in a sentence. Parsing helps in understanding the hierarchy and dependencies among words
in a sentence.



Semantic Analysis:
Semantic analysis goes beyond syntax and aims to understand the meaning of words and sentences. This
component allows NLP systems to infer the semantics or context of a text.

Named Entity Recognition (NER):


NER is the process of identifying and categorizing named entities in text, such as names of people,
organizations, locations, dates, and more. It's crucial for information extraction and structuring
unstructured text data.

Coreference Resolution:
Coreference resolution deals with identifying when different words or phrases in a text refer to the same
entity. It helps maintain context and coherence in text understanding.
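
A minimal named-entity-recognition sketch with spaCy, assuming the en_core_web_sm model has been installed (python -m spacy download en_core_web_sm):

import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple opened a new office in London on Monday.")
for ent in doc.ents:
    # Each entity carries its text span and a category label,
    # e.g. Apple -> ORG, London -> GPE, Monday -> DATE.
    print(ent.text, ent.label_)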

The heuristic values of all states are given in the table below; traverse the given graph using the A* algorithm.



In artificial intelligence, we explored the components of a machine learning pipeline. A critical component of the pipeline is deciding which features will be used as inputs to the model. For many models, a small subset of the input variables provides the lion's share of the predictive ability. In most datasets, it is common for a few features to be responsible for the majority of the information signal, while the rest of the features are mostly noise. It is therefore important to reduce the number of input features.

Describe a variety of reasons to justify attribute selection.

There are several reasons to justify attribute selection:


Improved Model Performance:
Selecting the most relevant features can lead to a simpler and more accurate model. When you use only
the essential attributes, your model may perform better because it focuses on the most important
information.



Reduced Overfitting:
Using fewer attributes reduces the risk of overfitting, where a model learns noise in the data rather than
the underlying patterns. A simpler model is less likely to make predictions based on random fluctuations.

Faster Training and Inference:


With fewer features, the training process and making predictions with your model become faster. This is
especially important when working with large datasets or real-time applications.

Improved Interpretability:
Simpler models are often easier to interpret and explain to stakeholders. Selecting relevant attributes
can make it clearer how the model is making predictions.

Data Quality and Noise Reduction:


Removing irrelevant or noisy features can enhance the quality of your data, making it more reliable for
training and improving model robustness.

Simplification of Deployment:
Models with fewer attributes are generally easier to deploy in production systems. This reduces the
complexity of integration and maintenance.

Cost Reduction:
Collecting and processing data can be costly. By selecting only the most important features, you can save
resources and reduce the expenses associated with data collection and storage.
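
A minimal attribute-selection sketch, assuming scikit-learn's SelectKBest on the built-in iris dataset (k=2 is an arbitrary illustrative choice):

from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, f_classif

X, y = load_iris(return_X_y=True)
# Score all four features against the labels and keep the best two.
selector = SelectKBest(score_func=f_classif, k=2)
X_reduced = selector.fit_transform(X, y)
print(X.shape, "->", X_reduced.shape)   # (150, 4) -> (150, 2)
print(selector.get_support())           # boolean mask of the selected attributes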

Raw data is the fuel of machine learning algorithms. But just like we
cannot put crude oil into a car and instead we must use gasoline,
machine learning algorithms expect data to be formatted in a certain
way before the training process can begin. In order to prepare the data
for ingestion by machine learning algorithms, the data must be
preprocessed and converted into the right format. Explain the following
preprocessing techniques with at least one example each:

1. Binarization
2. Mean removal
3. Scaling
4. Normalization
Data preprocessing is a critical step in preparing raw data for machine learning algorithms. These
techniques help make the data more suitable for model training by addressing issues such as varying
scales, outliers, and non-standard formats. Let's discuss each of the mentioned preprocessing techniques
with examples:



Binarization:
Binarization is the process of converting numerical data into binary values (0 or 1) based on a specified
threshold. It is commonly used for feature engineering when you want to turn continuous data into
categorical data.

Example:
Suppose you have a dataset of temperature values in Celsius, and you want to convert it into a binary
format where temperatures above 25 degrees Celsius are considered hot (1) and temperatures below or
equal to 25 degrees Celsius are considered not hot (0).

import numpy as np

temperatures = np.array([22, 28, 19, 30, 23, 27])
threshold = 25
# Values above the threshold become 1 (hot); the rest become 0 (not hot).
hot_or_not = (temperatures > threshold).astype(int)
print(hot_or_not)  # [0 1 0 1 0 1]

In this example, temperatures above 25°C are converted to 1 (hot), and temperatures equal to or below
25°C are converted to 0 (not hot).

Mean Removal (Centering):


Mean removal, also known as centering or mean subtraction, involves subtracting the mean (average) of
a feature from each data point. This helps in eliminating the bias from the data and centers it around
zero.

Example:

Let's say you have a dataset of exam scores, and you want to remove the mean score from each
student's exam score to make the data centered around zero.

import numpy as np

exam_scores = np.array([85, 92, 78, 95, 88])
# Subtract the mean so the data is centered around zero.
mean_score = np.mean(exam_scores)
centered_scores = exam_scores - mean_score
print(centered_scores)  # [-2.6  4.4 -9.6  7.4  0.4]

In this example, subtracting the mean score from each exam score results in a dataset where the mean is
approximately zero.

Scaling:
Scaling involves transforming the numerical values of a feature to fit within a specific range or scale. It
helps prevent features with larger scales from dominating the learning process and ensures that all
features contribute equally to the model.



Example:
Let's say you have a dataset of house prices with features like square footage and number of bedrooms.
You want to scale these features to a range between 0 and 1.

import numpy as np
from sklearn.preprocessing import MinMaxScaler

data = np.array([[1500, 3], [2000, 4], [1200, 2], [2500, 5]])
# Rescale each feature (column) independently to the range [0, 1].
scaler = MinMaxScaler()
scaled_data = scaler.fit_transform(data)
print(scaled_data)

The Min-Max scaler transforms the features so that they are all within the range [0, 1].

Normalization:
Normalization is the process of scaling individual data points to have a unit norm (usually L2 norm). It is
often used in scenarios where the direction or relative magnitude of data points is more important than
their absolute values.

Example:
Imagine you have a dataset of user reviews, and you want to normalize the word counts of each review
to emphasize the overall word distribution regardless of review length.

import numpy as np
from sklearn.preprocessing import Normalizer

data = np.array([[2, 3, 4], [1, 0, 1], [6, 8, 10]])
# Scale each row (data point) to unit L2 norm.
normalizer = Normalizer(norm='l2')
normalized_data = normalizer.transform(data)
print(normalized_data)



What are decision trees? Why, when, how, and where do we use them? Explain with an example.
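
A minimal sketch of a decision tree in practice, assuming scikit-learn's DecisionTreeClassifier on the built-in iris dataset (max_depth=2 is an illustrative choice to keep the tree readable):

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Decision trees learn interpretable if-then splits over the features and
# work for classification and regression on tabular data with little
# preprocessing, which is why they are a common first model choice.
tree = DecisionTreeClassifier(max_depth=2).fit(X_train, y_train)
print("test accuracy:", tree.score(X_test, y_test))
print(export_text(tree))   # human-readable splits over the feature thresholds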
