
HILL CLIMBING ALGORITHM

INTRODUCTION:
• The hill climbing algorithm is a local search algorithm that continuously moves in the direction of
increasing elevation/value to find the peak of the mountain, i.e., the best solution to the problem.
• It terminates when it reaches a peak where no neighbour has a higher value.
• It is also called greedy local search.
• Hill climbing is mostly used when a good heuristic is available.
• Hill climbing is a simple and intuitive algorithm that is easy to understand and implement.
• In this algorithm, we do not need to maintain a search tree or graph, as it keeps only a
single current state.

ALGORITHM AND HOW IT WORKS
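In outline: start from an initial state, evaluate the neighbouring states, move to the neighbour with the highest value, and stop when no neighbour improves on the current state. Below is a minimal Python sketch of this loop; the toy objective function and step generator in the usage are assumptions for illustration, not part of the original notes.

def hill_climb(start, neighbours, value):
    """Greedy local search: keep moving to the best-valued neighbour
    until no neighbour is better than the current state."""
    current = start
    while True:
        best = max(neighbours(current), key=value, default=None)
        if best is None or value(best) <= value(current):
            return current          # peak reached: no neighbour is higher
        current = best

# Toy usage: maximise f(x) = -(x - 3)**2 over integers, stepping by 1.
if __name__ == "__main__":
    f = lambda x: -(x - 3) ** 2
    step = lambda x: [x - 1, x + 1]
    print(hill_climb(0, step, f))   # climbs 0 -> 1 -> 2 -> 3 and stops at 3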

PROBLEMS IN HILL CLIMBING


1. LOCAL MAXIMA
2. FLAT MAXIMA (PLATEAU)
3. RIDGE

EXAMPLE : 8-PUZZLE PROBLEM USING HILL CLIMBING ALGORITHM
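The worked figure for this example did not carry over, so here is a hedged sketch of how the hill_climb function from the sketch above can be applied to the 8-puzzle, using the negated number of misplaced tiles as the value to maximise (the heuristic choice and start state are assumptions for illustration). On harder start states plain hill climbing frequently stalls at a local maximum, which is exactly the first problem listed above.

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # 0 marks the blank tile

def misplaced(state):
    # number of tiles out of place (blank excluded)
    return sum(1 for s, g in zip(state, GOAL) if s != 0 and s != g)

def moves(state):
    # slide the blank up/down/left/right to generate neighbour states
    i = state.index(0)
    row, col = divmod(i, 3)
    out = []
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            j = r * 3 + c
            s = list(state)
            s[i], s[j] = s[j], s[i]
            out.append(tuple(s))
    return out

start = (1, 2, 3, 4, 5, 6, 7, 0, 8)            # one move away from the goal
result = hill_climb(start, moves, lambda s: -misplaced(s))
print(result, misplaced(result))               # reaches GOAL here; harder starts can stall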


Best First Search Algorithm (GREEDY SEARCH)

INTRODUCTION
 The greedy best-first search algorithm always selects the path which appears best at that moment.
 It is a combination of depth-first search and breadth-first search algorithms.
 It uses a heuristic function to guide the search.
 Best-first search allows us to take the advantages of both algorithms. With the help of best-first
search, at each step, we can choose the most promising node.
 In the best first search algorithm, we expand the node which is closest to the goal node, and the
closeness is estimated by the heuristic function
f(n) = h(n)
Where, h(n) = estimated cost from node n to the goal.

ALGORITHM
• Step 1: Place the starting node into the OPEN list.
• Step 2: If the OPEN list is empty, Stop and return failure.
• Step 3: Remove the node n from the OPEN list which has the lowest value of h(n), and place it in
the CLOSED list.
• Step 4: Expand the node n, and generate the successors of node n.
• Step 5: Check each successor of node n, and find whether any node is a goal node or not. If any
successor node is goal node, then return success and terminate the search, else proceed to Step 6.
• Step 6: For each successor node, the algorithm computes the evaluation function f(n) and then checks whether the
node is already in the OPEN or CLOSED list. If the node is in neither list, add it to the
OPEN list.
• Step 7: Return to Step 2.
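The steps above translate almost directly into code. Below is a minimal Python sketch using a min-heap keyed on h(n) as the OPEN list; the graph (successors) and heuristic (h) arguments are placeholders to be supplied by the caller, and, mirroring Step 5, the goal test is applied to successors as they are generated.

import heapq

def greedy_best_first(start, goal, successors, h):
    """OPEN is a min-heap ordered by h(n); CLOSED is the set of expanded nodes."""
    open_list = [(h(start), start)]
    closed = set()
    parent = {start: None}           # also records which nodes have been seen
    while open_list:                               # Step 2: fail if OPEN empties
        _, n = heapq.heappop(open_list)            # Step 3: pop lowest h(n)
        if n in closed:
            continue
        closed.add(n)
        for s in successors(n):                    # Step 4: expand n
            if s == goal:                          # Step 5: goal test on successors
                parent[s] = n
                path = [s]
                while parent[path[-1]] is not None:
                    path.append(parent[path[-1]])
                return path[::-1]
            if s not in closed and s not in parent:    # Step 6: skip seen nodes
                parent[s] = n
                heapq.heappush(open_list, (h(s), s))
    return None                                    # Step 2: failure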

ADVANTAGES
• Best first search can switch between BFS and DFS by gaining the advantages of both the algorithms.
• This algorithm is more efficient than BFS and DFS algorithms.
• Greedy search is straightforward to implement and understand: it involves selecting the best
available option at each step.
• Greedy search typically requires less memory compared to algorithms like BFS, which need to store
the entire search tree. Since greedy search only keeps track of the current path and the best choice at
each step, it is more memory-efficient.
• Greedy search can be easily customized by adjusting the heuristic function or the criteria used to
select the best option at each step.
DISADVANTAGES
• It can behave as an unguided depth-first search in the worst case scenario.
• It can get stuck in a loop, as DFS can.
• This algorithm is not optimal.
• Since greedy search does not consider the long-term consequences of its choices, it may miss out on better
solutions that require sacrificing short-term gains.
• Greedy search does not perform backtracking, meaning it does not revisit or reconsider previously
made decisions.
• Greedy search is not guaranteed to find a solution, even if one exists. It may terminate prematurely
without finding a solution.

EXAMPLE
Consider the search problem below; we will traverse it using greedy best-first search. At each
iteration, each node is expanded using the evaluation function f(n) = h(n), which is given in the table below.

We use two lists, OPEN and CLOSED. Following are the iterations for traversing this
example.

Expand node S and put it in the CLOSED list. Initialization: Open [A, B], Closed [S]
Iteration 1: Open [A], Closed [S, B]
Iteration 2: Open [E, F, A], Closed [S, B]
: Open [E, A], Closed [S, B, F]
Iteration 3: Open [I, G, E, A], Closed [S, B, F]
: Open [I, E, A], Closed [S, B, F, G]
Hence the final solution path will be: S---> B--->F----> G
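The graph figure and heuristic table did not survive extraction, so the h values below are assumptions chosen to be consistent with the trace above; with them, the greedy_best_first sketch from the Algorithm section returns the same path.

graph = {'S': ['A', 'B'], 'A': [], 'B': ['E', 'F'],
         'E': [], 'F': ['I', 'G'], 'I': [], 'G': []}
h_values = {'S': 13, 'A': 12, 'B': 4, 'E': 8, 'F': 2, 'I': 9, 'G': 0}   # assumed

path = greedy_best_first('S', 'G', lambda n: graph[n], lambda n: h_values[n])
print(path)   # ['S', 'B', 'F', 'G'], matching the trace above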

COMPARISON
• Time Complexity: The worst case time complexity of greedy best-first search is O(b^m).
• Space Complexity: The worst case space complexity of greedy best-first search is O(b^m), where m
is the maximum depth of the search space and b is the branching factor.
• Complete: Greedy best-first search is incomplete, even if the given state space is finite.
• Optimal: Greedy best first search algorithm is not optimal.

CONCLUSION
In summary, greedy search offers a simple and efficient approach to problem-solving, making locally optimal choices at
each step. While it can provide fast solutions, it may overlook globally optimal paths and lacks the ability to
backtrack, potentially leading to suboptimal outcomes. Despite these limitations, its simplicity and
effectiveness make it a versatile algorithm in various real-world applications.
LEARNING THEORY

INTRODUCTION
• Learning is a fundamental process in which knowledge is acquired, and new ideas or concepts are
constructed based on experiences.
• In the context of machine learning, Tom Mitchell's definition states that a program learns from
experience (E) with respect to a task (T) and a performance measure (P) if its performance at T, as measured by P, improves with experience E.
There are two types of problems: well-posed and ill-posed. Computers excel at solving well-posed problems
because they have well-defined specifications with inherent components:
1. Class of Learning Tasks (T): Specifies the nature of the task to be performed.
2. Measure of Performance (P): Defines how well the system is performing the task.
3. Source of Experience (E): Provides the system with the necessary data and experiences to learn from.
In the learning process, (x) represents the input, (X) is the input space, and (Y) is the output space, which
encompasses all possible outputs (e.g., yes/no). The dataset (D) contains (n) inputs, and the target function
maps input (X) to output (Y).
• Objective: The goal is to select a function g: X -> Y that approximates the unknown target function (f).
• Learning Environment: The learning environment involves the interaction between the system and
its surroundings, as depicted in the diagram.

• Learning Model: The learning model consists of:


• Hypothesis Set: The set of possible hypotheses or functions that the learning algorithm can choose
from.
• Learning Algorithm: The mechanism that selects the best hypothesis based on the provided data
and experiences.
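A minimal sketch of this picture in Python: a finite hypothesis set of threshold functions and a learning algorithm that simply returns the hypothesis with the lowest error on the dataset D. The threshold family and the toy data are assumptions for illustration.

# D: inputs x with binary labels y (toy data, assumed for illustration)
D = [(1, 0), (2, 0), (3, 1), (4, 1)]

# Hypothesis set: threshold functions g_t(x) = 1 if x >= t else 0
hypotheses = [lambda x, t=t: int(x >= t) for t in range(6)]

def training_error(g, data):
    # fraction of examples the hypothesis gets wrong
    return sum(1 for x, y in data if g(x) != y) / len(data)

# Learning algorithm: pick the g in the hypothesis set with the lowest error
g = min(hypotheses, key=lambda g: training_error(g, D))
print([g(x) for x, _ in D])   # [0, 0, 1, 1] -- the threshold t=3 fits the labels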
Design a Learning System in Machine Learning:
Step 1: Choosing the Training Experience.
Attributes Impacting Success or Failure: The success of a machine learning model heavily relies on the
quality and relevance of the training data. Direct or indirect feedback in the training experience is crucial
for the algorithm to learn.
Step 2: Choosing the Target Function
Definition: The target function represents the goal the algorithm is trying to achieve; in chess, for example,
it is the NextMove function.
Significance: The choice of the target function defines the objective of the learning process, such as
making optimal moves in a game.
Step 3: Choosing Representation for Target Function
Representation Options: Linear equations, hierarchical graph representations, and tabular forms are examples
of how the target function can be represented.
Optimization: The representation chosen should facilitate the optimization of moves based on the learned target function.
Step 4: Choosing Function Approximation Algorithm
Purpose: Function approximation algorithms help the model generalize from training data to make
predictions on new, unseen data.
Learning from Examples: The algorithm learns from a set of examples, failures, and successes, refining
its understanding of the target function.
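As a concrete, hedged illustration: with a linear representation of the target function, V_hat(b) = w0 + w1*x1 + ... + wn*xn, a common function approximation algorithm (the LMS weight-update rule in Tom Mitchell's classic treatment) nudges each weight toward the training value after every example. The feature values, training value, and learning rate below are assumptions.

def lms_update(w, x, v_train, lr=0.1):
    """One LMS step: w_i <- w_i + lr * (V_train(b) - V_hat(b)) * x_i.
    x includes a leading 1 so that w[0] acts as the intercept w0."""
    v_hat = sum(wi * xi for wi, xi in zip(w, x))
    error = v_train - v_hat
    return [wi + lr * error * xi for wi, xi in zip(w, x)]

w = [0.0, 0.0, 0.0]                 # initial weights
for _ in range(50):                 # repeated passes over one training example
    w = lms_update(w, [1, 2, 3], v_train=5.0)
print(w)                            # V_hat([1, 2, 3]) converges toward 5.0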
Step 5: Final Design
Iteration and Learning: The final design emerges after the system has gone through numerous examples,
failures, and successes. It involves iterating on the model's understanding based on feedback.
Example: Deep Blue's win against Garry Kasparov showcases a real-world
application where machine learning excels in strategic decision-making.
Candidate Elimination Algorithm

What is Candidate Elimination Algorithm in Machine Learning?


• The Candidate Elimination Algorithm is a machine learning algorithm used for concept learning
and hypothesis space search in the context of content classification.
• Version Space: It is intermediate between the general hypothesis and the specific hypothesis. It keeps
not just one hypothesis but the set of all hypotheses consistent with the training dataset.

Algorithm:
Step 1: Load the data set.
Step 2: Initialize the General hypothesis G and the Specific hypothesis S.
Step 3: For each training example:
Step 4: If the example is positive:
if attribute_value == hypothesis_value:
do nothing
else:
replace the attribute value in S with '?' (basically generalizing it)
Step 5: If the example is negative:
make the general hypotheses in G more specific.

Step 1: Import the dataset (the EnjoySport training examples).

Step 2: Initialize the given General Hypothesis ‘G’ and Specific Hypothesis ‘S’.
S={Φ,Φ,Φ,Φ,Φ,Φ} because the EnjoySport dataset has six attributes.
G={?,?,?,?,?,?} for the same reason.

Step 3: The final hypotheses for S and G


G = [[‘sunny’, ?, ?, ?, ?, ?], [?, ‘warm’, ?, ?, ?, ?]]
S = [‘sunny’,’warm’,?,’strong’, ?, ?]
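The imported table from Step 1 survives only as a reference, so the runnable sketch below assumes the standard four-example EnjoySport dataset from Mitchell's textbook, which is consistent with the final S and G shown above. It is a simplified candidate elimination pass, not the full version-space machinery.

data = [
    (['sunny', 'warm', 'normal', 'strong', 'warm', 'same'],  'yes'),
    (['sunny', 'warm', 'high',   'strong', 'warm', 'same'],  'yes'),
    (['rainy', 'cold', 'high',   'strong', 'warm', 'change'], 'no'),
    (['sunny', 'warm', 'high',   'strong', 'cool', 'change'], 'yes'),
]

n = 6
S = ['Φ'] * n                     # most specific hypothesis
G = [['?'] * n]                   # set of maximally general hypotheses

def consistent(h, x):
    # h covers x when every attribute is '?' or matches exactly
    return all(hv in ('?', xv) for hv, xv in zip(h, x))

for x, label in data:
    if label == 'yes':
        # generalize S just enough to cover the positive example
        S = [xv if sv in ('Φ', xv) else '?' for sv, xv in zip(S, x)]
        # drop members of G that no longer cover the example
        G = [g for g in G if consistent(g, x)]
    else:
        # specialize each g in G so it excludes x, guided by S
        new_G = []
        for g in G:
            if not consistent(g, x):
                new_G.append(g)
                continue
            for i in range(n):
                if g[i] == '?' and S[i] not in ('Φ', '?', x[i]):
                    sp = g.copy()
                    sp[i] = S[i]
                    new_G.append(sp)
        G = new_G

print('S =', S)   # ['sunny', 'warm', '?', 'strong', '?', '?']
print('G =', G)   # [['sunny','?','?','?','?','?'], ['?','warm','?','?','?','?']]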
REGRESSION

INTRODUCTION
 Regression analysis is a statistical method for modelling the relationship between a dependent (target)
variable and one or more independent (predictor) variables.
 Regression is a supervised learning technique which helps in finding the correlation between
variables. It is mainly used for prediction, forecasting, time series modelling, and determining the
cause-and-effect relationship between variables.
 Regression shows a line or curve that passes through the datapoints on the target-predictor graph in
such a way that the vertical distance between the datapoints and the regression line is minimum.
The distance between the datapoints and the line tells whether the model has captured a strong relationship or
not.

The function of regression analysis is given by:


Y = f(X)

Here, Y is called the dependent variable and X is called the independent variable.

Applications of Regression Analysis


 Sales of goods or services
 Value of bonds in portfolio management
 Premiums charged by insurance companies
 Yield of crops in agriculture
 Prices of real estate
Types of Regression

Linear Regression: Single Independent Variable: Linear regression, also known as simple linear
regression, is used when there is a single independent variable (predictor) and one dependent
variable (target).
Equation: The linear regression equation takes the form: Y = β0 + β1X + ε, where Y is the
dependent variable, X is the independent variable, β0 is the intercept, β1 is the slope (coefficient),
and ε is the error term.
Multiple Regression: Multiple regression, as the name suggests, is used when there are two or more
independent variables (predictors) and one dependent variable (target). Equation: The multiple
regression equation extends the concept to multiple predictors: Y = β0 + β1X1 + β2X2 + ... + βnXn +
ε, where Y is the dependent variable, X1, X2, ..., Xn are the independent variables, β0 is the
intercept, β1, β2, ..., βn are the coefficients, and ε is the error term.
Polynomial Regression:
Use: Polynomial regression is an extension of linear regression used when the relationship
between the independent and dependent variables is non-linear.
Equation: The polynomial regression equation allows for higher-order terms, such as quadratic or
cubic terms: Y = β0 + β1X + β2X^2 + ... + βnX^n + ε. This allows the model to fit a curve rather
than a straight line.
Logistic Regression:
Use: Logistic regression is used when the dependent variable is binary (0 or 1). It models the
probability of the dependent variable belonging to a particular class.
Equation: Logistic regression uses the logistic function (sigmoid function) to model probabilities:
P(Y=1) = 1 / (1 + e^(-z)), where z is a linear combination of the independent variables: z = β0 +
β1X1 + β2X2 + ... + βnXn. It transforms this probability into a binary outcome.
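A tiny sketch of that mapping: compute z from the coefficients, squash it with the sigmoid, and threshold at 0.5 to get the binary outcome. The coefficient and feature values below are illustrative assumptions, not fitted values.

import math

def predict_proba(x, beta):
    # z = β0 + β1*x1 + ... + βn*xn, then P(Y=1) = 1 / (1 + e^(-z))
    z = beta[0] + sum(b * xi for b, xi in zip(beta[1:], x))
    return 1.0 / (1.0 + math.exp(-z))

p = predict_proba([2.0, 1.0], beta=[-1.0, 0.8, 0.5])   # z = -1 + 1.6 + 0.5 = 1.1
print(round(p, 3), int(p >= 0.5))                      # ~0.75, classified as 1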
Lasso Regression (L1 Regularization):
Use: Lasso regression is used for feature selection and regularization. It penalizes the absolute values
of the coefficients, which encourages sparsity in the model.
Objective Function: Lasso regression adds an L1 penalty to the linear regression loss function:
Lasso = RSS + λΣ|βi|, where RSS is the residual sum of squares, λ is the regularization strength, and
|βi| represents the absolute values of the coefficients.
Ridge Regression (L2 Regularization):
Use: Ridge regression is used for regularization to prevent overfitting in multiple regression. It
penalizes the square of the coefficients.
Objective Function: Ridge regression adds an L2 penalty to the linear regression loss function:
Ridge = RSS + λΣ(βi^2), where RSS is the residual sum of squares, λ is the regularization strength,
and (βi^2) represents the square of the coefficients.
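Unlike lasso, ridge has a closed-form solution, β = (XᵀX + λI)⁻¹Xᵀy, which makes a compact sketch possible; lasso is usually solved iteratively (e.g., by coordinate descent). The toy data and λ below are assumptions for illustration.

import numpy as np

def ridge_fit(X, y, lam=1.0):
    # closed form: beta = (X^T X + lam * I)^(-1) X^T y
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

# toy data: y is roughly 2*x1 + 3*x2 with a little noise (values assumed)
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
y = np.array([2.1, 2.9, 5.2, 7.0])
print(ridge_fit(X, y, lam=0.1))    # coefficients shrink toward 0 as lam grows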
ADVANTAGES
• Prediction Accuracy: Regression models can provide accurate predictions of continuous variables,
allowing for better decision-making in various domains such as finance, healthcare, and marketing.
• Interpretability: Unlike some complex machine learning algorithms, regression models are often
more interpretable, making it easier to understand the relationship between input variables and the
target variable.
• Scalability: Regression techniques can scale well to large datasets, making them suitable for
analyzing and predicting outcomes from massive amounts of data.
• Flexibility: Regression can accommodate different types of input variables, including numerical,
categorical, and binary variables, making it versatile for various types of data.
• Feature Importance: Regression models can help identify the most influential features or variables
that affect the outcome, providing insights into the underlying factors driving the predictions.
• Assumption Testing: Regression analysis allows for the testing of assumptions such as linearity,
homoscedasticity, and normality, which can help ensure the validity of the model and the reliability
of the predictions.
DISADVANTAGES
• Sensitivity to Outliers: Regression models can be sensitive to outliers in the data, leading to skewed
predictions or biased parameter estimates.
• Assumption of Linearity: Many regression techniques assume a linear relationship between the
input variables and the target variable.
• Overfitting: Complex regression models with a large number of features or high polynomial degrees
are prone to overfitting, where the model learns noise in the training data rather than the underlying
patterns.
• Limited Expressiveness: Linear regression and other traditional regression techniques have limited
expressiveness compared to more complex machine learning algorithms like decision trees, neural
networks, or ensemble methods.
• Difficulty Handling Categorical Data: Traditional regression techniques struggle to handle
categorical variables directly.

LINEAR REGRESSION
A linear regression model can be created by fitting a line through the scattered data points. The line is
of the form:

Y = β0 + β1X + ε

EXAMPLE
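The original worked example did not carry over, so here is a small substitute with assumed data, computing the least-squares slope and intercept by the standard formulas β1 = Σ(xi − x̄)(yi − ȳ) / Σ(xi − x̄)² and β0 = ȳ − β1·x̄.

# assumed data points (x = hours studied, y = exam score, for illustration)
xs = [1, 2, 3, 4, 5]
ys = [2, 4, 5, 4, 6]

x_bar = sum(xs) / len(xs)
y_bar = sum(ys) / len(ys)

# least-squares estimates for Y = β0 + β1·X
b1 = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
      / sum((x - x_bar) ** 2 for x in xs))
b0 = y_bar - b1 * x_bar

print(f"Y = {b0:.2f} + {b1:.2f}*X")          # fitted line: Y = 1.80 + 0.80*X
print(f"prediction at X=6: {b0 + b1 * 6:.2f}")   # 6.60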
