AIML End 1
Linear regression shows the linear relationship between a dependent (y) variable and one or more independent (x) variables, hence the name linear regression. Since linear regression shows a linear relationship, it finds how the value of the dependent variable changes according to the value of the independent variable.
The linear regression model provides a sloped straight line representing the relationship between the variables. Consider the below image:
[Image: Linear Regression in Machine Learning]
Mathematically, we can represent a linear regression as:
y = a0 + a1x + ε
Here,
y = Dependent Variable (Target Variable)
x = Independent Variable (Predictor Variable)
a0 = Intercept of the line (gives an additional degree of freedom)
a1 = Linear regression coefficient (scale factor applied to each input value)
ε = Random error
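The coefficients a0 and a1 above can be estimated by ordinary least squares. A minimal sketch, with made-up data chosen so that y = 1 + 2x exactly (the function name and data are illustrative, not from the text):

```python
# Estimate a0 (intercept) and a1 (slope) for y = a0 + a1*x + ε
# by ordinary least squares on a small illustrative dataset.

def fit_simple_linear(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # a1 = cov(x, y) / var(x)
    a1 = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
         sum((x - mean_x) ** 2 for x in xs)
    a0 = mean_y - a1 * mean_x
    return a0, a1

xs = [1, 2, 3, 4, 5]
ys = [3, 5, 7, 9, 11]          # exactly y = 1 + 2x, so the error term is zero
a0, a1 = fit_simple_linear(xs, ys)
print(a0, a1)  # → 1.0 2.0
```

Because the toy data lie exactly on a line, the fit recovers the intercept 1 and slope 2 with zero residual error.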
Types of Linear Regression
Linear regression can be further divided into two types of the algorithm:
Simple Linear Regression:
If a single independent variable is used to predict the value of a
numerical dependent variable, then such a Linear Regression algorithm
is called Simple Linear Regression.
Multiple Linear Regression:
If more than one independent variable is used to predict the value of a
numerical dependent variable, then such a Linear Regression algorithm
is called Multiple Linear Regression.
Linear Regression Line
A straight line showing the relationship between the dependent and independent variables is called a regression line. A regression line can show two types of relationship: a positive linear relationship (y increases as x increases) and a negative linear relationship (y decreases as x increases).
The working process can be explained in the below steps and diagram:
Step-2: Build the decision trees associated with the selected data
points (Subsets).
Step-3: Choose the number N for decision trees that you want to build.
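The steps above can be sketched in miniature. This is a toy illustration of the bagging workflow only (the names and data are ours, not from the text): draw N bootstrap subsets with replacement, "train" one model per subset, then combine predictions by majority vote. Real random forests grow a full decision tree per subset; here each model simply memorises its subset's majority class.

```python
import random
from collections import Counter

def bootstrap(data, rng):
    # Draw a subset of the same size, sampling with replacement.
    return [rng.choice(data) for _ in data]

def majority_class(labels):
    return Counter(labels).most_common(1)[0][0]

rng = random.Random(0)
labels = ["A", "A", "A", "B", "B"]   # made-up training labels
N = 7                                 # choose the number N of models to build

# Build one (trivial) model per bootstrap subset.
models = [majority_class(bootstrap(labels, rng)) for _ in range(N)]

# Aggregate: the ensemble predicts the majority vote across the N models.
prediction = majority_class(models)
print(prediction)
```

Swapping the trivial "majority class" model for a decision tree trained on each subset gives the actual random forest procedure the steps describe.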
Linear Discriminant Analysis (LDA) in Machine Learning
Linear Discriminant Analysis (LDA) is one of the commonly used
dimensionality reduction techniques in machine learning to solve more
than two-class classification problems. It is also known as Normal
Discriminant Analysis (NDA) or Discriminant Function Analysis (DFA).
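A minimal from-scratch sketch of the two-class (Fisher) special case of LDA, with illustrative data: project samples onto the direction w that best separates the class means relative to the within-class scatter, w ∝ Sw⁻¹(m1 − m2). (The multi-class case generalises this with a between-class scatter matrix and an eigendecomposition.)

```python
import numpy as np

# Two small illustrative classes of 2-D points.
X1 = np.array([[1.0, 2.0], [2.0, 3.0], [3.0, 3.0]])   # class 1 samples
X2 = np.array([[6.0, 5.0], [7.0, 8.0], [8.0, 7.0]])   # class 2 samples

m1, m2 = X1.mean(axis=0), X2.mean(axis=0)

# Within-class scatter: sum of each class's centred outer products.
Sw = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)

# Fisher discriminant direction: w ∝ Sw^{-1} (m1 - m2).
w = np.linalg.solve(Sw, m1 - m2)

# Project both classes onto w; the 1-D projections are well separated.
p1, p2 = X1 @ w, X2 @ w
print(p1.mean(), p2.mean())
```

After projection the two classes occupy disjoint intervals on the line, which is exactly the dimensionality reduction LDA is used for.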
Bagging vs. Boosting

Bagging: Various training data subsets are randomly drawn with replacement from the whole training dataset.
Boosting: Each new subset contains the components that were misclassified by previous models.

Bagging: Attempts to tackle the over-fitting issue.
Boosting: Tries to reduce bias.

Bagging: It is the easiest way of combining predictions that belong to the same type.
Boosting: It is a way of combining predictions that belong to different types.
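The sampling contrast in the first row can be shown in a few lines. This toy sketch (data and weights are illustrative) draws a bagging subset uniformly with replacement, then a boosting-style subset where previously misclassified points get doubled weight:

```python
import random

data = list(range(10))
rng = random.Random(42)

# Bagging: every subset is drawn uniformly with replacement,
# ignoring how earlier models performed.
bagging_subset = rng.choices(data, k=len(data))

# Boosting: suppose points 3 and 7 were misclassified by the last model;
# doubling their weights makes the next subset focus on them.
weights = [2.0 if x in (3, 7) else 1.0 for x in data]
boosting_subset = rng.choices(data, weights=weights, k=len(data))

print(bagging_subset)
print(boosting_subset)
```

This is the mechanical difference behind the table: bagging's subsets are independent of model performance, while boosting's subsets are reweighted toward the hard examples.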