Quiz 4 5 6
Ans: The role of the Activation Function is to derive output from a set of input values fed to a node. This means that it decides whether the neuron's input is important to the network's output, i.e. whether the neuron should be activated.
• Getting Dataset
• Importing Libraries
• Importing Dataset
• Feature Scaling
A decision tree is a supervised learning algorithm that is used for classification and regression modeling. Regression is a method used for predictive modeling, so these trees are used either to classify data or to predict what will come next.
4. What is Feature Scaling?
Ans: Feature Scaling is a technique to standardize the independent features present in the data to a fixed range. It is performed during data pre-processing to handle highly varying magnitudes or values.
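A minimal sketch of the two most common scaling schemes in plain Python (the function names and sample values are illustrative, not from any particular library):

```python
def min_max_scale(values):
    """Rescale values into the fixed range [0, 1] (min-max normalization)."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def standardize(values):
    """Rescale values to zero mean and unit variance (standardization)."""
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [(v - mean) / std for v in values]

# A feature with a large magnitude (e.g. salary) would otherwise dominate
# a small-magnitude feature (e.g. age) in distance-based models
salary = [30000.0, 50000.0, 70000.0, 90000.0]
print(min_max_scale(salary))   # all values now lie in [0, 1]
print(standardize(salary))     # mean 0, variance 1
```

In practice a library scaler (e.g. scikit-learn's `StandardScaler`) is fitted on the training set only and then applied to the test set, so no test information leaks into the scaling parameters.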
Ans: The sigmoid function is used for binary classification; it maps a single input score to a probability between 0 and 1. The SoftMax function is used for multi-class classification; it outputs one probability per class, and these probabilities sum to 1.
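The difference can be sketched in plain Python (the helper names are my own):

```python
import math

def sigmoid(z):
    """Squash a single score into a probability in (0, 1) -- binary case."""
    return 1.0 / (1.0 + math.exp(-z))

def softmax(scores):
    """Turn a list of class scores into probabilities that sum to 1."""
    m = max(scores)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

print(sigmoid(0.0))              # 0.5 -- positive and negative equally likely
probs = softmax([2.0, 1.0, 0.1])
print(probs)                     # largest score gets the largest probability
print(sum(probs))                # sums to 1 (up to float rounding)
```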
Ans: The simplest way to split the modeling dataset into training and testing sets is to assign two-thirds of the data points to the former and the remaining one-third to the latter. We then train the model using the training set and apply it to the test set. In this way, we can evaluate the performance of our model on data it has not seen.
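The two-thirds / one-third split described above can be sketched in plain Python (the function name and seed are illustrative):

```python
import random

def train_test_split(data, seed=42):
    """Shuffle the data, then give two-thirds of the points to the
    training set and the remaining one-third to the test set."""
    rng = random.Random(seed)       # fixed seed makes the split reproducible
    shuffled = data[:]
    rng.shuffle(shuffled)
    cut = (len(shuffled) * 2) // 3  # integer two-thirds boundary
    return shuffled[:cut], shuffled[cut:]

data = list(range(12))
train, test = train_test_split(data)
print(len(train), len(test))   # 8 4
```

Shuffling before splitting matters: if the data is ordered (e.g. by class), a plain head/tail split would give training and test sets with different distributions.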
Ans:
Ans: For a better predictive model, a categorical variable can be treated as a continuous variable only when the variable is ordinal in nature.
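A small illustration of why the ordinal condition matters (the category names and codes are my own example):

```python
# An ordinal variable has a natural order ("low" < "medium" < "high"),
# so mapping its categories to numbers preserves meaning; doing the same
# to a nominal variable (e.g. colors) would invent a fake ordering.
ORDER = {"low": 0, "medium": 1, "high": 2}

def encode_ordinal(values, order=ORDER):
    """Replace each ordinal category with its integer rank."""
    return [order[v] for v in values]

print(encode_ordinal(["low", "high", "medium"]))   # [0, 2, 1]
```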
Quiz #5
Ans: Real-world data generally contains noise and missing values, and may be in an unusable format that cannot be fed directly to machine learning models. Data preprocessing is the required task of cleaning the data and making it suitable for a machine learning model, which also increases the model's accuracy and efficiency.
• Importing libraries
• Importing datasets
• Feature scaling
In many cases a dataset contains a huge number of input features, which makes the predictive modeling task more complicated. Because it is very difficult to visualize or make predictions for a training dataset with a high number of features, dimensionality reduction techniques are required in such cases.
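As a minimal sketch of one such technique, here is a 2-D version of principal component analysis in plain Python: it finds the direction of greatest variance by power iteration on the covariance matrix and projects the points onto it (the function name and data are my own illustration):

```python
def project_to_1d(points, iters=50):
    """Reduce 2-D points to 1-D by projecting them onto the direction of
    greatest variance (the first principal component)."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    centered = [(x - mx, y - my) for x, y in points]
    # entries of the 2x2 covariance matrix
    cxx = sum(x * x for x, _ in centered) / n
    cxy = sum(x * y for x, y in centered) / n
    cyy = sum(y * y for _, y in centered) / n
    # power iteration: repeated multiplication converges to the
    # dominant eigenvector, i.e. the principal axis
    vx, vy = 1.0, 1.0
    for _ in range(iters):
        vx, vy = cxx * vx + cxy * vy, cxy * vx + cyy * vy
        norm = (vx * vx + vy * vy) ** 0.5
        vx, vy = vx / norm, vy / norm
    # 1-D coordinate of each point along the principal axis
    return [x * vx + y * vy for x, y in centered]

# points lying on the line y = 2x collapse to 1-D with nothing lost
coords = project_to_1d([(0, 0), (1, 2), (2, 4), (3, 6)])
print(coords)
```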
4. Why use Unsupervised Learning?
Ans: Below are some main reasons which describe the importance of
Unsupervised Learning:
• Unsupervised learning is helpful for finding useful insights from the data.
5. What is Clustering?
Ans: Clustering is the process of grouping a set of objects into a number of groups.
Objects should be similar to one another within the same cluster and dissimilar to
those in other clusters. A few types of clustering are:
• Hierarchical clustering
• K-means clustering
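K-means, the second type above, can be sketched in plain Python: assign each point to its nearest centroid, then move each centroid to the mean of its assigned points, and repeat (the sample points are my own illustration):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Group 2-D points into k clusters by alternating assignment
    and centroid-update steps."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)         # pick k points as starting centroids
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign p to the nearest centroid (squared Euclidean distance)
            i = min(range(k), key=lambda j: (p[0] - centroids[j][0]) ** 2
                                            + (p[1] - centroids[j][1]) ** 2)
            clusters[i].append(p)
        for j, c in enumerate(clusters):
            if c:  # keep the old centroid if a cluster went empty
                centroids[j] = (sum(p[0] for p in c) / len(c),
                                sum(p[1] for p in c) / len(c))
    return centroids, clusters

# two well-separated groups of three points each
points = [(0, 0), (0, 1), (1, 0), (9, 9), (9, 10), (10, 9)]
centroids, clusters = kmeans(points, k=2)
print(sorted(centroids))
```

Points within one cluster end up close to their shared centroid, matching the "similar within, dissimilar between" idea in the answer above.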
Ans: The Naïve Bayes algorithm's name is made up of the two words Naïve and Bayes, which can be described as:
• Naïve: it assumes that the features are independent of one another.
• Bayes: it is based on Bayes' theorem for updating probabilities from evidence.
Ans: Before looking at the working principle of gradient descent, we should know some basic concepts about the slope of a line from linear regression. The equation for simple linear regression is given as:
Y = mX + c
Where 'm' represents the slope of the line, and 'c' represents the intercept on the y-axis.
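Gradient descent fits 'm' and 'c' by repeatedly stepping them against the gradient of the mean squared error. A sketch in plain Python (the learning rate and toy data are my own illustration):

```python
def gradient_descent(xs, ys, lr=0.01, epochs=5000):
    """Fit Y = m*X + c by gradient descent on the mean squared error."""
    m, c = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # residuals with the current m and c
        errs = [(m * x + c) - y for x, y in zip(xs, ys)]
        # partial derivatives of MSE = (1/n) * sum(err^2)
        dm = (2 / n) * sum(e * x for e, x in zip(errs, xs))
        dc = (2 / n) * sum(errs)
        # step downhill
        m -= lr * dm
        c -= lr * dc
    return m, c

# points lying exactly on Y = 2X + 1
m, c = gradient_descent([0, 1, 2, 3], [1, 3, 5, 7])
print(round(m, 3), round(c, 3))   # close to 2.0 and 1.0
```

With a learning rate that is too large the updates overshoot and diverge; too small and convergence is needlessly slow.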
Ans: Root Node: The root node is where the decision tree starts. It represents the entire dataset, which further gets divided into two or more homogeneous sets.
Leaf Node: Leaf nodes are the final output nodes; the tree cannot be segregated further after a leaf node is reached.
Splitting: Splitting is the process of dividing the decision node/root node into
subnodes according to the given conditions.
Pruning: Pruning is the process of removing the unwanted branches from the
tree.
Parent/Child node: The root node of the tree is called the parent node, and other
nodes are called the child nodes.
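One common way to make "splitting according to the given conditions" concrete is Gini impurity, a criterion many decision-tree implementations use to choose the best split (a sketch in plain Python; the toy labels are my own):

```python
def gini(labels):
    """Gini impurity: chance that two randomly drawn labels differ."""
    n = len(labels)
    counts = {}
    for lab in labels:
        counts[lab] = counts.get(lab, 0) + 1
    return 1.0 - sum((cnt / n) ** 2 for cnt in counts.values())

def split_impurity(left, right):
    """Weighted impurity of splitting a parent node into two child nodes."""
    n = len(left) + len(right)
    return (len(left) / n) * gini(left) + (len(right) / n) * gini(right)

print(gini(["yes", "yes"]))    # 0.0 -- a pure leaf node
print(gini(["yes", "no"]))     # 0.5 -- a maximally mixed node
# the split that best separates the classes has the lowest weighted impurity
print(split_impurity(["yes", "yes"], ["no", "no"]))   # 0.0
```

The tree grows by picking, at each decision node, the condition whose split has the lowest weighted impurity; leaves are nodes that are pure (or too small to split further).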
Ans: There are mainly two types of Feature Selection techniques, which are:
• Supervised Feature Selection technique
Supervised Feature Selection techniques consider the target variable and can be used for labelled datasets.
• Unsupervised Feature Selection technique
Unsupervised Feature Selection techniques ignore the target variable and can be used for unlabelled datasets.
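A simple example of the unsupervised kind is a variance threshold: columns whose values barely vary carry little information, and no target variable is needed to see that (a sketch in plain Python; the function names and sample rows are my own):

```python
def variance(col):
    """Population variance of one column of values."""
    n = len(col)
    mean = sum(col) / n
    return sum((v - mean) ** 2 for v in col) / n

def select_by_variance(rows, threshold=0.0):
    """Unsupervised feature selection: keep the indexes of columns whose
    variance exceeds the threshold (the target variable is ignored)."""
    n_cols = len(rows[0])
    cols = [[row[j] for row in rows] for j in range(n_cols)]
    return [j for j, col in enumerate(cols) if variance(col) > threshold]

# column 1 is constant, so it carries no information and is dropped
rows = [[1.0, 5.0, 0.2],
        [2.0, 5.0, 0.9],
        [3.0, 5.0, 0.4]]
print(select_by_variance(rows))   # [0, 2]
```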
Quiz #6
• Decision Trees
• Probabilistic Networks
• Neural Networks
Ans: The process of choosing among diverse mathematical models that are used to describe the same data is known as Model Selection. Model selection is applied in the fields of statistics, data mining, and machine learning.
Ans:
6. What is SVM in machine learning? What are the classification methods that SVM can handle?
Ans: SVM stands for Support Vector Machine. SVMs are supervised learning models with associated learning algorithms that analyze the data used for classification and regression analysis. SVM can handle both binary and multi-class classification (the latter via one-vs-one or one-vs-rest schemes), and both linear and non-linear classification by using kernel functions.
Ans: Parametric models have a limited number of parameters; to predict new data, you only need to know the parameters of the model.
Ans:
1. Regression
Regression algorithms are used when the output variable is a real or continuous value.
• Linear Regression
• Regression Trees
• Non-Linear Regression
• Polynomial Regression
2. Classification
Classification algorithms are used when the output variable is categorical, which means there are two or more classes such as Yes-No, Male-Female, True-False, etc.
• Spam Filtering
• Random Forest
• Decision Trees
• Logistic Regression
Ans: With the help of supervised learning, the model can predict the output on the basis of prior experience.
In supervised learning, we have an exact idea about the classes of objects.