Unit-I Computational Intelligence - CS
For the general model of an artificial neural network described above, the net input can be calculated as follows:
y_in = x1·w1 + x2·w2 + … + xn·wn = Σᵢ xi·wi
Notice that the arrows in the left plot are much longer than their counterparts in the
right plot. Clearly, the line in the right plot is a much better predictive model than the line
in the left plot.
You might be wondering whether you could create a mathematical function—a loss
function—that would aggregate the individual losses in a meaningful fashion.
• Squared loss: a popular loss function
• The linear regression models we'll examine here use a loss function
called squared loss (also known as L2 loss). The squared loss for a single
example is as follows:
= the square of the difference between the label and the prediction
= (observation - prediction(x))²
= (y - y')²
Mean square error (MSE) is the average squared loss per example over
the whole dataset. To calculate MSE, sum up all the squared losses for
individual examples and then divide by the number of examples:
MSE = (1/N) Σ (y - prediction(x))²
where the sum runs over all N labeled examples (x, y) in the dataset.
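The two definitions above can be sketched directly in plain Python; the labels and predictions below are invented purely for illustration.

```python
# Minimal sketch of squared (L2) loss and MSE, assuming lists of
# numeric labels and predictions of equal length.

def squared_loss(y, y_pred):
    """L2 loss for a single example: (y - y')^2."""
    return (y - y_pred) ** 2

def mse(ys, y_preds):
    """Mean squared error: average squared loss over the dataset."""
    assert len(ys) == len(y_preds)
    total = sum(squared_loss(y, p) for y, p in zip(ys, y_preds))
    return total / len(ys)

labels      = [3.0, 5.0, 7.0]   # hypothetical observations
predictions = [2.5, 5.0, 8.0]   # hypothetical model outputs
print(mse(labels, predictions))  # (0.25 + 0.0 + 1.0) / 3
```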
Parametric Models, Nonparametric Models
Such models are called parametric machine learning models. A parametric
model assumes a fixed functional form, such as the linear model shown
above, so fitting it amounts to determining its parameters. The most
common approach to fitting the linear model is the ordinary least
squares (OLS) method; however, least squares is only one of many
possible ways to fit a linear model. Examples of parametric models
include linear algorithms such as linear regression, Lasso regression,
and, to an extent, generalized additive models (GAMs).
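As a concrete illustration of fitting a parametric model, here is a hedged sketch of OLS for a simple line y ≈ b0 + b1·x using numpy's least-squares solver; the data points are synthetic values chosen for this example.

```python
# OLS sketch: estimate intercept b0 and slope b1 for y = b0 + b1 * x.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])   # roughly y = 1 + 2x with noise

# Design matrix: a column of ones (for the intercept) next to x.
X = np.column_stack([np.ones_like(x), x])
(b0, b1), *_ = np.linalg.lstsq(X, y, rcond=None)
print(b0, b1)   # the two parameters that define the fitted line
```

Fitting the model reduces to estimating just these two numbers, which is what makes parametric models comparatively easy to fit.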
• Non-parametric models do not make explicit assumptions about
the functional form (such as the linear model assumed by
parametric models). Instead, a non-parametric model can be seen
as a function approximation that gets as close to the data
points as possible. The advantage over parametric approaches is
that, by avoiding the assumption of a particular functional
form, non-parametric models have the potential to accurately
fit a wider range of possible shapes of the true function. Any
parametric approach carries the risk that the assumed
functional form (e.g., a linear model) is very different from
the true function, in which case the resulting model will not
fit the data well. Examples of non-parametric models include
fully non-linear algorithms such as bagging, boosting, support
vector machines with non-linear kernels, and neural networks
(deep learning).
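The contrast can be seen on a toy dataset: below is a hedged sketch that fits a parametric straight line and a non-parametric k-nearest-neighbour average to the same non-linear data. The dataset, seed, and choice of k = 5 are all invented for illustration.

```python
# Parametric (line) vs non-parametric (k-NN averaging) on 1-D data
# whose true function is a sine wave, i.e., genuinely non-linear.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 40)
y = np.sin(x) + rng.normal(0, 0.1, size=x.shape)

# Parametric: assume y = b0 + b1*x and estimate two parameters.
X = np.column_stack([np.ones_like(x), x])
b0, b1 = np.linalg.lstsq(X, y, rcond=None)[0]
linear_pred = b0 + b1 * x

# Non-parametric: no assumed form; average the k nearest neighbours.
def knn_predict(x_query, x_train, y_train, k=5):
    idx = np.argsort(np.abs(x_train - x_query))[:k]
    return y_train[idx].mean()

knn_pred = np.array([knn_predict(q, x, y) for q in x])

mse_linear = np.mean((y - linear_pred) ** 2)
mse_knn = np.mean((y - knn_pred) ** 2)
print(mse_linear, mse_knn)   # k-NN tracks the sine shape far more closely
```

Because the true function is not linear, the parametric line cannot fit it well no matter how its two parameters are chosen, while the k-NN approximation follows the data's shape.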
• The following are the main differences between parametric and
non-parametric machine learning models.
• In parametric models, an assumption about the functional form
is made, typically a linear model. In non-parametric models, no
assumption about the functional form is made.
• Parametric models are much easier to fit than non-parametric
models, because parametric machine learning models only require
the estimation of a fixed set of parameters once the model form
(e.g., linear) has been chosen. In a non-parametric model, one
needs to estimate an arbitrary function, which is a much more
difficult task.
• Parametric models often do not match the unknown function we are
trying to estimate. Their performance is then comparatively lower
than that of non-parametric models, and the estimates produced by
parametric models can be far from the truth.
• Parametric models are interpretable, unlike non-parametric
models. This essentially means that one can choose parametric
models when the goal is inference, and non-parametric models
when the goal is to make predictions with higher accuracy and
interpretability or inference is not the key requirement.
Multilayer Networks: Feed Forward network, Feedback
network
• Feed-forward network
• Feed-forward neural networks allow signals to travel one way only,
from input to output. There is no feedback (no loops), i.e., the
output of any layer does not affect that same layer. Feed-forward
networks tend to be straightforward networks that associate inputs
with outputs. They are extensively used in pattern recognition. This
type of organization is also referred to as bottom-up or top-down.
• The weighted outputs of these units are fed simultaneously to a
second layer of neuron-like units known as the hidden layer. The
weighted outputs of the hidden layer can, in turn, be input to
another hidden layer, and so on. The number of hidden layers is
arbitrary; usually, one is used.
• The weighted outputs of the last hidden layer are inputs to the
units making up the output layer, which emits the network's
prediction for the given samples. The units in the hidden layers and
the output layer are sometimes referred to as neurodes, because of
their symbolic biological basis, or as output units. Given enough
hidden units, multilayer feed-forward networks can closely
approximate any function.
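The layer-by-layer flow described above can be sketched as a single forward pass; the network shape (2 inputs, 3 hidden units, 1 output) and all weight values below are arbitrary illustrative choices, not trained parameters.

```python
# One forward pass through a small feed-forward network with
# sigmoid activations: input -> hidden layer -> output layer.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.0])                 # input layer (2 units)

W1 = np.array([[ 0.2, -0.4],              # hidden-layer weights (3 x 2)
               [ 0.7,  0.1],
               [-0.5,  0.3]])
b1 = np.array([0.1, -0.2, 0.05])

W2 = np.array([[0.6, -0.3, 0.8]])         # output-layer weights (1 x 3)
b2 = np.array([0.2])

hidden = sigmoid(W1 @ x + b1)             # weighted inputs -> hidden layer
output = sigmoid(W2 @ hidden + b2)        # hidden outputs -> prediction
print(output)
```

Note that the signal moves strictly forward: no layer's output is ever fed back into itself or an earlier layer.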
• Feedback Networks
• Feedback networks can have signals travelling in both directions
by introducing loops into the network. Feedback networks are very
dynamic and can get extremely complex. Because they are dynamic,
their states change continuously until they reach an equilibrium
point.
• The network remains at the equilibrium point until the input
changes and a new equilibrium needs to be found. Feedback
architectures are also referred to as interactive or recurrent,
although the latter term can also indicate feedback connections in
single-layer organizations.
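The "states change until they reach an equilibrium" behaviour can be sketched with a tiny two-unit recurrent network; the weight matrix and external input below are arbitrary values chosen (as a contraction) so that a fixed point exists.

```python
# Feedback sketch: the state is fed back through the same weights
# repeatedly until it stops changing, i.e., an equilibrium is reached.
import numpy as np

W = np.array([[0.0, 0.4],
              [0.3, 0.0]])      # feedback connections between two units
b = np.array([0.5, -0.2])       # constant external input

state = np.zeros(2)
for step in range(1000):
    new_state = np.tanh(W @ state + b)           # output feeds back in
    if np.max(np.abs(new_state - state)) < 1e-9:  # equilibrium reached
        break
    state = new_state

print(step, state)   # iterations needed and the equilibrium state
```

If the external input b were changed, the loop would have to run again to find the new equilibrium, mirroring the behaviour described above.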
• When a large database is involved in increasing the accuracy of deep
neural network algorithms, a model of data production and
artificial-intelligence learning for behavioral research is
essential. In general, clinical data are used when the user's disease
information is included. If the clinical data are inaccurate, the
resulting predictions will also be incorrect.
• Moreover, if information about the user's behavior and activity
beyond the clinical data is not reflected, time-series data
describing the user's situation, which changes over time, must be
used as an input value to predict the results accurately.
• The feedback model for the deep neural network algorithm includes
an original feedback model and a secondary feedback model that
restates the result.