
Activation Functions in Neural Networks

An artificial neural network (ANN) is an information-processing paradigm inspired by the brain. ANNs learn by example, much as people do. Through a learning process, an ANN is configured for a specific task, such as pattern recognition or data classification. Learning adjusts the synaptic connections that exist between the neurons.
One of the decisions you make when designing a neural network is which activation function to use in the hidden layers and at the output layer of the network. This article discusses a few of the alternatives.
The nerve impulse in neurobiology serves as the model for activation functions in computing. A neuron "fires" and sends a signal to neighbouring neurons in a chain reaction when the voltage difference between its interior and exterior exceeds a threshold value known as the action potential. The resulting sequence of activations, known as a "spike train," enables motor neurons to carry commands from the brain to the limbs and sensory neurons to carry sensations from the fingertips to the brain.

Neural Network Components

Layers are the vertically stacked components that make up a neural network. In a typical network diagram, each dotted box corresponds to one layer. A neural network has three different types of layers.

Input Layer

The input layer comes first. This layer accepts the data and forwards it to the rest of the network; it is the layer through which features are fed in. It brings data from the outside world into the network. No computation is performed here; the nodes simply pass the information (features) on to the hidden units.

Hidden Layer

The nodes in this layer are not exposed to the outside world; they are part of the abstraction that any neural network provides. The hidden layer processes, in some way, the features passed in through the input layer and sends the results on to the output layer. The hidden layer is the second type of layer. A neural network has one or more hidden layers; in the example above there is one. In practice, hidden layers are what give neural networks their exceptional performance and complexity. They carry out several tasks at once, including data transformation and automatic feature creation.

Output Layer

This layer brings the knowledge that the network has acquired out to the external world. The output layer is the final type of layer; it contains the answer to the problem. When we pass raw images to the input layer, we receive the result from the output layer.
The activation functions most widely used in data science are the rectified linear unit (ReLU) function and the family of sigmoid functions, which includes the logistic function, the hyperbolic tangent and the arctangent function.

Activation Function

Definition

In artificial neural networks, an activation function is one that outputs a small value for small inputs and a larger value once its input exceeds a threshold. If the input is large enough, the activation function "fires"; otherwise, nothing happens. An activation function, then, is a gate that checks whether an incoming value is higher than a threshold value.
Activation functions are useful because they introduce non-linearity into neural networks, allowing the networks to learn powerful operations. If the activation functions were removed, a feedforward neural network could be collapsed into a simple linear function or matrix transformation of its input.
The activation function determines whether a neuron should be activated by computing a weighted sum of its inputs and adding a bias to it. Its purpose is to introduce non-linearity into a neuron's output.
Explanation: As we know, neurons in a neural network operate according to their weights, biases and activation functions. The weights and biases of the neurons in a neural network are updated on the basis of the error at the output; this process is known as back-propagation. Activation functions make back-propagation possible because they supply the gradients, along with the error, needed to update the weights and biases.

Need for Non-linear Activation Functions

Without an activation function, a neural network is just an interconnected linear regression model. The activation function transforms the input non-linearly, allowing the network to learn and perform more complex tasks.
The activation function is simply the operation used to obtain a node's output. It also goes by the name transfer function.
The composition of two linear functions is itself a linear function, so no matter how many hidden layers we add to a neural network, they will all behave in the same way: the whole network still computes a linear function. A neuron cannot learn anything beyond a linear model this way. With a non-linear activation function, it can learn from the error and model more complex relationships.
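To make this point concrete, here is a minimal NumPy sketch (the weight matrices and input vector are arbitrary values chosen only for illustration) showing that two stacked linear layers collapse into a single linear layer:

import numpy as np

# Two "hidden layers" with no activation function, i.e. purely linear layers
W1 = np.array([[1.0, 2.0], [3.0, 4.0]])   # illustrative weights of layer 1
W2 = np.array([[0.5, -1.0], [2.0, 1.5]])  # illustrative weights of layer 2

x = np.array([1.0, -2.0])                 # an arbitrary input vector

# Passing the input through both layers one after the other...
two_layer_output = W2 @ (W1 @ x)

# ...is identical to using a single layer whose weight matrix is W2 @ W1
collapsed_output = (W2 @ W1) @ x

print(np.allclose(two_layer_output, collapsed_output))  # True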
The two main categories of activation functions are:

o Linear Activation Function


o Non-linear Activation Functions

Linear Activation Function

As the name suggests, the function is linear, so no range is imposed to restrict the function's output.

A linear activation function does not help with the complexity or the various parameters of the usual data that is fed to a neural network.
Non-linear Activation Function

Non-linear activation functions are the most widely used. They make it easier for the model to generalize and adapt to a variety of data and to distinguish between the outputs.

Types of Activation Functions

o Linear Function

Equation: A linear function has the equation y = x, the same as the equation of a straight line.
No matter how many layers we have, if they are all linear in nature, the final activation of the last layer is nothing more than a linear function of the input to the first layer. Range: -inf to +inf.
Uses: The linear activation function is applied only at the output layer.
If we differentiate a linear function to bring in non-linearity, the result no longer depends on the input x; the gradient becomes a constant, so the algorithm will not exhibit any new behaviour.
A good example of a regression problem is predicting the price of a house. We can use a linear activation at the output layer because the price of a house may take any large or small value. Even in this case, the hidden layers of the neural network must use some non-linear activation function.
o Sigmoid Function

It is a function that is plotted as an "S"-shaped curve.
Equation: A = 1 / (1 + e^-x)
Nature: non-linear. Notice that X values between roughly -2 and 2 give very steep Y values; in other words, small changes in x in this region cause significant changes in the value of Y. The range is 0 to 1.
Uses: The sigmoid function is typically used in the output nodes of a classification problem, where the result can only be either 0 or 1. Since the value of the sigmoid function ranges only from 0 to 1, the result can easily be predicted to be 1 if the value is greater than 0.5 and 0 if it is not.
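As a quick illustration, here is a minimal Python sketch of the sigmoid function written directly from the formula above (the helper name sigmoid is just a label chosen for this example):

from math import exp

def sigmoid(x):
    # Logistic sigmoid: squashes any real number into the range (0, 1)
    return 1 / (1 + exp(-x))

print(sigmoid(-4))  # close to 0 (about 0.018)
print(sigmoid(0))   # exactly 0.5
print(sigmoid(4))   # close to 1 (about 0.982)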
o Tanh Function

The tangent hyperbolic (tanh) function is an activation that usually performs better than the sigmoid function. It is in fact a mathematically shifted and scaled version of the sigmoid function; the two are related and can be derived from one another.
Range of values: -1 to +1. Nature: non-linear.
Uses: Since its output values range from -1 to 1, the mean of the activations in a hidden layer of a neural network will be 0 or very close to it. This helps centre the data by bringing the mean close to 0, which makes learning much easier for the following layer.
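A minimal sketch of tanh in Python, using its relationship to the sigmoid shown above (tanh(x) = 2·sigmoid(2x) − 1); the standard library's math.tanh gives the same values:

from math import exp, tanh

def sigmoid(x):
    return 1 / (1 + exp(-x))

def tanh_via_sigmoid(x):
    # tanh is a scaled and shifted sigmoid: tanh(x) = 2 * sigmoid(2x) - 1
    return 2 * sigmoid(2 * x) - 1

print(tanh_via_sigmoid(1.0))  # about 0.7616
print(tanh(1.0))              # same value from the standard library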
o ReLU (Rectified Linear Unit) Activation Function

Equation: A(x) = max(0, x). It outputs x if x is positive and 0 otherwise.
Value range: [0, inf).
Nature: non-linear, which means we can easily backpropagate the errors and have the ReLU function activate multiple layers of neurons.
Uses: Because ReLU involves simpler mathematical operations than tanh and sigmoid, it requires less computation time. The network is sparse and computationally efficient, since only a limited number of neurons are activated at any given time.
Simply put, ReLU learns considerably faster than the sigmoid and tanh functions.

Currently, ReLU is the most widely used activation function, employed in almost all convolutional neural networks and deep learning systems.
Both the function and its derivative are monotonic.
The problem, however, is that all negative values immediately become zero, which reduces the model's ability to fit or learn from the data properly. Any negative input to a ReLU activation function turns to zero immediately, and this affects the resulting model because the negative values are not mapped appropriately.
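A minimal Python sketch of ReLU based on the definition above (the helper name relu is just an illustrative choice):

def relu(x):
    # ReLU passes positive inputs through unchanged and clamps negative inputs to zero
    return max(0.0, x)

print(relu(3.5))   # 3.5
print(relu(-2.0))  # 0.0 -- every negative input collapses to zero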

o Softmax Function

The softmax function can be thought of as a generalization of the sigmoid function, and it comes in handy when dealing with multi-class classification problems.
It is used frequently when handling several classes; softmax is typically present in the output nodes of image classification problems. The softmax function divides each output by the sum of all the outputs, squeezing the output for each class into the range 0 to 1.
The softmax function is best applied at the output unit of the classifier, where we are actually trying to obtain the probabilities that determine the class of each input.

When working on machine learning problems, and deep learning tasks in particular, the softmax activation function is a familiar name. It is usually placed as the last layer in a deep learning model.

It is often used as the last activation function of a neural network to normalize the output of the network into a probability distribution over the predicted output classes.

Softmax is an activation function that scales numbers (logits) into probabilities. The output of softmax is a vector (say p) containing the probability of each possible outcome. The probabilities in vector p sum to one over all possible outcomes or classes.

Mathematically, softmax is defined as

S(v)_i = exp(v_i) / Σ_j exp(v_j)

where v is the vector of logits and i indexes its components.

Example

Consider a CNN model which aims at classifying an image as either a dog, cat, horse or cheetah (4 possible outcomes/classes). The last (fully-connected) layer of the CNN outputs a vector of logits, L, that is passed through a softmax layer that transforms the logits into probabilities, P. These probabilities are the model's predictions for each of the 4 classes.

Let us calculate the probability generated by the first logit after softmax is applied, using the example logits [3.2, 1.3, 0.2, 0.8] from the code below:

P(dog) = exp(3.2) / (exp(3.2) + exp(1.3) + exp(0.2) + exp(0.8))
       ≈ 24.53 / (24.53 + 3.67 + 1.22 + 2.23)
       ≈ 0.775

You can calculate the other values in the same manner.


In Python, we can implement softmax as follows

from math import exp

def softmax(input_vector):
    # Calculate the exponent of each element in the input vector
    exponents = [exp(j) for j in input_vector]
    # Divide the exponent of each value by the sum of the
    # exponents and round off to 3 decimal places
    p = [round(exp(i) / sum(exponents), 3) for i in input_vector]
    return p

print(softmax([3.2, 1.3, 0.2, 0.8]))

Output:

[0.775, 0.116, 0.039, 0.07]

Notation: We can represent all the logits as a vector, v, and apply the activation function, S, to this vector to produce the probabilities vector, p, writing the operation as p = S(v).

Note that the labels: dog, cat, horse and cheetah are in string format.
We need to define a way in which we represent these values as
numerical values.

Categorical Data into Numerical Data


The truth labels are categorical data: any particular image can be categorized into one of these groups: dog, cat, horse or cheetah. The computer, however, does not understand this kind of data, so we need to convert it into numerical data. There are two ways to do so:

1. Integer encoding

2. One-hot encoding

Integer Encoding (Also called Label Encoding)

In this kind of encoding, labels are assigned unique integer values. For example, in our case we will have:

0 for “dog”, 1 for “cat”, 2 for “horse” and 3 for “cheetah”.

When to use integer encoding: It is used when the labels are ordinal in nature, that is, labels with some order. For example, consider a classification problem where we want to classify a service as either poor, neutral or good; we can then encode these classes as follows

0 for “poor”, 1 for “neutral” and 2 for “good”.

Clearly, the labels have some order, and the encoding assigns weights to the labels accordingly.

Conversely, we refrain from using integer encoding when the labels are nominal (names without a specific ordering). For example, consider the flower classification problem where we have 3 classes: Iris setosa, Iris versicolor and Iris virginica,

0 for “Iris setosa”, 1 for “Iris versicolor” and 2 for “Iris virginica”.

The model may take a natural ordering of the labels (2>1>0) and give
more weight to one class over another when in fact these are just
labels with no specific ordering implied.

One-hot encoding
For categorical variables where no such ordinal relationship exists,
the integer encoding is not enough. One-hot encoding is preferred.

In one-hot encoding, each label is represented by a binary vector (of 1s and 0s): for a given class, the vector has a 1 in the position corresponding to that class and 0s elsewhere. For example, in our case we will have the following labels for our 4 classes

[1,0,0,0] for “dog”, [0,1,0,0] for “cat”, [0,0,1,0] for “horse” and [0,0,0,1] for “cheetah”.
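A minimal plain-Python sketch of both encodings for these labels (the helper names are made up for this example; library utilities such as scikit-learn's LabelEncoder or Keras' to_categorical do the same job):

classes = ["dog", "cat", "horse", "cheetah"]

def integer_encode(label):
    # Integer (label) encoding: each class name maps to its index in the class list
    return classes.index(label)

def one_hot_encode(label):
    # One-hot encoding: a vector with a 1 at the class position and 0s elsewhere
    vector = [0] * len(classes)
    vector[classes.index(label)] = 1
    return vector

print(integer_encode("horse"))  # 2
print(one_hot_encode("horse"))  # [0, 0, 1, 0]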

Remark: Although we have answered the question of “when to use which type of encoding system?”, there is a way of actually using either kind of encoding method. For TensorFlow and Keras, this depends on how you define your loss function. We will get to this later.

Recall: The denominator of the softmax function is a normalization term. It ensures that the output of the function is a value between 0 and 1.

But one may ask, why not use standard normalization, that is, take each logit and divide it by the sum of all the logits to get the probabilities? Why take the exponents? Here are two reasons.

● Softmax normalization reacts differently to small and large variations/changes, whereas standard normalization does not differentiate the stimulus by intensity as long as the proportions are the same, for example,
# Softmax normalization

softmax([2,4]) = [0.119, 0.881]

softmax([4,8]) = [0.018, 0.982]

# Standard normalization

def std_norm(input_vector):
    # Divide each value by the sum of all values and round to 3 decimal places
    p = [round(i / sum(input_vector), 3) for i in input_vector]
    return p

std_norm([2,4]) = [0.333, 0.667]

std_norm([4,8]) = [0.333, 0.667]

Notice the difference? For standard normalization, a vector and the same vector scaled by a scalar yield the same output. In the case above, the first vector [2,4] was multiplied by 2 to yield [4,8], and both of them yield the same output. By the same reasoning, the following pair will yield the same output: {[8,24], [2.4, 7.2]} for a scale factor of 0.3. In fact, any vector scaled by a factor yields the same output as the original vector.

● Another problem arises when there are negative values in the logits. In that case, you will end up with negative probabilities in the output. Softmax is not affected by negative values, because the exponent of any value (positive or negative) is always a positive value.
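To see this concretely, here is a small check reusing the softmax and std_norm functions defined above, with one negative logit:

print(std_norm([-1, 2]))  # [-1.0, 2.0]  -- a negative "probability", not a valid distribution
print(softmax([-1, 2]))   # [0.047, 0.953] -- still a valid probability distribution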

The usual rule of thumb, if we are really unsure which activation function to apply, is to use ReLU in the hidden layers, which is what is employed in the majority of cases these days.
The sigmoid function is a very logical choice for the output layer if you are doing binary classification. If the output involves multiple classes, softmax can be quite helpful in predicting the probability of each class.
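As one illustration of this rule of thumb, here is a minimal Keras sketch of a classifier (the layer sizes, the 32-feature input and the 4-class output are arbitrary choices for this example, echoing the dog/cat/horse/cheetah problem above):

import tensorflow as tf

model = tf.keras.Sequential([
    # Hidden layers: ReLU is the usual default choice
    tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
    tf.keras.layers.Dense(64, activation="relu"),
    # Output layer: softmax turns the 4 logits into class probabilities
    tf.keras.layers.Dense(4, activation="softmax"),
])

# With one-hot encoded labels, categorical_crossentropy is the matching loss;
# with integer-encoded labels, sparse_categorical_crossentropy would be used instead.
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()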
