Hyperbolic Tangent Activation Function Integrated Circuit Implementation


Project Mentor:
Dr. Sanjai Singh, Associate Professor, Department of ECE, IIIT Allahabad

Group Members:
Lakshya Kumar Meena (IEC2017031)
Vishvajeet Vasantrao Dhawale (IEC2017036)
Satyam Singhal (IEC2017055)

Overview:
• Activation function

• TanH / Hyperbolic Tangent

• Flow chart

• Code

• Result

• Future work

• Reference
What is a Neural Network Activation Function?

• Activation functions are mathematical equations that determine the output of a
neural network.
• The function is attached to each neuron in the network and determines whether it
should be activated ("fired") or not, based on whether the neuron's input is
relevant for the model's prediction.
• Activation functions also help normalize the output of each neuron to a range
between 0 and 1 or between -1 and 1.
• They must be computationally efficient because they are calculated across thousands
or even millions of neurons for each data sample.
• Modern neural networks use a technique called backpropagation to train the model,
which places an increased computational strain on the activation function and its
derivative function.
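
Because backpropagation repeatedly evaluates the derivative of the activation function, a cheap derivative is an advantage. For tanh, the function implemented in this project, the derivative can be obtained directly from the forward value; this standard identity is noted here for context:

d/dx tanh(x) = 1 - tanh(x)^2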
TanH / Hyperbolic Tangent

• Advantages:
The tanh function is another nonlinear activation function that can be used between the layers of a
neural network. It shares several properties with the sigmoid activation function, and the two curves look
very similar. But while a sigmoid maps input values to the range 0 to 1, tanh maps them to the range -1 to 1
(the exact relationship between the two is noted below).

• When to use:
Tanh is usually used in the hidden layers of a neural network. Because its output
lies between -1 and 1, the mean of a hidden layer's activations comes out at 0 or
very close to it, which centres the data by bringing the mean close to 0. This makes
learning for the next layer much easier.
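
For reference, tanh and the sigmoid are related by a simple rescaling (a standard identity, not from the original slides; sigma denotes the logistic sigmoid 1/(1 + e^-x)):

tanh(x) = 2*sigma(2*x) - 1

which is why the two curves have the same shape while covering the output ranges (-1, 1) and (0, 1) respectively.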
Flow Chart (TanH Approximation):

Up to 4th term:

START
Input x
A = 27*x,  B = x**3,  C = 27,  D = x**2,  E = 9*D
P = A + B,  Q = C + E
Out = P/Q
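
As a quick sanity check of the 4th-term flow (my own arithmetic, not shown on the slide): for x = 1 the numerator is P = 27*1 + 1**3 = 28 and the denominator is Q = 27 + 9*1**2 = 36, so Out = 28/36 ≈ 0.7778, the value that appears in the comparison table later in the deck.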
Up to 6th term:

START
Input x
A = 540*x,  B = x**3,  C = x**5,  D = 540,  F = x**2,  G = x**4
I = 90*B,  E = 270*F,  H = 15*G
P = A + I + C,  Q = D + E + H
Out = P/Q
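
A similar check of the 6th-term flow (again my own arithmetic): for x = 0.6 the numerator is P = 540*0.6 + 90*0.6**3 + 0.6**5 ≈ 343.518 and the denominator is Q = 540 + 270*0.6**2 + 15*0.6**4 = 639.144, giving Out ≈ 0.5375, in line with the 6-term entry for x = 0.6 in the comparison table.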
TanH approximation

Up to 4th term:

module function4(x, out);
   input signed [15:0] x;
   output reg signed [31:0] out;
   reg signed [31:0] a, b, c, d, e, numerator, denominator;

   always @(*)
   begin
      a = 27*x;
      b = x**3;
      c = 27;
      d = x**2;
      e = 9*d;
      numerator   = a + b;      // 27x + x^3
      denominator = c + e;      // 27 + 9x^2
      out = numerator / denominator;
   end
endmodule

Up to 6th term:

module function6(x, out);
   input signed [15:0] x;
   output reg signed [31:0] out;
   reg signed [31:0] a, b, c, d, e, f, g, h, i, numerator, denominator;

   always @(*)
   begin
      a = 540*x;
      b = x**3;
      c = x**5;
      f = x**2;
      e = 270*f;
      d = 540;
      g = x**4;
      h = 15*g;
      i = 90*b;
      numerator   = a + i + c;  // 540x + 90x^3 + x^5
      denominator = d + e + h;  // 540 + 270x^2 + 15x^4
      out = numerator / denominator;
   end
endmodule
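
A minimal simulation sketch, assuming the two modules above; the testbench name tb_tanh and the chosen stimulus values are my own and not part of the original deck:

module tb_tanh;
   reg  signed [15:0] x;
   wire signed [31:0] out4, out6;

   // Instantiate both approximation modules.
   function4 u4 (.x(x), .out(out4));
   function6 u6 (.x(x), .out(out6));

   initial begin
      // Note: out is an integer quotient, so with small integer inputs the
      // fractional part of tanh is truncated; a fixed-point scaling of x
      // would be needed to observe fractional values.
      x = 0; #10 $display("x=%0d out4=%0d out6=%0d", x, out4, out6);
      x = 1; #10 $display("x=%0d out4=%0d out6=%0d", x, out4, out6);
      x = 2; #10 $display("x=%0d out4=%0d out6=%0d", x, out4, out6);
      $finish;
   end
endmodule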
Output

[Simulation output shown in the original slides: up to 4th term and up to 6th term]

RTL Schematic

[Synthesized RTL schematics shown in the original slides: up to 4th term and up to 6th term]
Method of Approximation of Tangent Hyperbolic Function:

Tanh(x) = (e^x - e^-x) / (e^x + e^-x)

is approximated (up to the 4th term) by

F(x) = x(27 + x^2) / (27 + 9x^2)
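
For completeness, the 6th-term module in the code corresponds to the analogous rational approximation (written out here from the Verilog; the slide shows only the 4th-term formula):

tanh(x) ≈ (540x + 90x^3 + x^5) / (540 + 270x^2 + 15x^4)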

x     Tanh(x)    4 Terms    Error %    6 Terms     Error %
1     0.76159    0.7778     0.021      0.764875    0.0430
0.9   0.71629    0.74577    0.039      0.718509    0.00310
0.7   0.60436    0.63957    0.055      0.593787    0.01750
0.6   0.53704    0.56716    0.053      0.537465    0.00079
FUTURE WORK

• We can implement the tanh function on an FPGA kit.
• FPGA stands for Field Programmable Gate Array: an array of programmable logic blocks that implements
digital logic specified in a hardware description language.
• An FPGA implementation helps the designer accurately determine device power consumption, taking into
account all the meaningful elements that contribute to it, and make the design trade-offs needed to build
a reliable system that meets the performance requirements for power consumption and area.

RESULT
Reference:

• Chigozie Enyinna Nwankpa, Winifred Ijomah, Anthony Gachagan, and Stephen Marshall, "Activation Functions: Comparison of Trends in Practice and Research for Deep Learning".

• https://towardsdatascience.com/complete-guide-of-activation-functions-34076e95d044

• "A Gentle Introduction to the Rectified Linear Unit (ReLU)", https://machinelearningmastery.com/rectified-linear-activation-function-for-deep-learning-neural-networks/
THANK YOU
