Deep Learning State of the Art

Amulya Viswambharan ID 202090007
Kehkshan Fatima ID
Agenda
• Introduction to Deep Learning
  - What is Deep Learning?
  - Why is it useful?
• Main Components
• Basic Architectures
Introduction

[Figure omitted. Source: Google]
Machine Learning vs. Deep Learning

[Figure omitted. Source: Google]
Why Deep Learning?
Applications of Deep Learning
Basic Building Block of Deep Learning: the Perceptron
Sigmoid
• Smooth gradient, preventing "jumps" in output values
• Output values bound between 0 and 1, normalizing the output of each neuron
• Drawbacks: vanishing gradient; outputs not zero centered; computationally expensive

Hyperbolic Tangent (tanh)
• Zero centered
• Otherwise like the Sigmoid function
• Drawback: vanishing gradient, as with Sigmoid

ReLU (Rectified Linear Unit)
• Computationally efficient: allows the network to converge very quickly
• Non-linear
• Drawback: the "dying ReLU" problem
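
To make the perceptron and the activation functions above concrete, here is a minimal sketch in Python with NumPy. The input, weights, and bias are made-up example values, not taken from the slides.

import numpy as np

def sigmoid(z):
    # Squashes z into (0, 1); smooth, but saturates for large |z|
    return 1.0 / (1.0 + np.exp(-z))

def tanh(z):
    # Zero-centered relative of the sigmoid; output in (-1, 1)
    return np.tanh(z)

def relu(z):
    # max(0, z): cheap to compute, but units can "die" if z stays negative
    return np.maximum(0.0, z)

def perceptron(x, w, b, activation=sigmoid):
    # Weighted sum of the inputs plus a bias, passed through an activation
    return activation(np.dot(w, x) + b)

x = np.array([0.5, -1.2, 3.0])   # example input
w = np.array([0.4, 0.3, -0.2])   # example weights
b = 0.1                          # example bias
print(perceptron(x, w, b, activation=relu))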
Single Layer and Deep Neural Networks
Neural Network Predictions
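
To illustrate how a deep network turns an input into a prediction by stacking such units layer on layer, here is a minimal NumPy sketch; the layer sizes and random weights are placeholders, not values from the slides.

import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def forward(x, layers):
    # Each layer is a (W, b) pair: an affine map followed by ReLU
    for W, b in layers:
        x = relu(W @ x + b)
    return x

rng = np.random.default_rng(0)
sizes = [4, 8, 8, 2]   # input -> two hidden layers -> output
layers = [(rng.normal(size=(m, n)), np.zeros(m))
          for n, m in zip(sizes[:-1], sizes[1:])]
print(forward(rng.normal(size=4), layers))   # the network's prediction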
There are several types of architectures for neural networks:
• Convolutional Neural Network (CNN)
• Recurrent Neural Networks (RNNs)
• Long Short-Term Memory Networks (LSTMs)
• Stacked Auto-Encoders
• Deep Boltzmann Machine (DBM)
• Deep Belief Networks (DBN)
Recurrent Neural Network (RNN)
• Designed to handle sequential data.
• The output from an earlier step is fed as input to the current step (a minimal sketch follows the applications below).
• Suffers from the vanishing gradient problem.

Applications
• Sentiment classification
• Image captioning
• Speech recognition
• Natural language processing
• Machine translation
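
A minimal NumPy sketch of the recurrence described above, in which the previous step's hidden state is fed back in at the current step. Dimensions, weights, and the input sequence are placeholders, not from the slides.

import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b):
    # The new hidden state mixes the current input with the previous state;
    # the repeated multiplication by W_hh across many steps is what makes
    # gradients vanish over long sequences.
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b)

rng = np.random.default_rng(0)
hidden, inputs = 5, 3
W_xh = rng.normal(scale=0.1, size=(hidden, inputs))
W_hh = rng.normal(scale=0.1, size=(hidden, hidden))
b = np.zeros(hidden)

h = np.zeros(hidden)                       # initial hidden state
for x_t in rng.normal(size=(7, inputs)):   # a sequence of 7 input vectors
    h = rnn_step(x_t, h, W_xh, W_hh, b)
print(h)                                   # final state summarizes the sequence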
Long Short-Term Memory Networks (LSTMs)
• Capable of learning long-term dependencies.
• Gates are a way to optionally let information through (a minimal sketch follows the applications below).

Applications
• Captioning of images and videos
• Language translation and modeling
• Sentiment analysis
• Stock market predictions
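
A minimal NumPy sketch of one LSTM step, showing the gates the slide refers to. The layout (forget, input, output, and candidate stacked into one weight matrix) and all sizes are illustrative choices, not the slides' notation.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    # Forget, input, and output gates decide what information flows through
    z = W @ np.concatenate([h_prev, x_t]) + b
    f, i, o, g = np.split(z, 4)
    f, i, o = sigmoid(f), sigmoid(i), sigmoid(o)   # gate values in (0, 1)
    c = f * c_prev + i * np.tanh(g)   # keep part of the old cell state, add new
    h = o * np.tanh(c)                # expose part of the cell state
    return h, c

rng = np.random.default_rng(0)
hidden, inputs = 4, 3
W = rng.normal(scale=0.1, size=(4 * hidden, hidden + inputs))
b = np.zeros(4 * hidden)
h, c = np.zeros(hidden), np.zeros(hidden)
for x_t in rng.normal(size=(5, inputs)):   # a placeholder 5-step sequence
    h, c = lstm_step(x_t, h, c, W, b)
print(h)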
Stacked Auto-Encoders

• Feedforward neural networks trained to reproduce their input at the output (a minimal sketch follows the applications below)

Applications
• Data denoising
• Dimensionality reduction
• Generative modeling with Variational Autoencoders (VAEs)
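
A minimal sketch of an auto-encoder in Python with PyTorch, assuming flattened 28x28 inputs (784 features) and a 32-unit bottleneck; all sizes and the random batch are placeholders. The key point is that the training target is the input itself.

import torch
import torch.nn as nn

# Encoder compresses 784 features to 32; decoder reconstructs them
model = nn.Sequential(
    nn.Linear(784, 128), nn.ReLU(),
    nn.Linear(128, 32), nn.ReLU(),    # bottleneck: the learned representation
    nn.Linear(32, 128), nn.ReLU(),
    nn.Linear(128, 784), nn.Sigmoid(),
)
x = torch.rand(16, 784)                      # placeholder batch of "images"
loss = nn.functional.mse_loss(model(x), x)   # reconstruct the input itself
loss.backward()                              # gradients for an optimizer step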
Deep Belief Network
• Uses probabilities and unsupervised learning to produce outputs.
• Greedy learning algorithms start from the bottom layer and move up, fine-tuning the generative weights (a simplified sketch follows the applications below).

Applications
• Image recognition
• Video recognition
• Motion-capture data
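
A heavily simplified NumPy sketch of the greedy idea: train one restricted Boltzmann machine layer with single-step contrastive divergence (biases omitted; sizes and data are placeholders). A DBN stacks such layers, training each on the hidden activities of the one below before fine-tuning.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_step(v0, W, lr=0.1):
    # One contrastive-divergence step for a single RBM layer:
    # infer hidden units, reconstruct the visibles, and nudge the weights
    # toward the data and away from the reconstruction.
    h0 = sigmoid(W @ v0)                          # hidden probabilities
    h0_sample = (rng.random(h0.shape) < h0) * 1.0
    v1 = sigmoid(W.T @ h0_sample)                 # reconstruction
    h1 = sigmoid(W @ v1)
    return W + lr * (np.outer(h0, v0) - np.outer(h1, v1))

W = rng.normal(scale=0.01, size=(8, 20))   # one layer: 20 visible, 8 hidden
for v in rng.random((100, 20)):            # placeholder data vectors
    W = cd1_step(v, W)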
