20IS712 Deep Learning
300 (3)
Dr. Lekshmi R. R., Asst. Prof.
Department of Electrical & Electronics Engineering
Amrita School of Engineering
VGG-16 Neural Network

• VGG – by Simonyan and Zisserman
• Includes 16 weight layers (13 convolutional + 3 fully connected)
• More complex than LeNet
• Preferred because of its uniform architecture
• Involves a huge number of parameters
  – About 138 million parameters (see the sketch below)
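As a quick sanity check of the 138-million figure, the parameters of a stock VGG-16 can be counted directly; this is a minimal sketch assuming torch and torchvision are installed.

```python
# Minimal sketch: count VGG-16 parameters with torchvision (random weights are enough).
from torchvision.models import vgg16

model = vgg16()
total = sum(p.numel() for p in model.parameters())
print(f"{total:,}")  # about 138 million (138,357,544 for the 1000-class ImageNet head)
```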
Architecture

Input (224x224x3)
  → Conv 1-1 (3x3, 64) → Conv 1-2 (3x3, 64) → Max pool (/2)
  → Conv 2-1 (3x3, 128) → Conv 2-2 (3x3, 128) → Max pool (/2)
  → Conv 3-1 (3x3, 256) → Conv 3-2 (3x3, 256) → Conv 3-3 (3x3, 256) → Max pool (/2)
  → Conv 4-1 (3x3, 512) → Conv 4-2 (3x3, 512) → Conv 4-3 (3x3, 512) → Max pool (/2)
  → Conv 5-1 (3x3, 512) → Conv 5-2 (3x3, 512) → Conv 5-3 (3x3, 512) → Max pool (/2)
  → Dense layer (fc, 4096) → Dense layer (fc, 4096) → Dense layer (fc, 1000)
  → Output
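One compact way to express the stack above is a configuration-driven builder in PyTorch; `cfg` and `make_features` below are illustrative names, and the snippet is a sketch, not the reference implementation.

```python
import torch.nn as nn

# Conv blocks from the diagram: numbers are output channels, 'M' marks a 2x2 max pool.
cfg = [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M',
       512, 512, 512, 'M', 512, 512, 512, 'M']

def make_features(cfg, in_channels=3):
    layers = []
    for v in cfg:
        if v == 'M':
            layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
        else:
            layers.append(nn.Conv2d(in_channels, v, kernel_size=3, padding=1))
            layers.append(nn.ReLU(inplace=True))
            in_channels = v
    return nn.Sequential(*layers)

features = make_features(cfg)            # 13 convolutional layers + 5 max pools
classifier = nn.Sequential(              # 3 fully connected layers
    nn.Flatten(),
    nn.Linear(512 * 7 * 7, 4096), nn.ReLU(inplace=True),
    nn.Linear(4096, 4096), nn.ReLU(inplace=True),
    nn.Linear(4096, 1000),               # softmax is applied at the output / in the loss
)
```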
Layer 1, 2

Convolution layers (ReLU)
• Input: 224x224x3
• Filters: 64
  – Size: 3x3
• Needed output: 224x224
• So, padding: 1 (padded input: 226x226)
• Stride: 1
• Output: 224x224x64
Max pooling
• Input: 224x224x64
• Size: 2x2
• Stride: 2
• Output: 112x112x64
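The output sizes above follow the usual formula out = (in + 2·padding − kernel)/stride + 1, so layer 1 gives (224 + 2·1 − 3)/1 + 1 = 224 and the pool gives 224/2 = 112. A small sketch verifying this with a dummy tensor (assuming PyTorch is installed):

```python
import torch
import torch.nn as nn

x = torch.zeros(1, 3, 224, 224)                    # dummy input image
conv = nn.Conv2d(3, 64, kernel_size=3, padding=1)  # layer 1: (224 + 2*1 - 3)/1 + 1 = 224
pool = nn.MaxPool2d(kernel_size=2, stride=2)       # 224 / 2 = 112

y = conv(x)
print(y.shape)        # torch.Size([1, 64, 224, 224])
print(pool(y).shape)  # torch.Size([1, 64, 112, 112])
```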
Layer 3, 4

Convolution layers (ReLU)
• Input: 112x112x64
• Filters: 128
  – Size: 3x3
• Padding: 1
• Stride: 1
• Output: 112x112x128

Max pooling
• Input: 112x112x128
• Size: 2x2
• Stride: 2
• Output: 56x56x128
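Each 3x3 convolution has kernel² × input channels × filters weights plus one bias per filter; a worked check for layers 3 and 4 (values taken from the slide, arithmetic only):

```python
# Parameter counts for layers 3 and 4 (3x3 kernels, one bias per filter).
layer3 = 3 * 3 * 64 * 128 + 128    # 64 input channels -> 128 filters
layer4 = 3 * 3 * 128 * 128 + 128   # 128 input channels -> 128 filters
print(layer3, layer4)              # 73856 147584
```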
Layer 5, 6, 7

Convolution layers (ReLU)
• Input: 56x56x128
• Filters: 256
  – Size: 3x3
• Padding: 1
• Stride: 1
• Output: 56x56x256

Max pooling
• Input: 56x56x256
• Size: 2x2
• Stride: 2
• Output: 28x28x256
Layer 8, 9, 10

Convolution layers (ReLU)
• Input: 28x28x256
• Filters: 512
  – Size: 3x3
• Padding: 1
• Stride: 1
• Output: 28x28x512

Max pooling
• Input: 28x28x512
• Size: 2x2
• Stride: 2
• Output: 14x14x512
Layer 11, 12, 13

Convolution layers (ReLU)
• Input: 14x14x512
• Filters: 512
  – Size: 3x3
• Padding: 1
• Stride: 1
• Output: 14x14x512

Max pooling
• Input: 14x14x512
• Size: 2x2
• Stride: 2
• Output: 7x7x512
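Across the five blocks the spatial resolution halves at every pool (224 → 112 → 56 → 28 → 14 → 7) while the channel count grows (64 → 128 → 256 → 512). One way to trace this is to push a dummy tensor through torchvision's VGG-16 feature extractor; a sketch assuming torch and torchvision are available:

```python
import torch
from torchvision.models import vgg16

x = torch.zeros(1, 3, 224, 224)
for layer in vgg16().features:            # the 13 conv layers and 5 max pools
    x = layer(x)
    if isinstance(layer, torch.nn.MaxPool2d):
        print(tuple(x.shape))             # (1, 64, 112, 112) ... (1, 512, 7, 7)
```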
Layer 14, 15, 16

Fully connected, ReLU (C14)
• Input: 7x7x512 (flattened)
• Neurons: 4096

Fully connected, ReLU (C15)
• Input: 4096
• Neurons: 4096

Fully connected, Softmax (C16)
• Input: 4096
• Neurons: 1000
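The first fully connected layer dominates the parameter budget, since its input is the flattened 7x7x512 = 25,088-dimensional feature vector. A quick check using only the values from the slides:

```python
# Parameter counts of the three fully connected layers (weights + biases).
fc14 = 7 * 7 * 512 * 4096 + 4096   # ~102.8 million
fc15 = 4096 * 4096 + 4096          # ~16.8 million
fc16 = 4096 * 1000 + 1000          # ~4.1 million
print(fc14, fc15, fc16)            # 102764544 16781312 4097000
```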
16 layers require learning:
• 13 convolutional layers (2 + 2 + 3 + 3 + 3)
• 3 fully connected layers (1 + 1 + 1)
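Summing the 13 convolutional layers and the 3 fully connected layers reproduces the ~138 million total quoted at the start; a short tally using the channel sizes from the slides (arithmetic sketch only):

```python
# 13 conv layers as (input channels, output channels) pairs, in order.
convs = [(3, 64), (64, 64), (64, 128), (128, 128),
         (128, 256), (256, 256), (256, 256),
         (256, 512), (512, 512), (512, 512),
         (512, 512), (512, 512), (512, 512)]
conv_params = sum(3 * 3 * cin * cout + cout for cin, cout in convs)
fc_params = (25088 * 4096 + 4096) + (4096 * 4096 + 4096) + (4096 * 1000 + 1000)
print(conv_params + fc_params)   # 138357544
```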
Thank you
