Neural Net 2 - Final Version


Week 1 to 3

Neural Network and Deep Learning

BY MIKOKIT (Eang Sovann) - SLIDE 1
Content
- Logistic Regression
- Loss and Cost Functions
- Forward & Backward Propagation
- Ways of doing optimization
- Activation Functions

Logistic Regression



Loss Function & Cost Function
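The slide leaves the formulas implicit; for logistic regression the standard choices are the binary cross-entropy loss L(ŷ, y) = −(y·log ŷ + (1 − y)·log(1 − ŷ)) per example, and the cost J, its average over the m training examples. A minimal NumPy sketch (the predictions and labels are illustrative values):

```python
import numpy as np

def loss(y_hat, y):
    # Per-example binary cross-entropy loss L(ŷ, y)
    return -(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

def cost(y_hat, y):
    # Cost J = average loss over the m training examples
    return np.mean(loss(y_hat, y))

y_hat = np.array([0.9, 0.2, 0.8])  # predicted probabilities (example values)
y = np.array([1.0, 0.0, 1.0])      # true labels
J = cost(y_hat, y)
```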

Forward Propagation
x1 ──w1──┐
         ├─→ z = w1·x1 + w2·x2 + b ─→ a = σ(z) = ŷ
x2 ──w2──┘
Backward Propagation
x1 ──w1──┐
         ├─→ z = w1·x1 + w2·x2 + b ─→ a = σ(z) = ŷ
x2 ──w2──┘
(gradients flow in the reverse direction, from ŷ back to w1, w2, and b)

Ways of doing optimization
- Vectorization: needs the full dimension size
- Broadcasting: duplicates to the full dimension size


Vectorization vs. non-vectorization
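The difference is easy to demonstrate: the same dot product computed with an explicit Python loop versus a single vectorized NumPy call. The loop runs element by element in the interpreter; the vectorized call does the same work in optimized native code:

```python
import numpy as np

a = np.random.rand(1000)
b = np.random.rand(1000)

# Non-vectorized: explicit Python loop over every element
s_loop = 0.0
for i in range(len(a)):
    s_loop += a[i] * b[i]

# Vectorized: one NumPy call, same result, far faster
s_vec = np.dot(a, b)
```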



Principle of the operations

Vectorization:  (m, n) {+, −, ×, /} (m, n) ↝ (m, n)
Broadcasting:   (m, n) {+, −, ×, /} (1, n) ↝ (m, n)



Vectorization:
[1 2 3]   [0 3 5]   [1 5  8]
[3 1 4] + [0 3 5] = [3 4  9]
[0 5 7]   [0 3 5]   [0 8 12]

Broadcasting (the row [0 3 5] is duplicated to the full dimension size):
[1 2 3]             [1 5  8]
[3 1 4] + [0 3 5] = [3 4  9]
[0 5 7]             [0 8 12]
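Both examples above can be reproduced in NumPy. In the broadcasting case the (1, 3) row is never materialized as a full (3, 3) matrix; NumPy duplicates it virtually:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [3, 1, 4],
              [0, 5, 7]])

# Vectorized add: both operands already have the full (3, 3) shape
B = np.array([[0, 3, 5],
              [0, 3, 5],
              [0, 3, 5]])
full = A + B

# Broadcasting: the (3,) row is virtually expanded to (3, 3)
row = np.array([0, 3, 5])
bcast = A + row
```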



Activation Functions



z1 = w1·x + b1,   a1 = g1(z1)        (hidden layer: ReLU or Leaky ReLU)
z2 = w2·a1 + b2,  a2 = g2(z2) = ŷ    (output layer: Sigmoid or Tanh)
ℒ(a, y) = ℒ(ŷ, y)


Thanks for your attention and participation!

The end

