Neural Networks 2015
Tuomas Sandholm
Carnegie Mellon University
Computer Science Department
Parallelism
Graceful degradation
Inductive learning
ANN (software/hardware, synchronous/asynchronous)
Notation
$in_i = \sum_j W_{j,i}\, a_j$
($in_i$ is the weighted input to unit $i$; $W_{j,i}$ is the weight on the link from unit $j$ to unit $i$; $a_j$ is the activation of unit $j$.)
Activation Functions
$a_i = \text{step}_t\Big(\sum_{j=1}^{n} W_{j,i}\, a_j\Big) = \text{step}_0\Big(\sum_{j=0}^{n} W_{j,i}\, a_j\Big)$
where $W_{0,i} = t$ and $a_0 = -1$ are fixed
[Figure: threshold units implementing Boolean gates, where g is a step function]
AND: W = 1, W = 1, t = 1.5
OR: W = 1, W = 1, t = 0.5
NOT: W = -1, t = -0.5
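The gates above are easy to verify in code. A minimal Python sketch (illustrative names, not from the lecture) of a step unit with explicit threshold t:

```python
# Minimal sketch (illustrative, not from the lecture) of a threshold
# unit and the three gates from the figure above.

def step_unit(weights, inputs, t):
    """Fire (output 1) iff the weighted input sum reaches threshold t."""
    total = sum(w * a for w, a in zip(weights, inputs))
    return 1 if total >= t else 0

AND = lambda a, b: step_unit([1, 1], [a, b], t=1.5)
OR  = lambda a, b: step_unit([1, 1], [a, b], t=0.5)
NOT = lambda a:    step_unit([-1],  [a],     t=-0.5)

assert all(AND(a, b) == (a & b) for a in (0, 1) for b in (0, 1))
assert all(OR(a, b)  == (a | b) for a in (0, 1) for b in (0, 1))
assert NOT(0) == 1 and NOT(1) == 0
```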
Topologies
Feed-forward vs. recurrent
Hopfield network
Bidirectional symmetric connections ($W_{i,j} = W_{j,i}$)
g is the sign function
All units are both input and output units
Activations are ±1
Associative memory
After training on a set of examples, a new stimulus will
cause the network to settle into an activation pattern
corresponding to the training example that most closely
resembles the new stimulus.
E.g. parts of photograph
Thrm. Can reliably store 0.138 × #units training examples
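A runnable sketch of this associative-memory behavior, assuming the standard Hebbian outer-product rule for training (an assumption; the slide lists the properties only):

```python
import numpy as np

# Hopfield sketch: Hebbian training on +/-1 patterns, then repeated
# sign-function updates until the activations settle.

def train(patterns):
    """Symmetric weights W[i,j] = W[j,i] via the Hebbian outer-product rule."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)                  # no self-connections
    return W / n

def recall(W, stimulus, steps=10):
    """Settle into the stored pattern most closely resembling the stimulus."""
    a = stimulus.copy()
    for _ in range(steps):
        a = np.where(W @ a >= 0, 1, -1)     # g is the sign function
    return a

patterns = np.array([[1, -1, 1, -1, 1, -1], [1, 1, 1, -1, -1, -1]])
W = train(patterns)
noisy = np.array([1, -1, 1, -1, 1, 1])      # first pattern, last unit flipped
print(recall(W, noisy))                     # recovers the first pattern
```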
Boltzmann machine
Symmetric weights
Each output is 0 or 1
Includes units that are neither input units nor output units
Stochastic g, i.e. some probability (as a fn of $in_i$) that g = 1
State transitions that resemble simulated annealing.
Approximates the configuration that best meets the training set.
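A sketch of the stochastic activation, assuming the standard logistic form with a temperature parameter (the exact form is not given on the slide):

```python
import math, random

# Stochastic unit: output 1 with a probability that grows with the
# weighted input in_i; the temperature T is lowered over time, as in
# simulated annealing. (Assumption: the usual Boltzmann logistic form.)

def stochastic_g(in_i, T):
    """P(output = 1) = 1 / (1 + exp(-in_i / T))."""
    return 1 if random.random() < 1.0 / (1.0 + math.exp(-in_i / T)) else 0

# At high T the unit is nearly random; as T -> 0 it approaches a step unit.
for T in (10.0, 1.0, 0.1):
    samples = [stochastic_g(0.5, T) for _ in range(1000)]
    print(T, sum(samples) / 1000)
```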
ANN topology
Representation capability vs. overfitting risk.
A feed-forward net with one hidden layer can approximate any
continuous fn of the inputs.
With 2 hidden layers it can approximate any fn at all.
The #units needed in each layer may grow exponentially
Learning the topology
Hill-climbing vs. genetic algorithms; removing vs. adding (nodes/connections).
Compare candidates via cross-validation.
Perceptrons
$O = \text{step}_0\Big(\sum_j W_j\, I_j\Big)$
Representation capability of a perceptron
Each input can affect the output in only one direction,
independent of the other inputs.
E.g. unable to represent WillWait in the restaurant example.
Linear separability in 3D
[Figure: Minority Function]
epoch = one pass through all the training examples
$Err = T - O$
$W_j \leftarrow W_j + \alpha \times I_j \times Err$
Variant of the perceptron learning rule.
Thrm. Will learn the linearly separable target fn (if $\alpha$ is not too high).
Intuition: gradient descent in a search space with no local optima.
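A minimal sketch of the rule above on the (linearly separable) majority function; the alpha and epoch-count values are illustrative choices:

```python
# Perceptron learning: Err = T - O, then W_j <- W_j + alpha * I_j * Err,
# swept over the training set for several epochs. The threshold uses the
# slides' convention: fixed input I[0] = -1 with weight W[0] = t.

def output(W, I):
    return 1 if sum(w * x for w, x in zip(W, I)) >= 0 else 0

def train(examples, n_inputs, alpha=0.1, epochs=50):
    W = [0.0] * (n_inputs + 1)
    for _ in range(epochs):                 # one epoch = one pass
        for I, T in examples:
            Err = T - output(W, I)
            W = [w + alpha * x * Err for w, x in zip(W, I)]
    return W

# Majority of 3 inputs, with the fixed -1 input prepended.
examples = [([-1, a, b, c], int(a + b + c >= 2))
            for a in (0, 1) for b in (0, 1) for c in (0, 1)]
W = train(examples, n_inputs=3)
assert all(output(W, I) == T for I, T in examples)
```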
Local encoding:
None=0.0, Some=0.5, Full=1.0
Distributed encoding:
None → (1, 0, 0)
Some → (0, 1, 0)
Full → (0, 0, 1)
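In code, the two encodings of this three-valued attribute look like this (dictionary names are illustrative):

```python
# Local encoding: a single input unit carries all three values.
local = {"None": 0.0, "Some": 0.5, "Full": 1.0}

# Distributed encoding: one input unit per value (one-hot).
distributed = {"None": (1, 0, 0), "Some": (0, 1, 0), "Full": (0, 0, 1)}
```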
[Figure: Majority Function; WillWait]
Back-propagation (BP)
Gradient descent on the squared error of the training examples:
$E(\mathbf{w}) = \frac{1}{2}\sum_i (T_i - O_i)^2 = \frac{1}{2}\sum_i \Big(T_i - g\big(\sum_j W_{j,i}\, a_j\big)\Big)^2$
For a network with one hidden layer:
$E(\mathbf{w}) = \frac{1}{2}\sum_i \Big(T_i - g\big(\sum_j W_{j,i}\, g(\sum_k W_{k,j}\, I_k)\big)\Big)^2$
For output-layer weights:
$\frac{\partial E}{\partial W_{j,i}} = -a_j\,(T_i - O_i)\; g'\big(\sum_{j'} W_{j',i}\, a_{j'}\big) = -a_j\,(T_i - O_i)\; g'(in_i)$
For hidden units we get
$\frac{\partial E}{\partial W_{k,j}} = -I_k\; g'(in_j)\, \sum_i W_{j,i}\, Err_i\; g'(in_i) = -I_k\; g'(in_j)\; Err_j$
where $Err_j = \sum_i W_{j,i}\, Err_i\, g'(in_i)$.
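Putting the gradients to work, a minimal back-propagation sketch for one hidden layer, assuming g is the sigmoid (so $g'(in) = g(in)(1 - g(in))$) and using the fixed $a_0 = -1$ threshold convention; all names and hyperparameters are illustrative, not the lecture's code:

```python
import numpy as np

def g(x):                                   # sigmoid activation
    return 1.0 / (1.0 + np.exp(-x))

def forward(Wkj, Wji, I):
    a_j = np.append(g(Wkj @ np.append(I, -1.0)), -1.0)
    return g(Wji @ a_j)

def bp_step(Wkj, Wji, I, T, alpha=0.5):
    I = np.append(I, -1.0)                  # fixed input a_0 = -1 (threshold)
    in_j = Wkj @ I
    a_j = np.append(g(in_j), -1.0)          # hidden layer plus its -1 unit
    in_i = Wji @ a_j
    O = g(in_i)
    delta_i = (T - O) * O * (1 - O)         # Err_i * g'(in_i)
    err_j = (Wji.T @ delta_i)[:-1]          # back-propagated error, drop -1 unit
    delta_j = g(in_j) * (1 - g(in_j)) * err_j
    Wji += alpha * np.outer(delta_i, a_j)   # descend: W += -alpha * dE/dW
    Wkj += alpha * np.outer(delta_j, I)
    return Wkj, Wji

# Usage: learn XOR, which no single perceptron can represent.
rng = np.random.default_rng(0)
Wkj, Wji = rng.normal(size=(2, 3)), rng.normal(size=(1, 3))
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
for _ in range(5000):
    for I, T in data:
        Wkj, Wji = bp_step(Wkj, Wji, np.array(I, float), np.array([T], float))
print([round(float(forward(Wkj, Wji, np.array(I, float))[0]), 2) for I, _ in data])
# typically close to [0, 1, 1, 0]; as noted below, BP has no convergence guarantee
```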
WillWait Problem
Expressiveness of BP
$2^n/n$ hidden units needed to represent arbitrary Boolean fns of $n$ inputs
(such a network has $O(2^n)$ weights, and we need at least
$2^n$ bits to represent a Boolean fn).
Thrm. Any continuous fn $f: [0,1]^n \to \mathbb{R}^m$
can be implemented in a 3-layer network with $2n+1$ hidden
units (the activation fns take a special form). [Kolmogorov]
Efficiency of BP
Using the trained network is fast.
Training is slow:
An epoch takes $O(m\,|\mathbf{w}|)$ time ($m$ = #examples, $|\mathbf{w}|$ = #weights).
May need exponentially many epochs in #inputs.
More on BP
Generalization:
Good on fns where output varies smoothly with input
Sensitivity to noise:
Very tolerant of noise
Does not give a degree of certainty in the output
Transparency:
Black box
Prior knowledge:
Hard to prime
No convergence guarantees