Neural Networks: Eric Postma, IKAT, Universiteit Maastricht
Overview
Introduction: The biology of neural networks
Perceptron
Multilayer perceptron
Kohonen's self-organising feature map
Examples of applications
A typical AI agent
Supervised learning
curve fitting, surface fitting, ...
Unsupervised learning
clustering, visualisation, ...
An input-output function
(Artificial) neural networks
The digital computer versus the neural computer
Graceful Degradation
[Figure: performance as a function of damage]
Adaptivity
In biological computers, processing implies learning; in digital computers, it does not.
Context-sensitivity: patterns
emergent properties
Neural activity
[Figure: neuron output as a function of input]
Connectivity
An example: the visual system is a feedforward hierarchy of neural modules. Every module is (to a certain extent) responsible for a specific function.
Neurons: activity, nonlinear input-output function
Connections: weight
Learning: supervised or unsupervised
Artificial Neurons
[Figure: inputs i1, i2, i3 are weighted and summed into the excitation e; the activation is a = f(e)]
Input-output function
nonlinear (sigmoid) function:
$f(x) = \frac{1}{1 + e^{-x/a}}, \quad a > 0$
[Plot: activation f(e) as a function of excitation e, for steepness parameter a]
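A minimal sketch of this activation function in Python (the function name and the default steepness are illustrative choices, not from the slides):

```python
import math

def sigmoid(x, a=1.0):
    """Nonlinear input-output function f(x) = 1 / (1 + exp(-x/a)).

    The steepness parameter a > 0 controls how sharply the
    activation rises around x = 0.
    """
    return 1.0 / (1.0 + math.exp(-x / a))

# The activation saturates at 0 for large negative excitation
# and at 1 for large positive excitation.
print(sigmoid(-5.0), sigmoid(0.0), sigmoid(5.0))  # ~0.007, 0.5, ~0.993
```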
$w_{AB}$: the weight of the connection from neuron A to neuron B
The Perceptron
The global error E is a function of the differences between the desired and actual outputs.
Gradient Descent
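The formulas did not survive extraction; the standard formulation, consistent with the description above (the exact notation on the original slides may differ), is:

```latex
% Global error: sum of squared differences between desired (d_k)
% and actual (a_k) outputs, over all output neurons k
E = \frac{1}{2} \sum_k (d_k - a_k)^2

% Gradient descent: adapt each weight w_{ij} against the error
% gradient, with learning rate \eta
\Delta w_{ij} = -\eta \, \frac{\partial E}{\partial w_{ij}}
```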
Rosenblatt (1959)
Minsky & Papert (1969)
Rumelhart & McClelland (1986)
[Figure: network layers: input, hidden, output]
supervised learning
each training pattern: input + desired output
in each epoch: present all patterns
at each presentation: adapt weights
after many epochs: convergence to a local minimum
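In sketch form, this epoch/presentation loop for a single-layer perceptron with the delta rule (the toy AND dataset, learning rate, and all names are illustrative, not from the slides):

```python
import random

# Toy training set: each pattern is (input vector, desired output).
# Here: the logical AND function (an illustrative choice).
patterns = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [random.uniform(-0.5, 0.5) for _ in range(2)]  # weights
b = random.uniform(-0.5, 0.5)                      # bias
eta = 0.1                                          # learning rate

for epoch in range(100):                # many epochs ...
    for x, d in patterns:               # ... present all patterns
        a = 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0  # actual output
        # At each presentation: adapt weights towards the desired output.
        for i in range(2):
            w[i] += eta * (d - a) * x[i]
        b += eta * (d - a)

print([1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0 for x, _ in patterns])
# After convergence: [0, 0, 0, 1]
```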
[Figure: network with frequency inputs and its hidden representation]
Preventing Overfitting
GENERALISATION = performance on test set
Early stopping
Training, test, and validation set
k-fold cross-validation
leave-one-out procedure
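One way to read "early stopping" in code; the model interface (train_one_epoch, error) and all names are hypothetical placeholders, not from the slides:

```python
def train_with_early_stopping(model, train_set, val_set,
                              max_epochs=1000, patience=10):
    """Stop training when validation error stops improving."""
    best_val_error = float("inf")
    epochs_without_improvement = 0
    for epoch in range(max_epochs):
        model.train_one_epoch(train_set)   # fit on the training set
        val_error = model.error(val_set)   # monitor the validation set
        if val_error < best_val_error:
            best_val_error = val_error
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
        if epochs_without_improvement >= patience:
            break                          # generalisation has peaked
    return model
```

The test set stays untouched during training; it only provides the final estimate of generalisation.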
Hidden Representations
Other Applications
Practical: OCR, financial time series, fraud detection, process control, marketing, speech recognition
Theoretical: cognitive modelling, biological modelling
Some mathematics
Perceptron
MLP
Sigmoid function
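The formulas themselves did not survive extraction; in standard notation (consistent with the sigmoid above, but an assumption rather than a copy of the original slides):

```latex
% Perceptron: activation is the nonlinear function of the weighted input sum
a = f\Big(\sum_i w_i \, x_i\Big)

% MLP: output neuron k composes two such layers (j indexes hidden neurons)
a_k = f\Big(\sum_j w_{jk} \, f\Big(\sum_i w_{ij} \, x_i\Big)\Big)

% Sigmoid function
f(x) = \frac{1}{1 + e^{-x/a}}
```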
Neurons do not take the weighted sum of their inputs (as in the perceptron), but measure the similarity of their weight vector to the input vector.
The activation of the neuron is a measure of similarity: the more similar the weight vector is to the input, the higher the activation. Neurons represent prototypes.
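A sketch of the contrast (using Euclidean distance as the similarity measure is an assumption; the slides do not name one):

```python
import math

def weighted_sum_activation(w, x):
    """Perceptron-style: activation from the weighted input sum."""
    return sum(wi * xi for wi, xi in zip(w, x))

def similarity_activation(w, x):
    """Prototype-style: the closer the weight vector is to the
    input vector, the higher the activation."""
    distance = math.sqrt(sum((wi - xi) ** 2 for wi, xi in zip(w, x)))
    return -distance  # larger (less negative) when w resembles x
```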
Coarse Coding
[Figure: competitive network: n-dimensional input layer connected to the output neurons]
Competitive learning
Determine the winner (the neuron whose weight vector has the smallest distance to the input vector)
Move the weight vector w of the winning neuron towards the input i
[Figure: weight vector w and input vector i, before learning and after learning (w has moved towards i)]
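A minimal competitive-learning step, following the two bullets above (the learning rate and helper names are illustrative):

```python
def competitive_step(weights, x, eta=0.1):
    """One presentation: find the winner and move it towards input x.

    `weights` is a list of weight vectors, one per competitive neuron.
    """
    # Winner: neuron whose weight vector is closest to the input vector.
    def dist2(w):  # squared Euclidean distance
        return sum((wi - xi) ** 2 for wi, xi in zip(w, x))
    winner = min(range(len(weights)), key=lambda j: dist2(weights[j]))
    # Move the winner's weight vector towards the input.
    weights[winner] = [wi + eta * (xi - wi)
                       for wi, xi in zip(weights[winner], x)]
    return winner
```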
Kohonen's idea
Impose a topological order on the competitive neurons (e.g., a rectangular map)
Let neighbours of the winner share the prize (the postcode-lottery principle)
After learning, neurons with similar weights tend to cluster on the map
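Kohonen's refinement in sketch form: the winner's map neighbours share the update. The Gaussian neighbourhood and the rectangular grid are common choices assumed here, not taken from the slides:

```python
import math, random

def train_sofm(data, rows=10, cols=10, dim=2, epochs=100,
               eta=0.5, sigma=2.0):
    """Train a rectangular self-organising feature map.

    Each of the rows*cols neurons holds a dim-dimensional weight vector;
    neighbours of the winner share the update (the 'prize').
    """
    weights = {(r, c): [random.random() for _ in range(dim)]
               for r in range(rows) for c in range(cols)}
    for _ in range(epochs):
        for x in data:
            # Winner: smallest distance between weight and input vector.
            winner = min(weights, key=lambda p: sum(
                (wi - xi) ** 2 for wi, xi in zip(weights[p], x)))
            for pos, w in weights.items():
                # Neighbourhood is measured on the map grid,
                # not in input space: this imposes topological order.
                grid_d2 = (pos[0] - winner[0]) ** 2 + (pos[1] - winner[1]) ** 2
                h = math.exp(-grid_d2 / (2 * sigma ** 2))
                for i in range(dim):
                    w[i] += eta * h * (x[i] - w[i])
    return weights
```

In practice eta and sigma are decreased over the course of training, so the map first orders globally and then fine-tunes locally.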
Hexagonal
[Figure: the winner (red) and its nearest neighbours]
A simple example
[Figure: visualisation of the 2D input space and the neuron weights]
Another example
Dimension reduction
Adaptive resolution
Application of SOFM
Examples (input)
Projections of data
[Plot: data projected onto the first two principal components, pca1 and pca2]
Taylor, Micolich, & Jonas (1999). Fractal analysis of Pollock's drip paintings. Nature, 399, 422.
[Plot: fractal dimension as a function of creation date]
Vincent van Gogh paints Van Gogh
Claude-Émile Schuffenecker paints Van Gogh
Sunflowers
Is it made by Van Gogh or by Schuffenecker?
Approach
Select appropriate features (skipped here, but very important!)
Apply neural networks
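In sketch form, the two-step approach, with a nearest-prototype classifier standing in for the actual network; the feature values and all names are purely hypothetical placeholders, since the real features are "skipped here, but very important!":

```python
# Hypothetical feature vectors extracted from painting patches
# (values invented for illustration only).
van_gogh_features = [[0.82, 0.31], [0.79, 0.35], [0.85, 0.28]]
schuffenecker_features = [[0.61, 0.52], [0.58, 0.49], [0.64, 0.55]]

def prototype(vectors):
    """Mean feature vector as a class prototype."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(x, protos):
    """Assign x to the class with the most similar prototype."""
    def dist2(p):
        return sum((pi - xi) ** 2 for pi, xi in zip(protos[p], x))
    return min(protos, key=dist2)

protos = {"van Gogh": prototype(van_gogh_features),
          "Schuffenecker": prototype(schuffenecker_features)}
print(classify([0.80, 0.33], protos))  # -> "van Gogh"
```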
van Gogh
Schuffenecker
Training Data
Results
Results, cont.
A major caveat