
S. Mandayam / ANN / ECE Dept. / Rowan University

Artificial Neural Networks


ECE.09.454/ECE.09.560
Fall 2010

Lecture 1
September 13, 2010
Shreekanth Mandayam
ECE Department
Rowan University

http://engineering.rowan.edu/~shreek/fall10/ann/

Video: http://www.youtube.com/watch?v=gy5g33S0Gzo (March 17, 2010)


Plan
• What is artificial intelligence?
• Course introduction
• Historical development – the neuron model
• The artificial neural network paradigm
• What is knowledge? What is learning?
• The Perceptron
• Widrow-Hoff Learning Rule
• The “Future”…?

Artificial Intelligence

Systems that think like humans:
• Cognitive modeling

Systems that think rationally:
• Logic

Systems that act like humans:
• Natural language processing
• Knowledge representation
• Machine learning

Systems that act rationally:
• Decision-theoretic agents

Course Introduction
• Why should we take this course?
• PR, Applications
• What are we studying in this course?
• Course objectives/deliverables
• How are we conducting this course?
• Course logistics
• http://engineering.rowan.edu/~shreek/fall10/ann/

Course Objectives
At the conclusion of this course, the student will be able to:
• Identify and describe engineering paradigms for knowledge and learning
• Identify, describe, and design artificial neural network architectures for simple cognitive tasks

Biological Origins

History/People

1940s  Turing               General problem solver, the “Turing test”
1940s  Shannon              Information theory
1943   McCulloch and Pitts  Math of neural processes
1949   Hebb                 Learning model
1959   Rosenblatt           The “Perceptron”
1960   Widrow               LMS training algorithm
1969   Minsky and Papert    Perceptron deficiency
1985   Rumelhart            Feedforward MLP, backpropagation
1988   Broomhead and Lowe   Radial basis function neural nets
1990s                       VLSI implementations
1997                        IEEE 1451
2000   Honda                Asimo robot


Neural Network Paradigm

Stage 1: Network Training
Present examples to the artificial neural network and indicate the desired outputs; training determines the synaptic weights, which store the network's “knowledge”.

Stage 2: Network Testing
Present new data to the trained artificial neural network; it produces predicted outputs.

ANN Model

Input vector x = [x1, x2, x3]^T → Artificial Neural Network → output vector y = [y1, y2, y3]^T

The network realizes a complex nonlinear function:

f(x) = y

This mapping f constitutes the network's “knowledge”.

Popular I/O Mappings

Single output:       x → ANN → y
Coder:               x → ANN → y1, y2, …, yc
1-out-of-c selector: x → ANN → y1, y2, …, yc (only one output active)
Associator:          x → ANN → y

The Perceptron

Inputs x1, x2, …, xm are scaled by the synaptic weights wk1, wk2, …, wkm and summed:

uk = Σj wkj xj    (linear combiner output)
vk = uk + bk      (induced field, with bias bk)

The induced field is passed through an activation/squashing function φ(·) to give the output:

yk = φ(vk)
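The computation above can be sketched in Python (a minimal illustration, not from the course materials; the input, weight, and bias values are made up for the example):

```python
def perceptron_forward(x, w, b, phi):
    """Single-neuron forward pass: induced field v_k = sum_j w_kj * x_j + b_k,
    then output y_k = phi(v_k)."""
    v = sum(wj * xj for wj, xj in zip(w, x)) + b  # induced field v_k
    return phi(v)                                 # output y_k

# Made-up example with a threshold activation
threshold = lambda v: 1 if v >= 0 else 0
x = [1.0, -2.0, 0.5]   # inputs x_1..x_m
w = [0.4, 0.1, -0.6]   # synaptic weights w_k1..w_km
b = 0.2                # bias b_k
y = perceptron_forward(x, w, b, threshold)  # v = 0.1, so y = 1
```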

Activation Functions

• Threshold: φ(v) = 1 if v ≥ 0, and 0 otherwise
• Sigmoid: φ(v) = 1 / (1 + e^(−av)), where a is a slope parameter
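As a sketch (not course-supplied code; the slope parameter a is an assumption), the threshold and sigmoid functions can be written in Python:

```python
import math

def threshold(v):
    """Threshold (hard limiter): 1 if the induced field is non-negative, else 0."""
    return 1 if v >= 0 else 0

def sigmoid(v, a=1.0):
    """Logistic sigmoid: smoothly squashes v into (0, 1);
    as the slope parameter a grows, it approaches the threshold function."""
    return 1.0 / (1.0 + math.exp(-a * v))
```

Unlike the threshold, the sigmoid is differentiable everywhere, which matters for gradient-based training.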

“Learning”
Mathematical model of the learning process:

Initialize, iteration (0):  x → [w]0 → y(0)
Iteration (1):              x → [w]1 → y(1)
⋮
Iteration (n):              x → [w]n → y(n) = d  (the desired output)

The weight matrix [w] is adjusted at each iteration until the network output matches the desired output d.


Error-Correction Learning

At iteration n, the inputs x1(n), …, xm(n) are weighted by the synaptic weights wk1(n), …, wkm(n) and summed with the bias bk to form the induced field vk(n), which the activation/squashing function φ(·) maps to the output yk(n). Comparing the output with the desired output dk(n) gives the error signal:

ek(n) = dk(n) − yk(n)
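A single error-correction step for one neuron can be sketched as follows (an illustration only; the learning-rate parameter eta and the choice of activation are assumptions, not from the slide):

```python
def error_correction_step(x, w, b, d, phi, eta=0.1):
    """One error-correction step: compute the output y_k(n), form the error
    signal e_k(n) = d_k(n) - y_k(n), and move each weight by eta * e_k(n) * x_j(n)."""
    v = sum(wj * xj for wj, xj in zip(w, x)) + b   # induced field v_k(n)
    y = phi(v)                                     # output y_k(n)
    e = d - y                                      # error signal e_k(n)
    w_new = [wj + eta * e * xj for wj, xj in zip(w, x)]
    b_new = b + eta * e                            # bias updated as a weight on a +1 input
    return w_new, b_new, e
```

Repeating this step drives ek(n) toward zero; the Widrow-Hoff rule later in the lecture is exactly this update with a signum activation.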

Learning Tasks
• Pattern Association
• Pattern Recognition
• Function Approximation
• Filtering

Classification
[Figure: two-class problem in the (x1, x2) plane; a decision boundary (DB) separates class 1 from class 2]

Perceptron Training
Widrow-Hoff Rule (LMS Algorithm)

1. Initialize: w(0) = 0; n = 0
2. Compute the output: y(n) = sgn[w^T(n) x(n)]
3. Update the weights: w(n+1) = w(n) + η[d(n) − y(n)] x(n), where η is the learning-rate parameter and d(n) the desired output
4. Set n = n + 1 and return to step 2

Matlab Demo
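The rule above can be sketched in Python (an illustration only, not a reproduction of the Matlab demo; the learning rate, toy AND dataset, bipolar targets, and epoch limit are all assumptions):

```python
def train_perceptron(samples, eta=0.1, epochs=100):
    """Widrow-Hoff / LMS-style perceptron training:
    y(n) = sgn(w^T(n) x(n)),  w(n+1) = w(n) + eta * (d(n) - y(n)) * x(n).
    `samples` is a list of (x, d) pairs with d in {-1, +1}; a constant 1.0
    input is appended to each x so the bias is learned as an extra weight."""
    sgn = lambda v: 1 if v >= 0 else -1
    w = [0.0] * (len(samples[0][0]) + 1)   # w(0) = 0
    for _ in range(epochs):
        mistakes = 0
        for x, d in samples:
            xa = list(x) + [1.0]           # augment with bias input
            y = sgn(sum(wi * xi for wi, xi in zip(w, xa)))
            if y != d:                     # update only on misclassification
                w = [wi + eta * (d - y) * xi for wi, xi in zip(w, xa)]
                mistakes += 1
        if mistakes == 0:                  # converged on this dataset
            break
    return w

# Linearly separable toy problem: logical AND with bipolar targets
data = [([0, 0], -1), ([0, 1], -1), ([1, 0], -1), ([1, 1], 1)]
w = train_perceptron(data)
```

For linearly separable data such as this, the perceptron convergence theorem guarantees the loop terminates with every sample classified correctly.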

The Age of Spiritual Machines: When Computers Exceed Human Intelligence
by Ray Kurzweil | Penguin paperback | ISBN 0-14-028202-5

Summary
