Alex Lecture2 BrainsAndComputers 10oct2019

Brains and computing (26 Sep and 10 Oct) – Testmybrain

Metaphors, Theories, Models
• Knowledge of brain needed for good theories of how it would do something
• Understand role of metaphor in understanding brain
• Understand what computational modeling is

Connectionist neural networks
• What can one or two neurons do?
  – Simple enough to understand fully: reflex, Pavlovian learning
  – How connectionist networks are a simplification of real neural functioning
• What can several neurons do?
  – Connectionist network for word recognition and memory (birds). Understand how they work.
• Compare to a computer
  – Naïve computer-style box-and-arrow psychological theory. Understand how network functioning differs.
The computer is a misleading metaphor.
Simulating lil’ brains tutorial

How can a bunch of neurons do something smart?


a TUTORIAL
Learning Outcomes
– Know what a connectionist neural network is
  • J. Harris: how neurons can mediate conditioning, motor plans, map memory
  • Holcombe: first two lectures
  • I. Harris: semantic connectionist memory
– Understand how a single model neuron works
– Understand how connecting these dumb units can yield somewhat-smart behavior
  • How they can learn a new memory
    – No new 'files' created; instead, a new pattern of activity
  • How they can retrieve a learned pattern
    – "Content addressable": if partial content is provided, the network activates related content
  • How they can mediate approach behavior
  • How they can accomplish XOR (if A or B, do it; but if A and B, don't do it) – see the sketch below
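As an illustration of that last point, here is a minimal toy sketch (my own example, not the tutorial's code) of how threshold units wired together can compute XOR; the weights and thresholds are assumptions chosen simply to make the logic work.

# Toy XOR from threshold units: "if A or B, do it; but if A and B, don't do it".
def threshold_unit(inputs, weights, threshold):
    # Fire (output 1) only if the weighted sum of inputs exceeds the threshold.
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total > threshold else 0

def xor_network(a, b):
    or_unit = threshold_unit([a, b], weights=[1.0, 1.0], threshold=0.5)    # fires if A or B
    and_unit = threshold_unit([a, b], weights=[1.0, 1.0], threshold=1.5)   # fires only if A and B
    # Output unit is excited by the OR unit and inhibited by the AND unit.
    return threshold_unit([or_unit, and_unit], weights=[1.0, -1.0], threshold=0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_network(a, b))   # prints 0, 1, 1, 0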

Simulating lil' brains tutorial:
neural network model = connectionist model
(Image licensed as a work of the US federal government)

Neurons are pictured as circles, while synapses or synapse-like elements are modeled by "weights" or "connections." The number inside a circle is the neuron's "activation," perhaps its firing rate or voltage potential.

The synaptic terminal is represented by the semicircle. Its color indicates the connection strength:
Red = excitatory connection
Blue = inhibitory connection

Synaptic strength = connection strength = "weight"
Simulating lil' brains tutorial: simplified neural network

Linear activation rule: simply pass on the overall votes/stimulation (see the sketch below).

Linear is not enough for decisions!
You won't be "71% accepted" to honours.
This morning, you didn't 60% get up and go to class and 40% stay in bed.
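A minimal sketch of the linear rule with made-up activations and weights: the unit just passes on the weighted sum of the "votes" it receives, which is why it cannot make an all-or-none decision by itself.

def linear_unit(activations, weights):
    # Linear activation rule: output is simply the weighted sum of the inputs.
    return sum(a * w for a, w in zip(activations, weights))

# Two excitatory inputs (positive weights) and one inhibitory input (negative weight).
print(linear_unit([50, 30, 20], [1.0, 0.5, -0.5]))   # 50 + 15 - 10 = 55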

Simulating lil' brains tutorial: simplified neural network

Threshold activation rule for D (see the sketch below):
If stimulation > threshold, activate.

Threshold = 60
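A sketch of the threshold rule for unit D, using the slide's threshold of 60; the stimulation values are illustrative.

THRESHOLD = 60

def unit_d(stimulation):
    # Threshold activation rule: activate only if stimulation exceeds the threshold.
    return 1 if stimulation > THRESHOLD else 0

print(unit_d(55))   # 0: below threshold, D stays silent
print(unit_d(71))   # 1: above threshold, D fires -- an all-or-none decision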

Naïve theories
• Some things attention can do:
  – Enhance
  – Inhibit
  – Engage
  – Disengage
  – Move

[Figure: traditional box-and-arrow psychological theory of attention, with boxes labelled Alert, Interrupt, Localize, Disengage (parietal), Move, Engage, and Inhibit]

1. Different parts of brains do different attentional functions
Understanding neural implementation (model not on exam)

[Figure: left, the traditional box-and-arrow psychological theory (Alert, Interrupt, Localize, Disengage (parietal), Move, Engage, Inhibit); right, a connectionist network with V1, Spatial1/Spatial2 and Object1/Object2 units, less-retinotopic layers, top-down activation, and lateral inhibition-competition]

1. Different parts of brains do different attentional functions? (like if you wrote a conventional software program)
2. Functions are distributed? (more likely with lots of interconnected processing units)

Understanding neural implementation

2. Functions are distributed? (model not on exam)
• Neurons representing the same location mutually excite each other
• Neurons representing different locations inhibit each other
• Neurons that do recognition also "do" attention, via lateral inhibition
• Lateral inhibition causes units on all layers representing a single object to eventually all become active
• Posner cuing: the cue pre-activates the location, helping the subsequent target win the competition sooner (toy sketch below)

[Figure: connectionist network with V1, Spatial1/Spatial2 and Object1/Object2 units, less-retinotopic layers, top-down activation, and lateral inhibition-competition]
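A toy sketch (not the model pictured on the slide, and not on the exam) of lateral inhibition as a competition between two location units, showing why a cue that pre-activates a location lets its target win the competition sooner; all numbers are assumptions chosen for illustration.

def compete(act1, act2, inhibition=0.4, max_steps=20):
    # Each unit suppresses the other in proportion to its own activation.
    for step in range(1, max_steps + 1):
        new1 = max(0.0, act1 - inhibition * act2)
        new2 = max(0.0, act2 - inhibition * act1)
        act1, act2 = new1, new2
        if act2 == 0.0:
            return step   # location 1 has won the competition
    return max_steps

print(compete(act1=1.0, act2=0.9))   # uncued: location 1 wins after 4 steps
print(compete(act1=1.5, act2=0.9))   # cued (pre-activated): wins after only 2 steps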
O’Reilly, R. C., Munakata, Y., Frank, M. J., Hazy, T. E., et al. (2012).
Computational Cognitive Neuroscience. Free: http://ccnbook.colorado.edu


Irina Harris’ slide


Parallel distributed processing (PDP) network

[Figure: the category "bird" emerges from the overlap among all the stored instances, such as canary]

Can a canary fly? Fast response.
Does an emu have feathers? Medium-fast response.
Can an emu run? Slow response.

Most birds can fly but not necessarily run, so you don't have a strong connection from your bird knowledge to running. (Toy sketch below.)
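An illustrative sketch of why verification speed tracks connection strength in a PDP-style network; the strengths and the inverse-strength timing rule are toy assumptions, not part of Irina Harris's model.

# Stronger concept-to-feature connections push the answer unit over threshold sooner.
connection_strength = {
    ("canary", "fly"):   0.9,   # strong: directly stored for this familiar instance
    ("emu", "feathers"): 0.6,   # medium: supported by overlap with other birds
    ("emu", "run"):      0.3,   # weak: few birds run
}

def response_time(concept, feature, threshold=1.0):
    # Crude assumption: time to reach threshold is inversely proportional to strength.
    return threshold / connection_strength[(concept, feature)]

for concept, feature in connection_strength:
    print(concept, feature, round(response_time(concept, feature), 2))   # fast, medium, slow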

canary: can fly, has feathers, yellow
magpie: can fly, has feathers, black & white

Brains and computers (26 Sep and 10 Oct) – Testmybrain

Metaphors, Theories, Models
• More than one way to skin a cat
• Knowledge of brain needed for good theories of how it would do something
• Understand role of metaphor in understanding brain
• Understand what computational modeling is

Connectionist neural networks
• What can one or two neurons do?
  – Simple enough to understand fully: reflex, Pavlovian learning
  – How connectionist networks are a simplification of real neural functioning
• What can several neurons do?
  – Connectionist network for word recognition and memory (birds). Understand how they work.
• Compare to a computer
  – Naïve computer-style box-and-arrow psychological theory. Understand how network functioning differs.

Why compare to a computer?

The default way we think of doing something is how computers do it, because that's the only way we know to create something that does something. We don't fully understand how the brain does things, but we have a sense of its different style.

To understand something, it helps to have a contrast. If I just listed the ways the brain processes things and creates behavior, you might nod along, but you wouldn't understand it as well as if I gave you a contrasting way of doing it.
From next week’s tutorial

• Stores more than one grocery list using the same set of units
• Because the network is small, interference rapidly sets in

List 1: milk, tofu, lentils
List 2: milk, chicken, cheese

Shared units (the network): interference, but generalisation
Separate storage: no interference, but no generalisation

(Toy sketch below.)
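A small sketch of how storing two lists in one set of connection weights yields both generalisation and interference; the Hebbian (Hopfield-style) storage rule and the item codings are illustrative assumptions, not next week's tutorial code.

import numpy as np

items = ["milk", "tofu", "chicken", "lentils", "cheese"]
list1 = np.array([1, 1, 0, 1, 0])   # milk, tofu, lentils
list2 = np.array([1, 0, 1, 0, 1])   # milk, chicken, cheese

# Hebbian learning: both lists are superimposed on the SAME weight matrix.
W = np.outer(list1, list1) + np.outer(list2, list2)
np.fill_diagonal(W, 0)

# Cue the network with "milk" alone and see what comes back.
cue = np.array([1, 0, 0, 0, 0])
print(dict(zip(items, W @ cue)))
# The shared "milk" unit pulls up items from BOTH lists equally:
# the overlap generalises, but the two lists now interfere at recall.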

Network for recognition vs. computer
Where is the computing? Where is the memory?
• In the network, computation and memory are intertwined.
• In the computer, they are almost entirely separate.

Memory ("content addressable" retrieval sketch below)
Connectionist                                   Computer
Individual concept distributed widely           Memories localised
Content-addressable                             Time-consuming search
Prone to interference                           No interference
Generalises naturally. Wisdom?                  No built-in generalisation
"Soft" capacity limit (interference             Once full, it's full
  as memories accumulate)
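A sketch of "content addressable" retrieval by pattern completion, using the canary/magpie features from the earlier slide; the +1/-1 coding and Hebbian weights are assumptions made for illustration.

import numpy as np

features = ["can fly", "has feathers", "yellow", "black/white"]
canary = np.array([ 1,  1,  1, -1])
magpie = np.array([ 1,  1, -1,  1])

# Store both birds in one weight matrix (Hebbian, no self-connections).
W = np.outer(canary, canary) + np.outer(magpie, magpie)
np.fill_diagonal(W, 0)

# Partial content as the cue: we only know it has feathers and is yellow.
cue = np.array([0, 1, 1, 0])
completed = np.sign(W @ cue + cue)   # known features stay clamped; weights fill in the rest
print(dict(zip(features, completed)))   # recovers the full canary pattern: [1, 1, 1, -1]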

Why is it that our brain never seems to be full?

Network memory
• Individual memory distributed across thousands or millions of synapses
• Each new memory may degrade many of your old memories
• But each also builds on them, strengthening elements in common while degrading others (interference), so
• You never hit a limit; rather, you're constantly degrading/losing old memories while making new ones
• We also make new synapses, which helps
• We make very few new neurons (too few to help significantly)
computer vs. brain
• Hardware -> software vs. physiology -> psychology
• Basic computation unit: transistor vs. neuron
• Communication between units: electrical wires vs. synapses, transmitters, modulators, hormones
• Speed of messages: ~300,000 km per second vs. 7-430 km per hour
• Number of units: 55 million transistors (Pentium 4) vs. 80 billion neurons
• Time for a single computation: 0.0000000005 sec (2 GHz) vs. 0.005 sec (~200 Hz)
• Memory and manipulation: kept separate vs. storage integrated, distributed memory
• Robustness: catastrophic failure from minor injury vs. continually adaptive, graceful degradation
• Energy needed: 45 watts vs. 20 watts

Memory: superior in computers, but works very differently.

Chess:

Cooking dinner, setting the table:
computer vs. brain: chess skill

Computer:
• almost entirely serial
• Deep Blue evaluated between 100 million and 200 million chess positions per second
• During matches with Kasparov, it averaged 126 million positions per second

Brain:
• massively parallel
• pattern recognition?
• Work smarter, not harder!

The computer now wins, but it does the task in a different way than humans do.

“the [visual] recognition competence of a two-year-old human child remains

"early AI researchers
made a big mistake: they
thought intelligence was
stuff they found hard to
do" https://www.youtube.com/watch?v=PpuWASNRcu8

Rodney Brooks
• Math
• Chess

35
Moravec’s paradox
The discovery by artificial intelligence and robotics
researchers that high-level reasoning requires very
little computation, but sensorimotor skills require
enormous computational resources.
https://en.wikipedia.org/wiki/Moravec%27s_paradox

Easy for us, but hard for computers


We are all prodigious Olympians in perceptual and motor areas, so good that we make the difficult look
easy. Abstract thought, though, is a new trick, perhaps less than 100 thousand years old. We have not yet
mastered it. It is not all that intrinsically difficult; it just seems so when we do it. -- Hans Moravec (1988)

"it is comparatively easy to make computers exhibit adult level performance on


intelligence tests or playing checkers, and difficult or impossible to give them the
skills of a one-year-old when it comes to perception and mobility.” - Hans

Moravec (1988)

"In general, we're least aware of what our minds do best… we're more aware of
simple processes that don't work well than of complex ones that work flawlessly.” 36
Marvin Minsky

computer vs. brain
How many robots do you have at home?

September 13, 1959 Chicago Tribune
http://www.paleofuture.com
computer vs. brain

Why don't you have a robot serving your every whim?
A. Just want to pay off my HECS loan first
B. Robot engineers, computer scientists, and neuroscientists not working hard
C. Vision is hard
D. Controlling movement is hard
E. Common sense is hard
F. C, D & E

Rodney Brooks decided to stop working on things that he found hard to do (cognition), and to build intelligent machines with:

"No cognition. Just sensing and action. That is all I would build and completely leave out what traditionally was thought of as the intelligence of artificial intelligence."

The chasing exercise in tutorial is a bit like that: see what can be done when connecting the perception units nearly directly to the action units, without worrying about cognition. (Toy sketch below.)
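A toy sketch in the spirit of that chasing exercise (not the actual tutorial code): perception units wired almost directly to action units, Braitenberg-vehicle style, with no cognition in between.

def chase_step(left_sensor, right_sensor):
    # Sensors report target intensity on each side (0..1); crossed excitatory
    # connections mean the wheel opposite the target spins faster, so the
    # agent turns toward whatever it senses.
    left_motor = 1.0 * right_sensor
    right_motor = 1.0 * left_sensor
    return left_motor, right_motor

print(chase_step(left_sensor=0.2, right_sensor=0.8))   # (0.8, 0.2): turns right, toward the target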
computer vs. brain

Computer:
1. Can do millions of arithmetic operations in a second
2. Can easily store and instantly access thousands of books
3. Can't even write trashy novels
4. Can barely walk down stairs
5. Had bad visual recognition, until imitating connectionist networks

Brain:
1. Can't even do ten arithmetic operations a second
2. Has trouble memorizing even one book
3. Can write good novels
4. Can walk, hop, jump
5. Can recognize objects and people

“the [visual] recognition competence of a two-year-old human child remains

• Perception: historically a total fail for computers
  – Massively parallel processing used by animals
  – In the last 20 years, massively parallel networks ("deep learning" in convolutional networks) have yielded amazing success in object recognition
  – (They still fail at other aspects of perception)
• Movement:
  – Requires perception as well as motor aspects
  – Oscillations and dynamic interactions among units activated in parallel are how humans do it
  – Robots today move slowly. Part of the slowness is because of visual uncertainty.
https://www.youtube.com/watch?v=8P9geWwi9e0 (6 min), from 1:00-4:35
http://www.theroboticschallenge.org/overview

State of the Art

http://www.nytimes.com/2015/06/07/science/korean-robot-makers-walk-off-with-2-million-prize.html
There were numerous falls as robots collapsed through doorways, tumbled backward off short staircases and keeled over while failing to grasp a valve that they were required to turn.

http://www.nytimes.com/2009/06/09/science/09robot.html
"In 40 years there has been a lot of progress, but not progress you notice," said Nils Nilsson, a pioneering artificial intelligence researcher who was one of Shakey's designers. "A lot of the progress has been made in removing the cheats we used."

Alex Holcombe's lectures – Testmybrain

• Brains and computing (26 Sep and 10 Oct)
  – Neural networks
  – Metaphors and computational modeling
    • Usually massive simplification
    • An alternative, brute-force approach
  – How computers vs. brains do their thing
• Representation of space and parietal lobe function (14 and 17 Oct)
  – Representing space
  – Processing of stimuli the parietal patient is unaware of
