
Neurocomputing

Taxonomy of NNHW

Neurocomputing
Prof. Dr.-Ing. Andreas König
Institute of Integrated Sensor Systems ISE

Dept. of Electrical Engineering and Information Technology


Technische Universität Kaiserslautern

Fall Semester 2006

© Andreas König Slide 6-1

Neurocomputing
Taxonomy of NNHW

Course Contents:
1. Introduction
2. Review of Artificial Neural Network models relevant for
implementation and analysis of the required computational steps
3. Analysis of typical ANN-applications with regard to computational
requirements
4. Aspects of simulation of ANNs and systems
5. Efficient VLSI-implementation by simplification of the original
algorithms
6. Derivation of a taxonomy of neural hardware
7. Digital neural network hardware
8. Analog and mixed-signal neural network hardware
9. Principles of optical neural network hardware implementation
10. Evolvable hardware overview
11. Summary and Outlook

© Andreas König Slide 6-2

Neurocomputing
Chapter Contents Taxonomy of NNHW

6. Derivation of a taxonomy of neural hardware


6.1 Criteria for classification of neural hardware
6.2 Brief survey of VLSI design styles and techniques to implement
neural network hardware
6.3 Dedicated performance measures for neural network hardware
6.4 Taxonomy of neural network hardware and selection for presentation
6.5 Summary

© Andreas König Slide 6-3

Neurocomputing
Criteria for Classification Taxonomy of NNHW

➢ A plethora of implementation activities has been observed for roughly two
  decades, owing to the widespread interest in ANN
➢ The research was driven by a diversity of motivations (see Chapter 1)

      A taxonomy is required for a survey and for a
      potential choice for an application

➢ Suitable criteria for a NN-HW taxonomy:
  • Implementation Technology
  • Biological Evidence
  • Cascadability of Design
  • Mapping of network onto hardware
  • Embedding of design/architecture
  • Dedication of design/architecture
  • Performance parameters
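As a compact illustration (not part of the original slides), the criteria listed above can be captured in a small record type, one entry per surveyed design; all type names, field names and value sets below are assumptions chosen only for this sketch.

/* Hypothetical classification record following the taxonomy criteria above. */
typedef enum { TECH_DIGITAL, TECH_ANALOG, TECH_MIXED_SIGNAL, TECH_OPTICAL } technology_t;
typedef enum { BIO_NONE, BIO_LOW, BIO_HIGH }                                bio_evidence_t;
typedef enum { CASCADE_NONE, CASCADE_LIMITED, CASCADE_FULL }                cascadability_t;
typedef enum { MAP_SINGLE_PE, MAP_NEURON_PARALLEL, MAP_SYNAPSE_PARALLEL }   mapping_t;
typedef enum { DED_GENERAL_PURPOSE, DED_SPECIAL_PURPOSE, DED_DEDICATED }    dedication_t;

typedef struct {
    const char     *name;               /* e.g., "CNAPS" (illustrative)        */
    technology_t    technology;         /* implementation technology           */
    bio_evidence_t  biological_evidence;
    cascadability_t cascadability;
    mapping_t       mapping;            /* mapping of network onto hardware    */
    dedication_t    dedication;
    int             on_chip_learning;   /* performance parameters (selection)  */
    double          peak_cps;           /* connections per second (recall)     */
    double          peak_cups;          /* connection updates per second       */
} nn_hw_entry_t;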

© Andreas König Slide 6-4

Neurocomputing
Criteria for Classification Taxonomy of NNHW
➢ Implementation Technology
  • Digital
    + Mature, widely available technology
    + Short development time at reasonable cost
    + Arbitrary accuracy, high SNR
    + Uncomplicated weight storage
    - Potentially large, power-hungry, speed limitations
  • Analog
    + Several orders of magnitude faster than digital designs
    + Exploits physics of circuits/devices for compact NN implementation
    + Real world is analog
    - Low SNR/accuracy (6-8 bit without special design considerations)
    - Tricky and time-consuming to design; cascading difficult;
      tolerances and drift
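To make the accuracy/SNR contrast concrete: a digital design can choose its weight resolution freely, whereas an analog cell is typically limited to an effective 6-8 bit. The following minimal sketch (my own illustration, not from the slides; range, rounding and example values are assumptions) emulates b-bit fixed-point weight storage:

#include <math.h>
#include <stdio.h>

/* Quantize a weight in [-1, 1) to b-bit two's-complement fixed point. */
static double quantize_weight(double w, int bits)
{
    double step = 2.0 / (double)(1 << bits);   /* LSB size for the range [-1, 1)  */
    double q = round(w / step) * step;
    if (q > 1.0 - step) q = 1.0 - step;        /* clip to the representable range */
    if (q < -1.0)       q = -1.0;
    return q;
}

int main(void)
{
    double w = 0.3141;                         /* arbitrary example weight */
    for (int b = 4; b <= 16; b += 4)
        printf("b=%2d bits: w_q=% .6f  error=% .1e\n",
               b, quantize_weight(w, b), fabs(w - quantize_weight(w, b)));
    return 0;
}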
© Andreas König Slide 6-5

Neurocomputing
VLSI-Design styles in Brief Taxonomy of NNHW

➢ Short introduction to common VLSI design approaches applicable to
  NN-HW (extracted from design training materials):

© Andreas König Slide 6-6

  [Slides 6-7 to 6-17: "VLSI-Design styles in Brief" figure slides taken from
   the design training materials; the graphics are not included in this text
   version.]

Neurocomputing
VLSI-Design styles in Brief Taxonomy of NNHW

➢ This brief introduction to basic design issues serves to aid understanding of
  the main concepts and features of the NN-HW discussed in the following
➢ The survey focuses on the technological and design aspects of the time at
  which the discussed NN-HW architectures were conceived!
© Andreas König Slide 6-18

Neurocomputing
VLSI-Design styles in Brief Taxonomy of NNHW

• Optical NN-HW implementation:
  + Exploits physics of circuits for compact NN implementation
  + No interconnectivity problem; exploits the third dimension
  + Extremely high processing power (time/space bandwidth)
  - Not yet a well-established technology; optoelectronics most promising

© Andreas König Slide 6-19

Neurocomputing
VLSI-Design styles in Brief Taxonomy of NNHW

➢ Basic ANN HW building blocks:
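The block diagram belonging to this slide is not available in this text version; as a stand-in, the following minimal sketch shows what the basic building blocks (multiplier, accumulator, activation unit) compute for a single neuron. Function names and the choice of a sigmoid are illustrative assumptions.

#include <math.h>

/* One neuron as realized by the basic HW building blocks: a multiplier and an
 * accumulator form the MAC datapath, followed by an activation unit. */
static double neuron(const double *x, const double *w, int n, double bias)
{
    double acc = bias;                  /* accumulator register              */
    for (int i = 0; i < n; ++i)
        acc += w[i] * x[i];             /* multiply-accumulate per synapse   */
    return 1.0 / (1.0 + exp(-acc));     /* activation unit (sigmoid example) */
}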

© Andreas König Slide 6-20

Neurocomputing
VLSI-Design styles in Brief Taxonomy of NNHW

➢ Mapping of network onto hardware (see the sketch below):
  • The complete network is mapped onto a single (complex/fast) physical PE:
    loss of inherent parallelism, high flexibility
  • Each neuron is mapped onto a physical PE; synapses are time-multiplexed
  • Each synapse/neuron is mapped onto a physical PE

➢ In addition to neuron and synapse parallelism, other parallel options can be
  and have been considered, e.g., network parallelism for different parallel
  copies of a neural network on parallel computers
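A hedged sketch of the layer computation that the three mapping options distribute differently; the comments indicate which loop would be unrolled onto physical PEs. All identifiers are illustrative, not taken from the slides.

/* Layer of M neurons with N inputs each.
 * Option 1 (single PE):        both loops run sequentially on one fast PE.
 * Option 2 (neuron-parallel):  the outer loop is unrolled onto M physical PEs;
 *                              the synapses of each neuron are time-multiplexed.
 * Option 3 (synapse/neuron-parallel): both loops map onto M*N multipliers plus
 *                              adder trees, i.e., fully parallel.              */
void layer_forward(const double x[], const double w[], double o[], int N, int M)
{
    for (int j = 0; j < M; ++j) {        /* neuron loop  */
        double acc = 0.0;
        for (int i = 0; i < N; ++i)      /* synapse loop */
            acc += w[j * N + i] * x[i];
        o[j] = acc;                      /* activation omitted for brevity */
    }
}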
© Andreas König Slide 6-21

Neurocomputing
VLSI-Design styles in Brief Taxonomy of NNHW

➢ Biological evidence:
  • Designs closely mimicking biological structures, e.g., silicon retina or
    cochlea chips (neuromorphic designs)
  • Designs performing low-level vision tasks, e.g., CNNs
  • Designs abstracting strongly from biological structures (BP, BM)

➢ Cascadability of design:
  • No cascadability, single-chip solution
  • Limited cascadability, e.g., extendible in the layer or neuron dimension
  • Full cascadability, e.g., extendible in the layer, neuron, and/or synapse
    dimension
  [Figure: cascaded arrangement with inputs x1 ... xM, xM+1 ... x2M and
   outputs o1 ... oM, oM+1 ... o2M]
  Consequences for signal routing on chip/pads/pins/board:
  • potential bandwidth limitation
  • (analog) signal integrity
  • connectivity limitations
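A rough, illustrative calculation of the routing/bandwidth consequence of cascading in the neuron dimension; all numbers are assumed for this example only.

#include <stdio.h>

int main(void)
{
    /* Each cascaded chip must export its M neuron outputs. With M outputs of
     * b bits at an update rate f, the required off-chip bandwidth is M*b*f
     * bit/s; time-multiplexing over a few pads trades pin count for speed.  */
    double M = 256, b = 8, f = 1e6;              /* outputs, bits, updates/s   */
    double bw_bits = M * b * f;                  /* total output bandwidth     */
    int    pads    = 8;                          /* assumed number of I/O pads */
    printf("off-chip bandwidth: %.2e bit/s\n", bw_bits);            /* ~2.0e9 */
    printf("per pad: %.2e bit/s with %d pads\n", bw_bits / pads, pads);
    return 0;
}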
© Andreas König Slide 6-22

Neurocomputing
VLSI-Design styles in Brief Taxonomy of NNHW

➢ Embedding of design:
  • Physical appearance/requirements (server, desktop, board, chip, MEMS)
  • Software interface to the application (programming language, libs, real-time)
  • Hardware interface to the application (bus interface, protocol, conventions)

➢ Dedication of design:
  • General-purpose computer: NN mapping onto versatile, massively
    parallel computers
  • Special-purpose neurocomputer: implementation of specialized
    computer architectures from off-the-shelf components
  • Dedicated neural network hardware: design of novel chips and systems
    from the device level upwards in (leading-edge) microelectronics
    technology

© Andreas König Slide 6-23

Neurocomputing
VLSI-Design styles in Brief Taxonomy of NNHW

➢ Performance parameters:
  • Network type(s) supported
  • Computational speed
  • Computation accuracy
  • Weights fixed/programmable/trainable
  • Architecture with/without on-chip learning/on-chip memory

➢ Measures of computation speed:
  • Forward or recall phase: Connections Per Second (CPS)
  • Backward or learning phase: Connection Updates Per Second (CUPS);
    a worked example follows below

  • However, such a count of raw computation speed is not an objective
    basis for comparison.
  • It neglects computation accuracy and algorithmic performance
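A small worked example (all figures assumed, not from the slides) of how CPS and CUPS would be quoted for a fully connected layer:

#include <stdio.h>

int main(void)
{
    /* Assumed example: a fully connected layer with 256 inputs and 128 neurons
     * has 256*128 = 32768 connections. With 100 us per forward pass and
     * 400 us per weight-update pass:                                         */
    double connections = 256.0 * 128.0;
    double t_recall    = 100e-6;   /* s per forward pass (assumed) */
    double t_learn     = 400e-6;   /* s per update pass  (assumed) */

    printf("CPS  = %.2e connections/s\n", connections / t_recall);  /* ~3.3e8 */
    printf("CUPS = %.2e updates/s\n",     connections / t_learn);   /* ~8.2e7 */
    return 0;
}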
© Andreas König Slide 6-24

Neurocomputing
VLSI-Design styles in Brief Taxonomy of NNHW

➢ Improved performance measures:

  • Inclusion of accuracy in the assessment (Keulen 94)

    Connection Primitives Per Second (CPPS), where bi and bw denote the bit
    widths of the inputs and weights:

      CPPS = bi × bw × CPS

    Includes implementation expense

  • Inclusion of efficiency in the assessment (Cornu & Ienne 94)

    Actual speed-up of the neurocomputer vs. a reference machine:

      S_M(E0) = (τ_ref / τ_nnhw) · A_M(E0) = (τ_ref · k_ref(E0)) / (τ_nnhw · k_nnhw(E0))

    Here, A_M(E0) denotes the algorithmic efficiency and E0 a convergence metric

    Includes network dependence (implicit implementation expense)
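A hedged worked example of both measures, assuming bi and bw are the input and weight bit widths, τ the time per training iteration, k(E0) the number of iterations needed to reach the convergence level E0, and A_M(E0) = k_ref(E0)/k_nnhw(E0); these readings are inferred from the reconstructed formula above, and all numbers are invented for illustration.

#include <stdio.h>

int main(void)
{
    /* CPPS (Keulen 94): weight the raw speed by the input/weight bit widths. */
    double cps = 1.0e9, bi = 8.0, bw = 16.0;          /* assumed values */
    printf("CPPS = %.2e\n", bi * bw * cps);           /* 1.28e11        */

    /* Effective speed-up (Cornu & Ienne 94): the NN-HW iterates faster
     * (smaller tau) but, e.g. due to reduced precision, may need more
     * iterations k to reach the same convergence level E0.              */
    double tau_ref = 2.0e-3, k_ref = 1000.0;          /* reference machine */
    double tau_hw  = 2.0e-5, k_hw  = 1500.0;          /* neurocomputer     */
    double A = k_ref / k_hw;                          /* algorithmic efficiency */
    double S = (tau_ref * k_ref) / (tau_hw * k_hw);   /* actual speed-up        */
    printf("A_M(E0) = %.2f, S_M(E0) = %.1f\n", A, S); /* 0.67 and 66.7          */
    return 0;
}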


© Andreas König Slide 6-25

Neurocomputing
Taxonomy of NNHW Taxonomy of NNHW

➢ Ordered presentation of key activities and preselection for presentation:

  [Taxonomy diagram: NN-HW implementations ordered by Technology (optical,
   analog, digital), Biological evidence (none ... high), Dedication
   (general-purpose, special-purpose, dedicated designs), Flexibility
   (low ... high) and Accuracy (fixed/float). Systems placed in the diagram
   include: associative memory, Boltzmann machine, Hopfield/Backprop,
   Kyuma's Artificial Retina, Mead's Silicon Retina/cochlea, vision chips,
   CNNs of Chua, Harrer and Espejo, NeuroClassifier of Masa, Intel's ETANN,
   ANNA/NET32K, 3D ANN, Silicon Neuron, Seeker, NESPINN, SPIKE128, ParSpike,
   NeuroPipe/MASPINN, DSP/Transputer, CM2/MasPar, ICSI RAP, HANNIBAL,
   ETH MUSIC, Fuj. SANDY/8, ARIANE, ARAMYS, Single Chip Various NN, NBS1220,
   CNAPS, PAN, SYNAPSE-X, Ni1000, SPERT/SAND, NEURO4, ZISC036, LNeuro-X,
   SNAP, HAMS, KOHSIP.]
© Andreas König Slide 6-26

Neurocomputing
Summary Taxonomy of NNHW

➢ The presented taxonomy serves to give an ordered survey of the plethora
  of implementation activities

➢ Criteria were derived for this purpose and discussed

➢ Representative implementations and systems will be selected from the
  taxonomy in the following chapters

➢ The criteria employed for the taxonomy also serve as a kind of shopping
  guide for the selection of available NN-HW with regard to application
  needs and specified requirements

      In the next step, the most relevant and instructive architectures and
      implementations of NN-HW will be presented, with technology as the
      key criterion, in the order digital, analog, and optical implementations

© Andreas König Slide 6-27

