8 CS1AC16 Weightless Neural Nets
There is an easier way to teach a neural network logic problems than using McCulloch-Pitts neurons: use a memory device.
A memory has a series of locations, each of which has a unique address. In the simplest model of a
neuron, each location stores a value of 1 or 0.
Example: OR problem:
You only need 2 address lines and 4 locations: write 0 into location 00
and 1 into locations 01, 10, and 11.
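The OR example above can be sketched as a tiny RAM neuron in Python. This is a minimal sketch: the `address` helper and the list used as the RAM are illustrative choices, not part of any particular library.

```python
# A RAM "neuron" for the OR problem: 2 address lines -> 4 locations.
ram = [0] * 4  # locations 00, 01, 10, 11, all cleared

def address(bits):
    """Pack a tuple of input bits into a RAM address."""
    addr = 0
    for b in bits:
        addr = (addr << 1) | b
    return addr

# Training: write the desired output into the location each example addresses.
for inputs, target in [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]:
    ram[address(inputs)] = target

# Recall: the neuron just reads the stored value back.
print([ram[address((a, b))] for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 1]
```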
And XOR: write 1 into locations 01 and 10, and 0 into locations 00 and 11.
A single layer of these neurons can therefore solve non-linearly-separable problems – something a single layer of McCulloch-Pitts neurons cannot do.
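As a sketch, the same lookup handles XOR; only the stored values change. The 2-bit addressing below (high bit first) is an illustrative choice.

```python
# XOR as a RAM neuron: 1 at addresses 01 and 10, 0 at 00 and 11.
ram = [0, 1, 1, 0]  # indexed by the 2-bit address (a << 1) | b

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", ram[(a << 1) | b])
```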
In these examples a memory with n address lines is a neuron with n inputs. Training it involves writing
1 into the locations addressed by the examples in the training set. Recall then simply reads the stored
values back from those cells.
This is the basis of weightless neural nets, also called n-tuple or RAM-based networks.
They are often used for image recognition:
o Get an image
o Convert it to black and white
o This gives a set of 1s and 0s in a rectangular grid
o Sample n bits at a time and use each sample as an address at which to write a 1
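The sampling step above might look like the following sketch, assuming n = 4, an 8×8 binary image, and a fixed random mapping of pixels to neurons; all of these sizes and names are illustrative.

```python
import random

# Sketch of the n-tuple sampling step: each RAM neuron samples n pixels
# from the binarised image and uses them as an address.
random.seed(0)
n = 4
width, height = 8, 8

# Fixed random mapping: partition the pixel positions among the neurons.
pixels = list(range(width * height))
random.shuffle(pixels)
tuples = [pixels[i:i + n] for i in range(0, len(pixels), n)]  # 16 neurons

def addresses(image):
    """Form one n-bit address per neuron from the flattened 0/1 image."""
    return [sum(image[p] << k for k, p in enumerate(tup)) for tup in tuples]

# Learning: write a 1 into the addressed location of every neuron's RAM.
rams = [[0] * (1 << n) for _ in tuples]
image = [random.randint(0, 1) for _ in range(width * height)]
for ram, addr in zip(rams, addresses(image)):
    ram[addr] = 1
```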
Simple but impractical way:
o Start with a clear RAM
o Convert an image to binary form and learn it
o When an image is presented, it is compared with the learnt one: it is recognized if the
location it addresses contains a 1
o But a 256x256 binary image has 65,536 pixels, so a single RAM addressed by the whole
image would need 2^65536 locations
o The system can only recognize the exact image – if even 1 pixel is different it won't be
recognized.
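A toy version of this exact-match scheme, shrunk to a 4-pixel image so the whole RAM fits in a list (the sizes and helper names are illustrative):

```python
# The impractical single-RAM approach: the whole image is one address.
ram = [0] * 16           # one location per possible 4-pixel image
image = [1, 0, 1, 1]

def address(img):
    return sum(bit << k for k, bit in enumerate(img))

ram[address(image)] = 1  # learn the image

print(ram[address([1, 0, 1, 1])])  # exact image: recognised (1)
print(ram[address([1, 0, 1, 0])])  # one pixel off: not recognised (0)
```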
Practical way:
o Have many small RAM neurons
o Each neuron is connected to only part of the image and analyzes that part
o If the system is shown the same image it has been taught, you get 100% recognition
o If it's slightly different, it will still be recognized, but a smaller percentage of the neurons fire
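A sketch of such a collection of small RAM neurons, assuming 4 neurons of 4 inputs each over a 16-pixel image (all sizes are illustrative): the response is the fraction of neurons that read a 1, which is 100% for the taught image and lower for a slightly different one.

```python
import random

# Several small RAM neurons, each wired to its own random subset of the image.
random.seed(1)
n = 4                      # inputs (address lines) per neuron
num_pixels = 16
pixels = list(range(num_pixels))
random.shuffle(pixels)     # fixed random mapping of pixels to neurons
tuples = [pixels[i:i + n] for i in range(0, num_pixels, n)]

def addresses(image):
    return [sum(image[p] << k for k, p in enumerate(tup)) for tup in tuples]

rams = [[0] * (1 << n) for _ in tuples]

def train(image):
    for ram, addr in zip(rams, addresses(image)):
        ram[addr] = 1

def response(image):
    """Fraction of neurons whose addressed location holds a 1."""
    return sum(ram[addr] for ram, addr in zip(rams, addresses(image))) / len(rams)

taught = [1, 0] * 8
train(taught)
print(response(taught))    # 1.0: the exact taught image fires every neuron

noisy = taught[:]
noisy[0] ^= 1              # flip one pixel
print(response(noisy))     # 0.75: only the neuron seeing that pixel misses
```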
Such a collection of RAM neurons is called a discriminator.
If you want to recognize 2 people, you need 2 discriminators. With only one, it will respond to both
people but won't be able to say which one it is.
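The two-discriminator setup might be sketched as follows; the class name `Discriminator` and the names `alice` and `bob` are illustrative. Classification picks the discriminator with the higher response.

```python
import random

# One discriminator (set of RAM neurons) per person to recognize.
random.seed(2)
n = 2
num_pixels = 8
pixels = list(range(num_pixels))
random.shuffle(pixels)
tuples = [pixels[i:i + n] for i in range(0, num_pixels, n)]

def addresses(image):
    return [sum(image[p] << k for k, p in enumerate(tup)) for tup in tuples]

class Discriminator:
    """A set of RAM neurons trained on images of one class."""
    def __init__(self):
        self.rams = [[0] * (1 << n) for _ in tuples]

    def train(self, image):
        for ram, addr in zip(self.rams, addresses(image)):
            ram[addr] = 1

    def response(self, image):
        fired = sum(ram[addr] for ram, addr in zip(self.rams, addresses(image)))
        return fired / len(tuples)

alice, bob = Discriminator(), Discriminator()
alice.train([1, 1, 1, 1, 0, 0, 0, 0])
bob.train([0, 0, 0, 0, 1, 1, 1, 1])

probe = [1, 1, 1, 1, 0, 0, 0, 1]    # noisy version of the first image
scores = {"alice": alice.response(probe), "bob": bob.response(probe)}
print(max(scores, key=scores.get))  # prints "alice": higher response wins
```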