
NEURAL NETWORKS

Tricea Wade (99-002187) Swain Henry (02-006844)

What are neural networks?


Neural networks, also called artificial or simulated neural networks (ANNs or SNNs), are named after the cells in the human brain that perform intelligent operations. The brain is made up of billions of neuron cells. Each of these cells is like a tiny computer with extremely limited capabilities; connected together, however, these cells form the most intelligent system known. Neural networks are formed from hundreds or thousands of simulated neurons connected together in much the same way as the brain's neurons. In other words, neural networks are simulations of the central nervous system, the main idea being that by simulating various aspects of the brain you should be able to replicate some of its capabilities (for example, pattern recognition, decision-making, and learning). (The field has expanded so that some applications do not clearly resemble any existing biological counterpart.)

[Figure: components of a neuron]

[Figure: the neuron model]

[Figure: a simple neuron]

Just like people, neural networks learn from experience, not from programming. Neural networks are good at pattern recognition, generalization, and trend prediction. They are fast, tolerant of imperfect data, and do not need formulas or rules. A neural network is usually configured for a specific application, such as pattern recognition or data classification, through a learning process, that is, by training. In this learning process, examples are repeatedly presented to the network. Each example includes both inputs (information you would use to make a decision) and outputs (the resulting decision, prediction, or response). The network tries to learn each example in turn, calculating its output based on the inputs you provided. If the network output doesn't match the target output, the network is corrected by changing its internal connections. This trial-and-error process continues until the network reaches your specified level of accuracy. Once the network is trained and tested, you can give it new input information, and it will produce a prediction. Designing a neural network is largely a matter of identifying which data are the inputs and what you want to predict, assess, classify, or recognize.
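The trial-and-error correction loop described above can be sketched with a single simulated neuron (a perceptron) learning the logical AND function. This is a minimal illustration, not a method from the text; the learning rate and epoch count are arbitrary illustrative choices.

```python
# A single neuron learning AND by example: when its output does not match
# the target, its internal connections (weights) are corrected slightly.

def train_perceptron(examples, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # one weight per input connection
    b = 0.0          # bias term
    for _ in range(epochs):
        for inputs, target in examples:
            # The neuron's output: 1 if the weighted sum is positive.
            s = sum(wi * xi for wi, xi in zip(w, inputs)) + b
            output = 1 if s > 0 else 0
            # Correct the connections in proportion to the error.
            error = target - output
            w = [wi + lr * error * xi for wi, xi in zip(w, inputs)]
            b += lr * error
    return w, b

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(examples)
predict = lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
```

After training, the network reproduces every example it was shown, and the same loop would work for any other linearly separable set of examples.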

Why use neural networks?


Neural networks, with their remarkable ability to derive meaning from complicated or imprecise data, can be used to extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques. A trained neural network can be thought of as an "expert" in the category of information it has been given to analyze. This expert can then be used to provide projections given new situations of interest and answer "what if" questions. Other advantages include:

1. Adaptive learning: an ability to learn how to do tasks based on the data given for training or initial experience.

2. Self-organization: an ANN can create its own organization or representation of the information it receives during learning.

3. Real-time operation: ANN computations may be carried out in parallel, and special hardware devices are being designed and manufactured which take advantage of this capability.

4. Fault tolerance via redundant information coding: partial destruction of a network leads to a corresponding degradation of performance; however, some network capabilities may be retained even with major network damage.

Neural networks versus conventional computers


Neural networks take a different approach to problem solving than conventional computers. Conventional computers use an algorithmic approach: the computer follows a set of instructions in order to solve a problem. Unless the specific steps the computer needs to follow are known, the computer cannot solve the problem. This restricts the problem-solving capability of conventional computers to problems we already understand and know how to solve.

Neural networks process information in a way similar to the human brain. The network is composed of a large number of neurons working in parallel to solve a specific problem. Neural networks learn by example; they cannot be programmed to perform a specific task. The examples must be selected carefully, otherwise useful time is wasted, or worse, the network might function incorrectly. The disadvantage is that because the network finds out how to solve the problem by itself, its operation can be unpredictable. Conventional computers, on the other hand, use a cognitive approach to problem solving: the way the problem is to be solved must be known and stated in small, unambiguous instructions. These instructions are then converted to a high-level language program and then into machine code that the computer can understand. These machines are totally predictable; if anything goes wrong, it is due to a software or hardware fault.

Neural networks and conventional algorithmic computers are not in competition but complement each other. There are tasks that are more suited to an algorithmic approach, such as arithmetic operations, and tasks that are more suited to neural networks. Moreover, a large number of tasks require systems that use a combination of the two approaches (normally a conventional computer is used to supervise the neural network) in order to perform at maximum efficiency.

TYPES OF NEURAL NETWORKS


Neural networks simulate aspects of the brain. There are, of course, many different aspects to how a brain works, but the most fundamental is that a neuron takes a bunch of incoming electrochemical signals (from other neurons) and "decides" whether or not it will send out its own such signal. It is in the accumulation and "interpretation" of these signals that the brain does all its work. However, there are many further details and nuances about how the brain's neural network functions, and neural network simulators necessarily have to pick and choose which of these details they will model and which they will overlook.

Feedforward neural network


The feedforward neural network is the first and arguably simplest type of artificial neural network devised. In this network, information moves in only one direction, forward, from the input nodes, through the hidden nodes (if any), to the output nodes. There are no cycles or loops in the network.
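The one-directional flow can be sketched as a small network with two inputs, two hidden neurons, and one output. The weights here are hand-picked purely for illustration, not the result of any training.

```python
# Data flows strictly forward: inputs -> hidden layer -> output.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, w_hidden, w_output):
    # Input layer -> hidden layer: each hidden neuron sums its weighted inputs.
    hidden = [sigmoid(sum(w * x for w, x in zip(weights, inputs)))
              for weights in w_hidden]
    # Hidden layer -> output layer: the same computation, one stage further on.
    return sigmoid(sum(w * h for w, h in zip(w_output, hidden)))

w_hidden = [[0.5, -0.3], [0.8, 0.2]]  # one weight row per hidden neuron
w_output = [1.0, -1.0]
y = forward([1.0, 0.5], w_hidden, w_output)
```

Nothing computed at a later stage ever feeds back into an earlier one, which is exactly what distinguishes this type from the recurrent networks below.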

Recurrent network
Contrary to feedforward networks, recurrent neural networks (RNNs) are models with bi-directional data flow. While a feedforward network propagates data linearly from input to output, an RNN also propagates data from later processing stages back to earlier stages.
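The feedback idea can be sketched as a single recurrent unit whose state at one step is fed back in at the next step, so that the effect of earlier inputs persists. The weights are illustrative values, not from the text.

```python
# One recurrent update: the new state mixes the current input with the
# state carried over from the previous step.
import math

def rnn_step(x, h_prev, w_in=0.5, w_rec=0.9):
    return math.tanh(w_in * x + w_rec * h_prev)

h = 0.0
for x in [1.0, 0.0, 0.0]:   # the state persists after the input goes to zero
    h = rnn_step(x, h)
```

Even though the last two inputs are zero, the state remains nonzero: the network "remembers" the earlier input, which a feedforward network cannot do.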

Stochastic neural networks


A stochastic neural network differs from a regular neural network in the fact that it introduces random variations into the network.
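A minimal sketch of such a random variation: the same neuron, given the same input, may fire or not, because noise is added to its activation and the firing decision itself is probabilistic. The noise level and weight are illustrative assumptions.

```python
# A single stochastic neuron: Gaussian noise perturbs the weighted sum,
# and the neuron fires with a probability given by the noisy activation.
import math
import random

def stochastic_neuron(x, w=1.0, noise_std=0.5, rng=random):
    s = w * x + rng.gauss(0.0, noise_std)      # noisy weighted sum
    p = 1.0 / (1.0 + math.exp(-s))             # firing probability
    return 1 if rng.random() < p else 0

random.seed(0)
outputs = [stochastic_neuron(0.2) for _ in range(1000)]
```

Repeating the identical input produces a mixture of 0s and 1s, unlike a deterministic neuron, which would give the same answer every time.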

Modular neural networks


Biological studies showed that the human brain functions not as one single massive network, but as a collection of small networks. This realisation gave birth to the concept of modular neural networks, in which several small networks cooperate or compete to solve problems.
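The modular idea can be sketched with two small "expert" networks that each score an input, and a simple integrating unit that combines their votes. The experts here are placeholder linear scorers, purely for illustration.

```python
# Two small expert networks cooperate: each scores the input, and the
# combining unit averages their scores to make the final decision.

def expert_a(x):
    return 0.8 * x[0] - 0.2 * x[1]

def expert_b(x):
    return 0.3 * x[0] + 0.6 * x[1]

def modular_net(x):
    # Cooperation by averaging; a competitive scheme might instead let
    # the most confident expert win.
    score = (expert_a(x) + expert_b(x)) / 2.0
    return 1 if score > 0 else 0

label = modular_net([1.0, -0.5])
```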

Other types of networks


These special networks do not fit in any of the previous categories.

Instantaneously trained networks


Because rule-based statistical methods and neural network methods (such as the feedforward algorithm or self-organizing maps, the standard techniques for generalization) are notoriously slow, fast pattern learning techniques are becoming increasingly necessary. Standard artificial neural networks can serve as models of biological memory embodied as strongly connected and layered networks of processing units. These feedback (Hopfield with delta learning) and feedforward (backpropagation) networks learn patterns slowly: the network must adjust the weights connecting links between input and output layers until it obtains the correct response to the training patterns. But biological learning is not a single process: some forms are very quick and others relatively slow. Short-term biological memory, in particular, works very quickly, so slow neural network models are not plausible candidates in this case. Instantaneously trained neural networks (ITNNs) model working memory in their ability to learn and generalize instantaneously. These networks are almost as good as backpropagation in the quality of their generalization. With their speed advantage, they will work in many real-time signal-processing, data-compression, forecasting, and pattern-recognition applications. In these networks the weights of the hidden and output layers are mapped directly from the training vector data. Ordinarily they work on binary data, but versions for continuous data that require a small amount of additional processing are also available.

Spiking neural networks


Spiking (or pulsed) neural networks (SNNs) are neural network simulations that attempt to use a more accurate "spiking" model of neural output signals. The network input and output are usually represented as series of spikes. SNNs have the advantage of being able to process information continuously. They are often implemented as recurrent networks. Because of this different model of output signals, an SNN is much better suited for applications where the timing of input signals carries important information (for example, speech recognition and other signal-processing applications). SNNs can also be applied to the same problems as non-spiking neural networks (as the human brain clearly proves).
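The spiking idea can be sketched with a simple leaky integrate-and-fire neuron: inputs arrive as spikes (0 or 1), the membrane potential leaks away over time, and the neuron emits its own spike only when a threshold is crossed. The leak rate, weight, and threshold are illustrative values.

```python
# A leaky integrate-and-fire neuron: spike timing matters, because the
# potential decays between inputs.

def run_lif(input_spikes, leak=0.9, weight=0.6, threshold=1.0):
    v = 0.0
    output_spikes = []
    for spike in input_spikes:
        v = leak * v + weight * spike   # integrate the input, with leakage
        if v >= threshold:              # fire and reset
            output_spikes.append(1)
            v = 0.0
        else:
            output_spikes.append(0)
    return output_spikes

# Two closely spaced input spikes drive the neuron over threshold;
# an isolated spike, arriving after the potential has leaked away, does not.
out = run_lif([1, 1, 0, 0, 1, 0, 1, 1])
```

This is why such models suit problems where the timing of signals carries information: the same number of input spikes produces different outputs depending on when they arrive.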

Dynamic neural networks


Dynamic neural networks not only deal with nonlinear multivariate behaviour, but also include (learning of) time-dependent behaviour such as various transient phenomena and delay effects.

Cascading neural networks


These neural networks begin their training without any hidden neurons. While the output error remains above a predefined threshold, the network adds a new hidden neuron. Each new hidden neuron is connected to all input nodes, as well as all previous hidden neurons. Training terminates when a suitable error threshold is reached or when the maximum number of hidden neurons has been added.
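The growth schedule described above can be given a structural sketch (this is not a full cascade-correlation implementation, and the error function here is a stand-in that pretends each added neuron halves the error):

```python
# Hidden neurons are added one at a time; each new one is connected to
# every input and to every previously added hidden neuron.
import random

def grow_network(n_inputs, error_fn, error_target=0.1, max_hidden=10):
    hidden = []                     # each entry: incoming weights of one neuron
    while error_fn(hidden) > error_target and len(hidden) < max_hidden:
        # The new neuron sees all inputs plus all earlier hidden units,
        # so its fan-in grows by one with each neuron added before it.
        fan_in = n_inputs + len(hidden)
        hidden.append([random.uniform(-1, 1) for _ in range(fan_in)])
    return hidden

random.seed(1)
net = grow_network(n_inputs=3, error_fn=lambda h: 1.0 / (2 ** len(h)))
```

With this stand-in error curve the network stops at four hidden neurons, whose fan-ins grow from 3 to 6, reflecting the cascading connectivity.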

Real Life Applications:


MEDICINE
Breast cancer cells are traditionally examined under a microscope by a human, who decides the degree of cancer present. People are inconsistent in these judgments from day to day and from person to person. A piece of software called BrainMaker, a neural network that classifies breast cancer cells, has been developed. Initial comparisons showed that BrainMaker is in good agreement with human observer cancer classifications. Cancer cells are measured with the CAS-100 (Cell Analysis System, Elmhurst, IL). There are 17 inputs to the neural network, representing morphometric features such as density and texture. There are four network outputs representing nuclear grading. The cancerous nucleus is graded as being well, moderately, or poorly differentiated, or as benign. Correct grade assignments were made between 52% and 89% of the time on cases not seen during training. The lower success rate (for well differentiated) may have been due to the smaller percentage of this type in the training set. In addition, heterogeneity is much lower in well-differentiated tumors. Cancerous nuclei were classified within one grade of the correct grade.

CRIME
The Chicago Police Department has used BrainMaker to forecast which officers on the force are potential candidates for misbehavior. The Department's Internal Affairs Division used neural networks to study 200 officers who had been terminated for disciplinary reasons and developed a database of pattern-like characteristics, behaviors, and demographics found among the 200 police officers. BrainMaker then compared current Department officers against the pattern gleaned from the 200-member control group and produced a list of officers who, by virtue of matching the pattern or sharing questionable characteristics to some degree, were deemed to be "at risk". This particular application has been highly controversial, drawing criticism from several quarters, the most vocal being Chicago's Fraternal Order of Police. The C.P.D. Internal Affairs Division, however, was pleased with the results. After BrainMaker studied the records of the 12,500 current officers (records that included such information as age, education, sex, race, number of traffic accidents, reports of lost weapons or badges, marital status, performance reports, and frequency of sick leaves), the neural network produced a list of 91 at-risk men and women. Of those 91 people, nearly half were found to be already enrolled in a counseling program founded by the personnel department to help officers guilty of misconduct. Currently the Chicago Police Department does not use BrainMaker to forecast problems with officers. The program was apparently terminated due to its controversial nature.

Conclusion
The future of neural networks is not written in stone; however, this field of science does appear to have its benefits over conventional computing architecture. Interest has grown in this field, and it will continue to be investigated, since the use of neural networks is highly interdisciplinary and useful in many different fields (statistics, engineering, computational neuroscience, and cognitive science (AI)). Neural networks have also found their way into the medical field, as a way of analyzing and predicting medical problems and also as research material for examining the functions of the brain. However, the most exciting thing expected to come from the use of neural networks is the close modeling of human consciousness, which is considered by some to be impossible.

References

Amygdala: What are SNN? http://amygdala.sourceforge.net/understand_snn.php Last updated: Feb 28, 2003.

Artificial Neural Network. Wikipedia, the free encyclopedia. http://en.wikipedia.org/wiki/Artificial_neural_network Last updated: Nov 9, 2005.

BrainMaker Neural Networks and Police Misconduct. http://www.calsci.com/Police.html Last updated: unknown.

BrainMaker Neural Networks Improve Hospital Treatment and Reduce Expenses. http://www.calsci.com/hospital.html Last updated: unknown.

Neural Network. Wikipedia, the free encyclopedia. http://en.wikipedia.org/wiki/Neural_networks Last updated: Nov 8, 2005.

Neural Networks. Imperial College London. http://www.doc.ic.ac.uk/~nd/surprise_96/journal/vol4/cs11/report.html Last updated: unknown.

What Are Neural Networks. http://www.calsci.com/whatare.html Last updated: unknown.

NEuroNet: Roadmap: Introduction to Future Prospectives of Neural Networks. www.kcl.ac.uk/neuronet/about/roadmap/futurepintro.html Last updated: March 13, 2001.
