Neural networks
Bikas K. Chakrabarti
Saha Institute of Nuclear Physics, 1/AF Bidhannagar, Calcutta 700 064, India
… employ this back propagation algorithm and its extensions. Some examples are:

(i) NETtalk (Sejnowski and Rosenberg⁷), having a three-layer feed-forward network (with 80 'hidden' units), could 'learn' (like a child) to pronounce written text. This, of course, is a compromise (solution) to the much harder (reverse) problem of speech recognition. The network was trained to associate a phonetic value with a certain group of letters, provided by an example set of 1024 English words. After completion of the 'training' (typically 50 cycles or iterations), it could accurately pronounce 95% of the words from the training set and about 80% of the words from 'unseen' texts.

(ii) The primary structure of a protein consists of a linear sequence of amino acids, chosen from 20 possibilities, like a text with an alphabet of 20 letters. The secondary structure determines the local stereochemical form of the amino acid chain (e.g. α helix, β sheet, β turn or random coil). In an obvious parallel to NETtalk, Sejnowski and Rosenberg⁷ and Bohr et al.⁸ developed independently neural networks which learned from a set of samples and then predicted the secondary structure of the central sequence. The best reported accuracy is 62% for the training set and about 53% for unseen sequences. One recognizes, therefore, the ability of such networks to extract the …
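Both examples share one construction: a window of input symbols is fed to a three-layer feed-forward network, and the weights are adjusted by propagating the output error backwards through the layers. The following is a minimal sketch of such a network trained by back propagation on a toy classification task; the layer sizes, learning rate and data are illustrative assumptions, not the parameters of NETtalk or of the protein networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a NETtalk-style task: map a window of binary
# inputs to one of a few output classes. All sizes are assumptions.
n_in, n_hidden, n_out = 21, 8, 3
X = rng.integers(0, 2, size=(200, n_in)).astype(float)
# Class = number of 'on' bits among the first two inputs (a learnable toy rule).
targets = (X[:, 0] + X[:, 1]).astype(int)
T = np.eye(n_out)[targets]                    # one-hot target vectors

W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))   # input -> hidden weights
W2 = rng.normal(0.0, 0.1, (n_hidden, n_out))  # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

eta = 2.0                                     # learning rate (assumed)
for cycle in range(50):                       # '50 cycles', as in the text; untuned
    H = sigmoid(X @ W1)                       # forward pass: hidden layer
    Y = sigmoid(H @ W2)                       # forward pass: output layer
    dY = (Y - T) * Y * (1.0 - Y)              # squared-error delta at the output
    dH = (dY @ W2.T) * H * (1.0 - H)          # error propagated back to hidden units
    W2 -= eta * H.T @ dY / len(X)             # gradient-descent weight updates
    W1 -= eta * X.T @ dH / len(X)

pred = np.argmax(sigmoid(sigmoid(X @ W1) @ W2), axis=1)
print(f"training-set accuracy: {np.mean(pred == targets):.2f}")
```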
… networks. These led to quite rigorous theories¹³,¹⁴ regarding the network (loading) capacity for associative memory. Extensions of such networks for learning, storing and recalling temporal sequences of patterns (replacing points by limit cycles) have also been made¹⁵.
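The associative recall behind these capacity results is easy to make concrete: patterns are written into the synaptic matrix Wᵢⱼ by a Hebbian rule, and a corrupted input then slides, by energy-lowering updates, into the nearest stored minimum. A minimal sketch follows; the numbers of neurons and patterns are illustrative assumptions, not values from the cited studies.

```python
import numpy as np

rng = np.random.default_rng(1)

N, P = 100, 5                                # neurons, stored patterns (assumed)
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian storage: Wij = (1/N) * sum_mu xi_i^mu xi_j^mu, with Wii = 0.
W = patterns.T @ patterns / N
np.fill_diagonal(W, 0.0)

def recall(state, sweeps=10):
    """Asynchronous updates: each flip lowers (never raises) the energy
    E = -(1/2) s.W.s, so the state relaxes into a nearby local minimum."""
    s = state.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Corrupt a stored pattern by flipping 20% of its bits, then recall it.
noisy = patterns[0].copy()
flip = rng.choice(N, size=N // 5, replace=False)
noisy[flip] *= -1
overlap = recall(noisy) @ patterns[0] / N    # 1.0 means perfect retrieval
print(f"overlap with stored pattern: {overlap:.2f}")
```

For P well below the loading capacity of this rule (about 0.14N) the overlap returns essentially to 1; pushing P higher makes recall break down, which is what the capacity theories¹³,¹⁴ quantify.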
The ability of such networks to associate local minima with the patterns has been utilized by Hopfield and Tank to suggest a neural-network algorithm for locating the (degenerate) solutions of the Travelling Salesman (type of combinatorial optimization) problem, the cost function being encoded in appropriate synaptic connections (the Wᵢⱼ's). This brilliant idea (of utilizing the parallel functions of the neurons in the network to look for the associative solution of such NP-hard problems) has, so far, not led to any reasonable success (compared to that obtained using other non-network approaches).
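The Travelling Salesman encoding can be sketched schematically: a unit V[x, i] is 'on' when city x occupies tour position i, and the cost function (tour length plus constraint-violation penalties) plays the role of the network energy. The penalty weight and the greedy bit-flip descent below are assumptions made for illustration (the original proposal used analog units):

```python
import numpy as np

rng = np.random.default_rng(2)

n = 6                                        # number of cities (illustrative)
xy = rng.random((n, 2))
d = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1)

A = 2.0                                      # constraint penalty weight (assumed)

def energy(V):
    """Cost function written as a network 'energy': tour length plus
    penalties forcing one city per position and one position per city."""
    length = 0.0
    for i in range(n):
        j = (i + 1) % n
        length += V[:, i] @ d @ V[:, j]      # distance between consecutive stops
    rows = np.sum((V.sum(axis=1) - 1) ** 2)  # each city appears exactly once
    cols = np.sum((V.sum(axis=0) - 1) ** 2)  # each position is filled exactly once
    return length + A * (rows + cols)

# Greedy asynchronous descent: keep any single-unit flip that lowers the energy.
V = (rng.random((n, n)) < 0.5).astype(float)
e = energy(V)
improved = True
while improved:
    improved = False
    for x in range(n):
        for i in range(n):
            V[x, i] = 1.0 - V[x, i]          # trial flip
            e_new = energy(V)
            if e_new < e:
                e, improved = e_new, True    # keep the flip
            else:
                V[x, i] = 1.0 - V[x, i]      # undo it
print("final energy:", round(e, 3))
print("valid tour:", bool((V.sum(0) == 1).all() and (V.sum(1) == 1).all()))
```

Even on such small instances the descent frequently halts in an invalid or needlessly long tour, which illustrates why this approach compares poorly with dedicated combinatorial heuristics.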
Hardware implementations of the Hopfield model in analog VLSI circuits have been achieved, and chips with (N =) 256 fully connected units or neurons, using about 130,000 (≃ 2N²) resistors to simulate the Wᵢⱼ's, have been made¹⁶. These connections (Wᵢⱼ's) are difficult to achieve in silicon technology (where the units or neurons are simpler to construct, using nonlinear amplifiers), which is the reverse in optical technology. Mead and coworkers¹⁷ have developed an electronic …