
ROM Based Quadruple Error Correction for 4-Bit Adjacent Errors

ABSTRACT

The use of error-correction codes (ECCs) with advanced correction capability is a common system-level strategy to harden memories against multiple bit upsets (MBUs). The construction of ECCs with advanced error correction and low redundancy has therefore become an important problem, especially for adjacent ECCs. Existing codes for mitigating MBUs mainly focus on the correction of up to 3-bit burst errors. As the technology scales and the cell interval distance decreases, the number of affected bits can easily extend to more than 3 bits. The previous methods are therefore not enough to satisfy the reliability requirements of applications in harsh environments. In this paper, a technique to extend 3-bit burst error-correction (BEC) codes with quadruple adjacent error correction (QAEC) is presented. First, the design rules are specified and then a searching algorithm is developed to find the codes that comply with those rules. The H matrices of the 3-bit BEC with QAEC obtained are presented. They do not require additional parity check bits compared with a 3-bit BEC code. By applying the new algorithm to previous 3-bit BEC codes, the performance of 3-bit BEC is also remarkably improved. The encoding and decoding procedure of the proposed codes is illustrated with an example. Then, the encoders and decoders are implemented using a 65-nm library and the results show that our codes have moderate total area and delay overhead to achieve the correction ability extension.


CHAPTER 1

INTRODUCTION

Reliability is an important requirement for space applications [1]. Memories, as the data-storing components, play a significant role in electronic systems. They are widely used in systems on a chip and application-specific integrated circuits [2], [3]. In these applications, memories account for a large portion of the circuit area [4]. This makes memories suffer more
space radiation than other components. Therefore, the sensitivity to radiation of memories
has become a critical issue to ensure the reliability of electronic systems. In modern static
random access memories (SRAMs), radiation-induced soft errors in the form of the single
event upset (SEU) and multiple bit upset (MBU) are two prominent single event effects
[5]. As semiconductor technology develops from the submicrometer technology to the
ultradeep submicrometer (UDSM) technology, the size of memory cells becomes smaller and
more cells are included in the radius affected by a particle [6], [7] as shown in Fig. 1.
When a particle from a cosmic ray hits the basic memory cell, it generates a radial
distribution of electron–hole pairs along the transport track [8]. These generated electron–
hole pairs can cause soft errors by changing the values stored in the memory cell leading
to data corruption and system failure [9]. For the transistors with a large feature size, a
radiation event just affects one memory cell, which means that only the SEU occurs. In
this case, the use of single error-correction (SEC), double error-detection (DED) codes [10]
is enough to protect the memory from radiation effects.

As the feature size enters the DSM range, the critical charge keeps decreasing and the area of the memory cell scales down for each successive technology node. This makes


more memory cells affected by a particle hit as shown in Fig. 2. For the CMOS bulk
technology, with the cell-to-cell spacing decreasing, the electron–hole pairs generated in
the substrate can diffuse to nearby cells and induce MBUs [11]–[14]. This compares with
the FDSOI technology, which isolates transistors and limits the multicollection
mechanism. Therefore, the multicollection mechanism is more prominent for a bulk
technology, and the MBU probability is higher [15]–[18]. To protect against MBUs,
ECCs that correct adjacent bit errors [19]–[24] or multiple bit errors [25]–[27] are
utilized. Although multiple bit error-correction codes (ECCs) can correct multiple bit
errors in any error patterns not limited to the adjacent bits, the complexity of the decoding
process and the limitation of the code block size limit their use. Meanwhile, from the
generation principle of MBUs in [28], the type of the MBUs depends on the initial angle
of incidence and scattering angle of the secondary particles. Based on this, adjacent bit
errors are dominant error patterns among the multiple bit errors. Therefore, adjacent bits
correction ECCs become popular in memory-hardened designs. Many codes are
proposed, and the capability of adjacent bits correction mainly focuses on the double
adjacent error correction (DAEC) [19]–[22], triple adjacent error correction (TAEC), and
3-bit burst error correction (BEC) [23]. An alternative to codes that can correct adjacent
errors is to use an SEC or SEC-DED code combined with an interleaving of the memory
cells. Interleaving ensures that cells that belong to the same logical word are placed
physically apart. This means that an error on multiple adjacent cells affects multiple
words each having only one bit error that can be corrected by an SEC code. As noted in
previous studies, interleaving makes the interconnections and routing of the memory
more complex, and it will lead to increased area and power consumption or limitations
in the aspect ratio [19], [20]. Therefore, whether it is better to use SEC plus interleaving
or a code that can correct adjacent errors will be design-dependent and both alternatives
are of interest. As the technology comes to the UDSM, the area of the memory cell keeps
further decreasing, and even memories with atomic-dimension transistors appear. The ionization range of ions, on the order of micrometers, can include more memory cells in the word-line direction, as shown in Fig. 2, than the three bits
previously considered [29]. This means that the SEC-DAEC-TAEC codes may not be
effective to ensure the memory reliability. Codes with more advanced correction ability
are demanded. For example, codes designed with low redundancy for SEC-DAEC-TAEC and 3-bit BEC are presented in [23]. Therefore, extending the correction ability to quadruple adjacent bit errors would be of interest, especially if it can be done without
adding extra parity bits. In this paper, we present an improvement of 3-bit BEC codes to


also provide quadruple adjacent error correction (QAEC). The code design technique for
the QAEC with

low redundancy is specified from two aspects: 1) error space satisfiability and 2) unique syndrome satisfiability. Codes with QAEC for 16, 32, and 64 data bits are
presented. From the view of the integrated circuits design, two criteria have been used to
optimize the decoder complexity and decoder delay at the ECCs level: 1) minimizing the
total number of ones in the parity check matrix and 2) minimizing the number of ones in
the heaviest row of the parity check matrix. Additionally, based on the traditional
recursive backtracking algorithm, an algorithm with the function of weight restriction and
recording the past procedure is developed. The new algorithm not only reduces the cost of
program run time, but also remarkably improves the performance of previous 3-bit BEC
codes. The encoders and decoders for the QAEC codes are implemented in Verilog hardware description language (HDL). Area overhead and delay overhead are obtained using a TSMC bulk 65-nm library. Compared with the previous 3-bit BEC codes, the area and delay overhead is moderate to achieve the correction ability extension.

Single bit errors are the least likely type of error in serial data transmission. To see why, imagine a sender sends data at 10 Mbps. This means that each bit lasts only 0.1 μs (microsecond). For a single-bit error to occur, the noise must last only 0.1 μs, which is very rare. However, a single-bit error can happen in parallel data transmission. For example, if 16 wires are used to send all 16 bits of a word at the same time and one of the wires is noisy, one bit is corrupted in each word.

1.1 BURST ERROR:

A burst error means that two or more bits in the data unit have changed from 0 to 1 or vice versa. Note that a burst error does not necessarily mean that the errors occur in consecutive bits. The length of the burst error is measured from the first corrupted bit to the last corrupted bit. Some bits in between may not be corrupted.
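To make the burst-length definition concrete, here is a small Python sketch (an illustration added to this report, not from the original paper; the function name and the 8-bit example are ours):

def burst_length(sent: int, received: int, width: int) -> int:
    """Burst length: distance from the first to the last corrupted bit
    (0 if no bits were corrupted)."""
    error = sent ^ received                      # 1s mark corrupted positions
    flipped = [i for i in range(width) if (error >> i) & 1]
    return flipped[-1] - flipped[0] + 1 if flipped else 0

# Bits 1, 3, and 4 are flipped: a burst of length 4, even though
# bit 2 in between is not corrupted.
print(burst_length(0b00000000, 0b00011010, 8))   # prints 4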


Burst errors are most likely to happen in serial transmission. The duration of the noise is normally longer than the duration of a single bit, which means that when noise affects data, it affects a set of bits, as shown in Fig. The number of bits affected depends on the data rate and the duration of the noise.

Error-detecting codes: The basic approach used for error detection is the use of redundancy, where additional bits are added to facilitate the detection and correction of errors. Popular techniques are:

1) Simple Parity check


2) Two-dimensional Parity check
3) Checksum
4) Cyclic redundancy check

1.2 SIMPLE PARITY CHECKING OR ONE-DIMENSION PARITY CHECK:

The most common and least expensive mechanism for error detection is the simple parity check. In this technique, a redundant bit, called the parity bit, is appended to every data unit so that the number of 1s in the unit (including the parity bit) becomes even. Blocks of data from the source are passed through a check bit or parity bit generator, where a parity bit of 1 is added to the block if it contains an odd number of 1s (ON bits) and 0 is added if it contains an even number of 1s. At the receiving end, the parity bit is computed from the received data bits and compared with the received parity bit, as shown in Fig. This scheme makes the total number of 1s even, which is why it is called even parity checking.


1.3 CHECKSUM:

In the checksum error detection scheme, the data is divided into k segments of m bits each. At the sender's end, the segments are added using 1's complement arithmetic to get the sum. The sum is complemented to get the checksum.

1.4 CYCLIC REDUNDANCY CHECKS (CRC):

The cyclic redundancy check is the most powerful and easiest to implement technique. Unlike the checksum scheme, which is based on addition, CRC is based on binary division. In CRC, a sequence of redundant bits, called cyclic redundancy check bits, is appended to the end of the data unit so that the resulting data unit becomes exactly divisible by a second, predetermined binary number. At the destination, the incoming data unit is divided by the same number. If at this step there is no remainder, the data unit is assumed to be correct and is therefore accepted.

1.5 BINARY LINEAR BLOCK CODE (BLBC):

The set of 2^k code words is called a block code. For a block code to be useful, there should be a one-to-one correspondence between a message u and its code word v. A desirable structure for a block code to possess is linearity. With this structure, the encoding complexity is greatly reduced.


CHAPTER 2

LITERATURE SURVEY

2.1 Nanoscale memory design for efficient computation: trends, challenges and
opportunity

Abstract:

The importance of embedded memory in contemporary multi-core processors and


system-on-chip (SoC) for wearable electronics and IoT applications is growing. Intensive data processing in such processors and SoCs necessitates larger on-chip, energy-efficient static random access memory (SRAM). However, there are several challenges associated with low-voltage SRAMs. In this paper, we have discussed the major hurdles on the technology front for low-power and robust SRAM design. We have discussed the different circuit techniques to mitigate these severe reliability concerns for embedded SRAM. Furthermore, this paper presents recent developments in embedded memory technology targeted at efficient computation. We have analyzed the effect of various device sizings on memory stability. A novel current-mode circuit is also presented for a high-speed, offset-tolerant SRAM sense amplifier. FinFET memory design is explored to reduce the external manifestation of device mismatch on SRAM stability and memory yield.

Authors: S. K. Vishvakarma, B. S. Reniwal, V. Sharma, C. B. Khuswah

2.2 A Real Time EDAC System for Applications Onboard Earth Observation Small
Satellites
Abstract:
Onboard error detection and correction (EDAC) devices aim to secure data
transmitted between the central processing unit (CPU) of a satellite onboard computer
(OBC) and its local memory. A follow-up is presented here of some low-complexity
EDAC techniques for application in random access memories (RAMs) onboard the
Algerian microsatellite Alsat-1. The application of a double-bit EDAC method is
described and implemented in field programmable gate array (FPGA) technology. The
performance of the implemented EDAC method is measured and compared with three
different EDAC strategies, using the same FPGA technology. A statistical analysis of
single-event upset (SEU) and multiple-bit upset (MBU) activity in commercial memories


onboard Alsat-1 is given. A new architecture of an onboard EDAC device for future Earth
observation small satellite missions in low Earth orbits (LEO) is described.
Authors: Y. Bentoutou

2.3 Impact of Scaling on Neutron-Induced Soft Error in SRAMs from a 250 nm to a


22 nm Design Rule
Abstract:
Trends in terrestrial neutron-induced soft-error in SRAMs from a 250 nm to a 22
nm process are reviewed and predicted using the Monte-Carlo simulator CORIMS, which
is validated to have less than 20% variations from experimental soft-error data on 180-
130 nm SRAMs in a wide variety of neutron fields like field tests at low and high
altitudes and accelerator tests in LANSCE, TSL, and CYRIC. The following results are
obtained: 1) soft-error rates per device in SRAMs will increase ×6-7 from the 130 nm to the 22 nm process; 2) as SRAM is scaled down to a smaller size, the soft-error rate is dominated more significantly by low-energy neutrons (< 10 MeV); and 3) the area affected by one nuclear reaction spreads over 1 Mbit, and the bit multiplicity of multi-cell upsets becomes as high as 100 bits or more.
Authors: E. Ibe, H. Taniguchi, Y. Yahagi, K.-I. Shimbo, and T. Toba

2.4 Modeling Single Event Transients in Advanced Devices and ICs

Abstract:

The ability for Single Event Transients (SETs) to induce soft errors in Integrated
Circuits (ICs) was predicted for the first time by Wallmark and Marcus in the early 60's
and was confirmed to be a serious issue thirty years later. In the 90's microelectronic
technologies reached the “deep submicron” era, allowing high density ICs working at
frequencies faster than hundreds of MHz. This new paradigm changed the status of SETs
to become a major source of reliability losses. Huge efforts have thus been made to
characterize SETs in microelectronics, either using experiments or by simulation, in order
to reveal key factors leading to SET occurrence, propagation and capture in modern ICs.
In this context, modeling and simulation are of primary importance to get accurate SET
predictions. This paper focuses on modeling SETs in innovative electronic devices which
involves modeling steps at different scales, from ionizing particle to circuit response.
After a brief review of the state-of-the-art of modeling at each scale, this paper will
discuss current capabilities and intrinsic limitations of SET modeling, the incoming


challenges in advanced devices and ICs, and finally the methodologies to improve SET
simulation and prediction for future technologies.

Authors: L. Artola, M. Gaillardin, G. Hubert, M. Raine, and P. Paillet

2.5 Electron and Ion Tracks in Silicon: Spatial and Temporal Evolution

Abstract:

Calculations based on our Monte Carlo code for the transport of electrons and ions
in silicon are presented. The code follows the trajectories of all secondary electrons down
to a very low cut-off energy (1.5 eV). The spatial and temporal distributions of the
deposited energy around ion tracks in silicon are also calculated. The prompt electrical
fields generated by the charges at the early stage of the track evolution are evaluated,
taking into account carrier recombination. Using the statistics of the energy deposition
events in small sensitive volumes, we find that the SEE cross sections in modern devices
depend on the ion energy, in addition to its LET.

Authors: M. Murat, A. Akkerman, and J. Barak

2.6 Physics of Multiple-Node Charge Collection and Impacts on Single-Event


Characterization and Soft Error Rate Prediction

Abstract:

Physical mechanisms of single-event effects that result in multiple-node charge


collection or charge sharing are reviewed and summarized. A historical overview of
observed circuit responses is given that concentrates mainly on memory circuits. Memory
devices with single-node upset mechanisms are shown to exhibit multiple cell upsets, and
spatially redundant logic latches are shown to upset when charge is collected on multiple
circuit nodes in the latch. Impacts on characterizing these effects in models and ground-
based testing are presented. The impact of multiple-node charge collection on soft error
rate prediction is also presented and shows that full circuit prediction is not yet well
understood. Finally, gaps in research and potential future impacts are identified.

Authors: J. D. Black, P. E. Dodd, and K. M. Warren

2.7 Characterizing SRAM Single Event Upset in Terms of Single and Multiple Node
Charge Collection


Abstract:

A well-collapse source-injection mode for SRAM SEU is demonstrated through


TCAD modeling. The recovery of the SRAM's state is shown to be based upon the
resistive path from the p+ sources in the SRAM to the well. Multiple cell upset patterns
for direct charge collection and the well-collapse source-injection mechanisms are
predicted and compared to SRAM test data.

Authors: J. D. Black

2.8 Multiple-Bit Upset in 130 nm CMOS Technology

Abstract:

The probability of proton-induced multiple-bit upset (MBU) has increased in


highly-scaled technologies because device dimensions are small relative to particle event
track size. Both proton-induced single event upset (SEU) and MBU responses have been
shown to vary with angle and energy for certain technologies. This work analyzes SEU
and MBU in a 130 nm CMOS SRAM in which the single-event response shows a strong
dependence on the angle of proton incidence. Current proton testing methods do not
account for device orientation relative to the proton beam and, subsequently, error rate
prediction assumes no angular dependencies. Proton-induced MBU is expected to
increase as integrated circuits continue to scale into the deep sub-micron regime.
Consequently, the application of current testing methods will lead to an incorrect
prediction of error rates.

Authors: A. D. Tipton

CHAPTER 3

PROJECT RELATED WORK



Error-detecting codes: The basic approach used for error detection is the use of redundancy, where additional bits are added to facilitate the detection and correction of errors. Popular techniques are:

1) Simple Parity check


2) Two-dimensional Parity check
3) Checksum
4) Cyclic redundancy check

3.1 SIMPLE PARITY CHECKING OR ONE-DIMENSION PARITY CHECK:

The most common and least expensive mechanism for error detection is the simple parity check. In this technique, a redundant bit, called the parity bit, is appended to every data unit so that the number of 1s in the unit (including the parity bit) becomes even. Blocks of data from the source are passed through a check bit or parity bit generator, where a parity bit of 1 is added to the block if it contains an odd number of 1s (ON bits) and 0 is added if it contains an even number of 1s. At the receiving end, the parity bit is computed from the received data bits and compared with the received parity bit, as shown in Fig. This scheme makes the total number of 1s even, which is why it is called even parity checking.

Fig 3.1.1: Even-parity checking scheme

Table 3.1.1: Possible 4-bit data words and corresponding code words


Note that for the sake of simplicity, we are discussing here even-parity checking, where the number of 1s should be an even number. It is also possible to use odd-parity checking, where the number of 1s should be odd.
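As a minimal illustration (added here, not part of the original text), even-parity generation and checking in Python:

def even_parity_bit(data_bits):
    """Parity bit that makes the total number of 1s (data + parity) even."""
    return sum(data_bits) % 2

def check_even_parity(received_bits):
    """Accept the unit only if the total number of 1s is even."""
    return sum(received_bits) % 2 == 0

word = [1, 0, 1, 1]                         # 4-bit data word
codeword = word + [even_parity_bit(word)]   # appended parity bit -> [1, 0, 1, 1, 1]
print(check_even_parity(codeword))          # True: accepted
codeword[2] ^= 1                            # a single-bit error
print(check_even_parity(codeword))          # False: error detected

A single parity bit detects any odd number of bit errors but cannot locate them, which is why the stronger codes discussed later are needed.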

Two-dimensional parity check: Performance can be improved by using the two-dimensional parity check, which organizes the block of bits in the form of a table. Parity check bits are calculated for each row, which is equivalent to a simple parity check bit. Parity check bits are also calculated for all columns, and then both are sent along with the data. At the receiving end, these are compared with the parity bits calculated on the received data. This is illustrated in Fig. 3.1.2.

Fig 3.1.2: Two-dimensional parity checking
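A short Python sketch of the row/column parity computation (an illustration we add here; the 3 × 4 block is an arbitrary example):

def two_dimensional_parity(block):
    """Even parity for each row and each column of a block of bit rows."""
    row_parity = [sum(row) % 2 for row in block]
    col_parity = [sum(col) % 2 for col in zip(*block)]
    return row_parity, col_parity

block = [[1, 1, 0, 0],
         [0, 1, 1, 1],
         [1, 0, 1, 0]]
print(two_dimensional_parity(block))   # ([0, 1, 0], [0, 0, 0, 1])

A single corrupted bit flips exactly one row parity and one column parity, so its position can be located and corrected.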

3.2 CHECKSUM:


In the checksum error detection scheme, the data is divided into k segments of m bits each. At the sender's end, the segments are added using 1's complement arithmetic to get the sum. The sum is complemented to get the checksum. The checksum segment is sent along with the data segments, as shown in Fig. 3.2.1(a). At the receiver's end, all received segments are added using 1's complement arithmetic to get the sum. The sum is complemented. If the result is zero, the received data is accepted; otherwise it is discarded.

Fig 3.2.1: (a) Sender’s end for the calculation of the checksum, (b) Receiving end for
checking the checksum
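The sender and receiver sides of Fig. 3.2.1 can be sketched in Python as follows (illustrative code we add here; the segment values are arbitrary):

def ones_complement_add(a, b, m):
    """1's complement addition of two m-bit values (end-around carry)."""
    s = a + b
    mask = (1 << m) - 1
    while s >> m:                        # fold the carry back in
        s = (s & mask) + (s >> m)
    return s

def checksum(segments, m):
    """Sender: 1's complement sum of all segments, then complemented."""
    total = 0
    for seg in segments:
        total = ones_complement_add(total, seg, m)
    return total ^ ((1 << m) - 1)

def verify(segments, csum, m):
    """Receiver: the sum of all segments plus the checksum must complement to zero."""
    total = csum
    for seg in segments:
        total = ones_complement_add(total, seg, m)
    return total ^ ((1 << m) - 1) == 0

data = [0b1001, 0b1010, 0b0110]                  # k = 3 segments of m = 4 bits
c = checksum(data, 4)
print(verify(data, c, 4))                        # True: accepted
print(verify([0b1001, 0b1110, 0b0110], c, 4))    # False: corrupted segment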

3.3 CYCLIC REDUNDANCY CHECKS (CRC):

The cyclic redundancy check is the most powerful and easiest to implement technique. Unlike the checksum scheme, which is based on addition, CRC is based on binary division. In CRC, a sequence of redundant bits, called cyclic redundancy check bits, is appended to the end of the data unit so that the resulting data unit becomes exactly divisible by a second, predetermined binary number. At the destination, the incoming data unit is divided by the same number. If at this step there is no remainder, the data unit is assumed to be correct and is therefore accepted.

If a k-bit message is to be transmitted, the transmitter generates an r-bit sequence, known as the frame check sequence (FCS), so that (k+r) bits are actually transmitted. This r-bit FCS is generated by dividing the original number, appended by r zeros, by a predetermined number. This number, which is (r+1) bits in length, can also be considered as the coefficients of a polynomial, called the generator polynomial. The


remainder of this division process generates the r-bit FCS. On receiving the packet, the receiver divides the (k+r)-bit frame by the same predetermined number, and if it produces no remainder, it can be assumed that no error has occurred during the transmission. Operations at both the sender and receiver ends are shown in Fig.

Fig 3.3.1: Basic scheme for Cyclic Redundancy Checking

The mathematical operation is illustrated in Fig. 3.3.2: a 4-bit number is divided by the coefficients of the generator polynomial x^3 + x + 1, which is 1011, using modulo-2 arithmetic. Modulo-2 arithmetic is a binary addition process without any carry-over, which is just the exclusive-OR operation. Consider the case where k = 1101. Hence, we have to divide 1101000 (i.e., k appended by 3 zeros) by 1011, which produces the remainder r = 001, so that the frame (k+r) = 1101001 is actually transmitted through the communication channel. At the receiving end, the received number, i.e., 1101001, is divided by the same generator polynomial 1011; if the remainder is 000, it can be assumed that the data is free of errors.
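The division above can be reproduced with a few lines of Python (a sketch we add for illustration; it mirrors the k = 1101, generator = 1011 example):

def mod2_div(dividend: int, divisor: int) -> int:
    """Remainder of modulo-2 (XOR) long division of two binary numbers."""
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        shift = dividend.bit_length() - dlen
        dividend ^= divisor << shift     # XOR out the leading term
    return dividend

k, g = 0b1101, 0b1011                    # message and generator x^3 + x + 1
r = mod2_div(k << 3, g)                  # append 3 zeros, divide: r = 001
frame = (k << 3) | r                     # transmitted frame 1101001
print(bin(r), bin(frame))                # 0b1 0b1101001
print(mod2_div(frame, g) == 0)           # True: no remainder, no error
print(mod2_div(frame ^ 0b0010000, g) == 0)   # False: a corrupted bit is detected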


Fig 3.3.2: Cyclic Redundancy Checks (CRC) example

The transmitter can generate the CRC by using a feedback shift register circuit. The same circuit can also be used at the receiving end to check whether any error has occurred. All the values can be expressed as polynomials in a dummy variable X. For example, for P = 11001 the corresponding polynomial is X^4 + X^3 + 1. A polynomial is selected to have at least the following properties:

 It should not be divisible by X.


 It should not be divisible by (X+1).

The first condition guarantees that all burst errors of a length equal to the degree of the polynomial are detected. The second condition guarantees that all burst errors affecting an odd number of bits are detected. The CRC process can be expressed as X^n M(X)/P(X) = Q(X) + R(X)/P(X).

Commonly used divisor polynomials are:

• CRC-16 = X^16 + X^15 + X^2 + 1

• CRC-CCITT = X^16 + X^12 + X^5 + 1

• CRC-32 = X^32 + X^26 + X^23 + X^22 + X^16 + X^12 + X^11 + X^10 + X^8 + X^7 + X^5 + X^4 + X^2 + X + 1

When a particle from a cosmic ray hits a basic memory cell, it generates a radial distribution of electron–hole pairs along the transport track. These generated electron–hole pairs can cause soft errors by changing the values stored in the memory cell, leading to data corruption and system failure. For transistors with a large feature size, a radiation event affects just one memory cell, which means that only an SEU occurs. In this case, the use of single error-correction (SEC), double error-detection (DED) codes is enough to protect the memory from radiation effects. As the feature size enters the DSM range, the critical charge keeps decreasing and the area of the memory cell scales down for each successive technology node. This makes more memory cells affected by a particle hit, as shown in Fig. 2. For the CMOS bulk technology, with the cell-to-cell spacing decreasing, the electron–hole pairs generated in the substrate can diffuse to nearby cells and induce MBUs.


FIG 3.3.3: Schematic description of memory cells included in the radiation effect with
variation of the technology node.

Therefore, the multicollection mechanism is more prominent for a bulk technology, and the MBU probability is higher. To protect against MBUs, ECCs that correct adjacent bit errors or multiple bit errors are utilized. Although multiple bit error-correction codes (ECCs) can correct multiple bit errors in any error patterns, not limited to the adjacent bits, the complexity of the decoding process and the limitation of the code block size limit their use. Meanwhile, from the generation principle of MBUs, the type of the MBUs depends on the initial angle of incidence and the scattering angle of the secondary particles.

3.4 Binary Linear Block Code (BLBC):

The set of 2^k code words is called a block code. For a block code to be useful, there should be a one-to-one correspondence between a message u and its code word v. A desirable structure for a block code to possess is linearity. With this structure, the encoding complexity is greatly reduced.

1. An (n, k) binary linear block code is a k-dimensional subspace of the n-dimensional vector space V_n = {(c_0, c_1, ..., c_{n−1}) | c_j ∈ GF(2) for all j}; n is called the length of the code and k the dimension.

2. Example: a (6, 3) code

C = {000000, 100110, 010101, 001011, 110011, 101101, 011110, 111000}

3.5 Generator Matrix:


1. An (n, k) BLBC can be specified by any set of k linearly independent code words c_0, c_1, ..., c_{k−1}. If we arrange the k code words into a k × n matrix G, G is called a generator matrix for C.

2. Let u = (u_0, u_1, ..., u_{k−1}), where u_j ∈ GF(2). Then c = (c_0, c_1, ..., c_{n−1}) = uG.

3. The generator matrix G′ of a systematic code has the form [I_k A], where I_k is the k × k identity matrix.

4. G′ can be obtained by permuting the columns of G and by doing some row operations on G. We say that the code generated by G′ is an equivalent code of the code generated by G.
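As an illustrative sketch (added here), the (6, 3) code above can be generated by taking the three linearly independent code words 100110, 010101, and 001011 as the rows of a systematic generator matrix G = [I_3 A]:

import itertools

G = [[1, 0, 0, 1, 1, 0],     # rows of G: code words of the (6, 3) code
     [0, 1, 0, 1, 0, 1],     # in systematic form [I_3 | A]
     [0, 0, 1, 0, 1, 1]]

def encode(u, G):
    """c = uG over GF(2): XOR together the rows of G selected by the message bits."""
    c = [0] * len(G[0])
    for bit, row in zip(u, G):
        if bit:
            c = [a ^ b for a, b in zip(c, row)]
    return c

# Enumerating all 2^3 messages reproduces the code C listed above.
for u in itertools.product([0, 1], repeat=3):
    print(''.join(map(str, u)), '->', ''.join(map(str, encode(u, G))))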

Therefore, if one error can be corrected or detected, it obeys the following rules.

1) Correctable Restriction: The corresponding syndrome vector is unique in the set of


the syndromes.

2) Detectable Restriction: The corresponding syndrome vector is nonzero.

3.6 CODE DESIGN TECHNIQUE:

We now discuss the code design technique for 3-bit BEC with QAEC. The approach used is based on syndrome decoding and the analysis of the requirements in terms of parity check bits. The formulation of the problem is similar to the one used in some previous studies; the process used to design QAEC codes can be divided into an error space satisfiability problem and a unique syndrome satisfiability problem.

Error Space Satisfiability

For a code with k data bits and c check bits, its input can represent 2^k binary values. If one error occurs, there are k + c bit positions for a single error, with an output space of (k + c) · 2^k values. If adjacent errors occur, there are k + c − 1 bit positions for double adjacent errors with an output space of (k + c − 1) · 2^k values, k + c − 2 bit positions for triple adjacent errors with an output space of (k + c − 2) · 2^k values, ..., and k + c − (N − 1) bit positions for N adjacent errors with an output space of (k + c − (N − 1)) · 2^k values. If almost adjacent errors occur, there are k + c − 2 bit positions for errors in a 3-bit window with an output space of (k + c − 2) · 2^k values, and k + c − 3 bit positions for each type of error in a 4-bit window with an output space of (k + c − 3) · 2^k values. To obtain the codes that can correct the errors of certain fault types, the sum of the output space values of the error patterns should be less than or equal to the whole output space value 2^(k+c).

For the proposed code, to correct 3-bit burst and quadruple adjacent errors, the total count of error patterns is (k + c − 3) + (k + c − 2) + (k + c − 1) + (k + c) + (k + c − 2), respectively, for quadruple adjacent errors, triple adjacent errors, double adjacent errors, single errors, and 3-bit almost adjacent errors.
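This bound is easy to check mechanically. A small Python sketch (ours, for illustration; the check-bit counts follow Table I of the paper):

def error_space_ok(k: int, c: int) -> bool:
    """The summed output space of all correctable patterns must fit in 2^(k+c):
    (single + double-adj + triple-adj + 3-bit-almost-adj + quad-adj) * 2^k <= 2^(k+c)."""
    n = k + c
    patterns = n + (n - 1) + (n - 2) + (n - 2) + (n - 3)
    return patterns * 2 ** k <= 2 ** (k + c)

for k, c in [(16, 7), (32, 8), (64, 9)]:             # check-bit counts from Table I
    print(f"k={k}, c={c}: {error_space_ok(k, c)}")   # True for all three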

Unique Syndrome Satisfiability

From the viewpoint of binary linear block codes, if a type of error pattern can be corrected, the syndrome of each individual error pattern should be unique. For the proposed code, the error patterns are (..., 1, ...) for SEC, (..., 11, ...) for DAEC, (..., 111, ...) for TAEC, (..., 1111, ...) for QAEC, and (..., 101, ...) for 3-bit almost adjacent error correction. Therefore, the unique syndrome satisfiability can be expressed by

S0i ≠ S0j, S1i ≠ S1j, S2i ≠ S2j, S3i ≠ S3j, S4i ≠ S4j (for i ≠ j),

S0i ≠ S1j ≠ S2k ≠ S3l ≠ S4m,

where S0i is the syndrome for a single bit error, S1i the syndrome for double adjacent bit errors, S2i the syndrome for triple adjacent bit errors, S3i the syndrome for 3-bit almost adjacent bit errors, and S4i the syndrome for quadruple adjacent bit errors. The syndrome variables S0i, S1i, S2i, S3i, and S4i are linear combinations of the H matrix columns obeying the rules

S0i = hi

S1i = hi ⊕ hi−1

S2i = hi ⊕ hi−1 ⊕ hi−2

S3i = hi ⊕ hi−2

S4i = hi ⊕ hi−1 ⊕ hi−2 ⊕ hi−3
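These rules translate directly into a uniqueness check over the columns of a candidate H matrix. A Python sketch (ours; each column is packed into an integer, one bit per parity check):

def syndromes(H_cols):
    """All syndromes demanded by the design rules, as XORs of H columns;
    H_cols[i] is column i of H packed into an int."""
    n = len(H_cols)
    out = [H_cols[i] for i in range(n)]                                   # S0: single
    out += [H_cols[i] ^ H_cols[i-1] for i in range(1, n)]                 # S1: double adjacent
    out += [H_cols[i] ^ H_cols[i-1] ^ H_cols[i-2] for i in range(2, n)]   # S2: triple adjacent
    out += [H_cols[i] ^ H_cols[i-2] for i in range(2, n)]                 # S3: 3-bit almost adjacent
    out += [H_cols[i] ^ H_cols[i-1] ^ H_cols[i-2] ^ H_cols[i-3]
            for i in range(3, n)]                                         # S4: quadruple adjacent
    return out

def unique_syndromes(H_cols):
    """True iff every correctable pattern has a distinct, nonzero syndrome."""
    s = syndromes(H_cols)
    return 0 not in s and len(s) == len(set(s))

For a valid (n, k) QAEC code, unique_syndromes returns True for the n columns of any of the H matrices presented in Chapter 4.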

3.7 SEARCHING ALGORITHMS AND TOOL DEVELOPMENT:


An algorithm is proposed to solve the Boolean satisfiability problem based on the discussion in the former section. Based on the algorithm, a code optimization tool is developed to obtain the target H matrix with custom optimization restrictions. The introduction of the algorithm is divided into two subsections. In the first subsection, the basic part of the algorithm is introduced to find the solutions meeting the requirement of Boolean satisfiability. In the second subsection, based on the basic part, a method with column weight restriction is designed to force the optimization process to use as few ones as possible, thus optimizing the total number of ones in the matrix and the number of ones in the heaviest row.

3.7.1 Basic Part of Code Design Algorithm:

In order to construct the expected codes, the first step is to determine the number of check bits. From the aspect of low redundancy, the number of check bits is set to the values shown in Table I: seven for codes with 16 data bits, eight for codes with 32 data bits, and nine for codes with 64 data bits.

The main idea of the algorithm is based on the recursive backtracking algorithm. At first, an identity matrix with block size (n − k) is constructed as the initial matrix, and the corresponding syndromes of the error patterns are added to the syndrome set. Then, a column vector selected from the 2^(n−k) − 1 column candidates is added to the right side of the initial matrix. This process is defined as the column-added action. Meanwhile, the new syndromes which belong to the newly added column are calculated. If none of the new syndromes is equal to an element in the syndrome set, the column-added action is successful and the corresponding new syndromes are added into the syndrome set. The base matrix is updated to the previous matrix with the added column. Otherwise, the column-added action fails and another new column from the candidates is selected. If all the candidates are tried and the column-added action still fails, one column from the right side of the previous matrix and the corresponding syndromes are


Fig 3.3.4: Flow of code design algorithm.

removed from the base matrix and the syndrome set, respectively. Then, the algorithm continues the column-added action until the matrix dimension reaches the expected value.
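A compact Python sketch of this backtracking search (illustrative only; it omits the weight restriction and the optimization criteria described in the next subsection):

def new_syndromes(cols, cand):
    """Syndromes introduced by appending column `cand` at position i = len(cols):
    S0i, S1i, S2i, S3i, S4i from the rules in Section 3.6."""
    i = len(cols)
    s = [cand]                                                   # S0i: single error
    if i >= 1:
        s.append(cand ^ cols[i-1])                               # S1i: double adjacent
    if i >= 2:
        s.append(cand ^ cols[i-1] ^ cols[i-2])                   # S2i: triple adjacent
        s.append(cand ^ cols[i-2])                               # S3i: 3-bit almost adjacent
    if i >= 3:
        s.append(cand ^ cols[i-1] ^ cols[i-2] ^ cols[i-3])       # S4i: quadruple adjacent
    return s

def find_H(n, k):
    """Grow the matrix column by column, keeping every correctable
    pattern's syndrome unique and nonzero; backtrack on failure."""
    c = n - k
    cols, used = [], set()
    for col in (1 << i for i in range(c)):     # initial identity matrix
        used.update(new_syndromes(cols, col))
        cols.append(col)

    def extend():
        if len(cols) == n:                     # expected dimension reached
            return True
        for cand in range(1, 2 ** c):          # the 2^(n-k) - 1 candidates
            s = new_syndromes(cols, cand)
            if 0 not in s and len(set(s)) == len(s) and used.isdisjoint(s):
                cols.append(cand); used.update(s)        # column-added action succeeds
                if extend():
                    return True
                cols.pop(); used.difference_update(s)    # remove column, backtrack
        return False                           # all candidates failed

    return cols if extend() else None

Without any weight restriction, the run time of find_H grows quickly with the block size, which is why the restriction described next matters.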

3.7.2 Optimization Part of Code Design Algorithm:

For both optimization criteria mentioned before, the column used to construct the
matrix should have as low weight as possible. In this case, the weight of the used column
is restricted so as to minimize the weight of the matrix and speed up the process of
finding better solutions. The method of column weight restriction for 3-bit BEC codes is shown in Fig. 3.3.5,

Fig 3.3.5: Restriction of the column weight

where Ai is the available minimum weight of the column. Through the control of the weight of each column, the computing operation can quickly cover the whole cycle

list by decreasing the number of available candidates for the columns added to the base matrix. Although this exhaustive control can quickly narrow the searching scope, it is unwise to use it for all the bits, as it can lead to missing better solutions due to too harsh a restriction. With the proposed column weight restriction method, the searching algorithm with more optimization criteria for the columns can find the solutions more effectively. The performance of the new finding process is remarkably effective for codes with small block size. When the number of data bits reaches 32 or 64, the computing time is still huge. Here, a function of recording the past procedure is also developed along with the column weight restriction method to shorten the time cost of searching for the target codes. The detailed algorithm flow is shown in Fig. 3.3.6.

Fig 3.3.6: Flow of the algorithm with column weight restriction and past procedure record.

At the initialization step, all the column weight restrictions are set to (n − k). Then, A0 is set to 2, which means that the number of 1s in the corresponding column is 2. The searching algorithm with column weight restriction starts to find the solution. If a solution exists, the status of the restricted bit is recorded and updated, and the algorithm shifts to the next bit's column weight setting. Otherwise, it releases the column weight restriction to Ai = Ai + 1. As the program constraints tighten, the search closes in on the target code.
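A hedged sketch of how the candidate loop changes under the column weight restriction (the names are ours; Ai is the minimum available weight for column i):

def candidates_by_weight(c: int, max_weight: int):
    """Candidate c-bit columns, lightest first, capped at max_weight ones.
    Restricting the weight shrinks the search space and steers the search
    toward matrices with fewer total ones."""
    cands = [v for v in range(1, 2 ** c) if bin(v).count('1') <= max_weight]
    return sorted(cands, key=lambda v: bin(v).count('1'))

# In the restricted search, column i draws its candidates from
# candidates_by_weight(c, A[i]); when no solution exists under the current
# restriction, it is released step by step (A[i] += 1) up to n - k.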


CHAPTER 4

PROPOSED SYSTEM

In terms of computing time, it is possible for QAEC codes with 16 data bits to enumerate all the solutions, but it is impossible for QAEC codes with 32 and 64 data bits. Therefore, in this paper, the best solutions are presented for QAEC codes with 16 data bits, and the best solutions found in a reasonable time (one week) using the proposed searching algorithm are presented for QAEC codes with 32 and 64 data bits. In terms of the two optimization criteria mentioned previously, for the (23, 16) code, the two best parity check matrices optimize both criteria. The parity check matrix for (40, 32) optimized to reduce the total number of ones is shown in Fig. 4.4.3, and the parity check matrix for (40, 32) optimized to reduce the maximum number of ones in a row in Fig. 4.4.4; for the (73, 64) code, the solutions obtained with both criteria optimized are the same.

4.1 PROCEDURE OF ENCODING AND DECODING FOR QAEC CODES:

The fundamental theory of encoding and decoding was discussed above; the parity check matrices used here are shown in the following figures.

Fig 4.4.1: Best parity check matrix for the (23, 16) 3-bit burst with QAEC optimized to reduce the total number of ones and the maximum number of ones in a row.

Fig 4.4.2: Best parity check matrix for the (23, 16) 3-bit burst with QAEC optimized to reduce the total number of ones and the maximum number of ones in a row.

Fig 4.4.3: Parity check matrix for the (40, 32) 3-bit burst with QAEC optimized to reduce the total number of ones.


Fig 4.4.4: Parity check matrix for the (40, 32) 3-bit burst with QAEC optimized to reduce the maximum number of ones in a row.

Fig 4.4.5: Parity check matrix for the (73, 64) 3-bit burst with QAEC optimized to reduce the total number of ones and the maximum number of ones in a row.

When particles hit the memory and cause MBUs, the contents of the affected memory cells are flipped. Here, to elaborate on the correction ability of QAEC codes, quadruple adjacent bits are flipped on D2, D3, D4, and D5. In the decoding process, the syndrome is calculated using the stored check bits and data bits and the structure of the parity check matrix. Through the correspondence between the syndrome and the XOR result of the columns described in Section 3.6, the flipped bits can be located. With the flipped bits inverted, the errors from the storage stage in the memory are effectively corrected. This is the whole procedure of encoding and decoding for the proposed QAEC codes.
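The whole procedure can be sketched end to end in Python (illustrative only: H_cols stands for the columns of one of the parity check matrices of Chapter 4, e.g. as produced by the find_H sketch; the real matrices are given in the figures):

def build_correction_table(H_cols):
    """Map each correctable syndrome to the bit positions to invert."""
    n = len(H_cols)
    patterns = ([(i,) for i in range(n)] +                     # single
                [(i-1, i) for i in range(1, n)] +              # double adjacent
                [(i-2, i-1, i) for i in range(2, n)] +         # triple adjacent
                [(i-2, i) for i in range(2, n)] +              # 3-bit almost adjacent
                [(i-3, i-2, i-1, i) for i in range(3, n)])     # quadruple adjacent
    table = {}
    for pos in patterns:
        s = 0
        for p in pos:
            s ^= H_cols[p]
        table[s] = pos
    return table

def decode(word_bits, H_cols, table):
    """Syndrome decoding: locate the flipped bits and invert them."""
    s = 0
    for bit, col in zip(word_bits, H_cols):
        if bit:
            s ^= col                       # syndrome from the stored bits and H
    if s == 0:
        return word_bits                   # no error
    pattern = table.get(s)
    if pattern is None:
        raise ValueError("uncorrectable error pattern")
    corrected = list(word_bits)
    for p in pattern:                      # flipped bits located...
        corrected[p] ^= 1                  # ...and inverted
    return corrected

Flipping four adjacent bits of a stored code word (the D2-D5 example above) and calling decode returns the original word, completing the correction.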


Fig 4.4.6: Parity check matrix and input

Fig 4.4.7: Encoding process of quadruple adjacent error correction (QAEC)


Fig 4.4.8: Decoding process of quadruple adjacent error correction (QAEC)


CHAPTER 5

HARDWARE REQUIREMENTS:

5.1 GENERAL
Integrated circuit (IC) technology is the enabling technology for a whole host of innovative devices and systems that have changed the way we live. Jack Kilby received the 2000 Nobel Prize in Physics for his part in the invention of the integrated circuit; without the integrated circuit, neither transistors nor computers would be as important as they are today. VLSI systems are much smaller and consume less power than the discrete components used to build electronic systems before the 1960s.
Integration allows us to build systems with many more transistors, allowing much
more computing power to be applied to solving a problem. Integrated circuits are also
much easier to design and manufacture and are more reliable than discrete systems; that
makes it possible to develop special-purpose systems that are more efficient than general-
purpose computers for the task at hand.

5.2 APPLICATIONS OF VLSI

Electronic systems now perform a wide variety of tasks in daily life. Electronic systems in some cases have replaced mechanisms that operated mechanically, hydraulically, or by other means; electronics are usually smaller, more flexible, and easier to service. In other cases, electronic systems have created totally new applications. Electronic systems perform a variety of tasks, some of them visible, some more hidden:
 Personal entertainment systems such as portable MP3 players and DVD players perform
sophisticated algorithms with remarkably little energy.
 Electronic systems in cars operate stereo systems and displays; they also control fuel
injection systems, adjust suspensions to varying terrain, and perform the control functions
required for anti-lock braking (ABS) systems.
 Digital electronics compress and decompress video, even at high-definition data rates, on-the-fly in consumer electronics.
 Low-cost terminals for Web browsing still require sophisticated electronics, despite their
dedicated function.


 Personal computers and workstations provide word-processing, financial analysis, and


games. Computers include both central processing units (CPUs) and special-purpose
hardware for disk access, faster screen display, etc.

Medical electronic systems measure bodily functions and perform complex


processing algorithms to warn about unusual conditions. The availability of these
complex systems, far from overwhelming consumers, only creates demand for even more
complex systems. The growing sophistication of applications continually pushes the
design and manufacturing of integrated circuits and electronic systems to new levels of
complexity.
And perhaps the most amazing characteristic of this collection of systems is its variety: as systems become more complex,
we build not a few general-purpose computers but an ever-wider range of special-purpose
systems. Our ability to do so is a testament to our growing mastery of both integrated
circuit manufacturing and design, but the increasing demands of customers continue to
test the limits of design and manufacturing.

5.3 ADVANTAGES OF VLSI:


While we will concentrate on integrated circuits in this book, the properties of integrated circuits (what we can and cannot efficiently put in an integrated circuit) largely determine the architecture of the entire system. Integrated circuits improve system characteristics in several critical ways. ICs have three key advantages over digital circuits built from discrete components:


 Size: Integrated circuits are much smaller; both transistors and wires are shrunk to micrometer sizes, compared to the millimeter or centimeter scales of discrete components. Small size leads to advantages in speed and power consumption, since smaller components have smaller parasitic resistances, capacitances, and inductances.
 Speed: Signals can be switched between logic 0 and logic 1 much more quickly within a chip than they can between chips. Communication within a chip can occur hundreds of times faster than communication between chips on a printed circuit board. The high speed of circuits on-chip is due to their small size: smaller components and wires have smaller parasitic capacitances to slow down the signal.
 Power consumption: Logic operations within a chip also take much less power. Once again, lower power consumption is largely due to the small size of circuits on the chip: smaller parasitic capacitances and resistances require less power to drive them.

5.4 VLSI AND SYSTEMS


These advantages of integrated circuits translate into advantages at the system level:
 Smaller physical size: Smallness is often an advantage in itself—consider portable
televisions or handheld cellular telephones.
 Lower power consumption: Replacing a handful of standard parts with a single chip reduces total power consumption. Reducing power consumption has a ripple effect on the rest of the system: a smaller, cheaper power supply can be used; since less power consumption means less heat, a fan may no longer be necessary; a simpler cabinet with less electromagnetic shielding may be feasible, too.
 Reduced cost: Reducing the number of components, the power supply requirements,
cabinet costs, and so on, will inevitably reduce system cost. The ripple effect of
integration is such that the cost of a system built from custom ICs can be less, even
though the individual ICs cost more than the standard parts they replace. Understanding
why integrated circuit technology has such profound influence on the design of digital
systems requires understanding both the technology of IC manufacturing and the
economics of ICs and digital systems.

5.5 INTEGRATED CIRCUIT MANUFACTURING


Integrated circuit technology is based on our ability to manufacture huge numbers


of very small devices—today, more transistors are manufactured in California each year
than raindrops fall on the state. In this section, we briefly survey VLSI manufacturing.

5.6 TECHNOLOGY
Most manufacturing processes are fairly tightly coupled to the item they are manufacturing. An assembly line built to produce Buicks, for example, would have to undergo moderate reorganization to build Chevys: tools like sheet metal molds would have to be replaced, and even some machines would have to be modified. And either assembly line would be far removed from what is required to produce electric drills.

5.7 MASK-DRIVEN MANUFACTURING


Integrated circuit manufacturing technology, on the other hand, is remarkably versatile. While there are several manufacturing processes for different circuit types (CMOS, bipolar, etc.), a manufacturing line can make any circuit of that type simply by changing a few basic tools called masks. For example, a single CMOS manufacturing plant can make both microprocessors and microwave oven controllers by changing the masks that form the patterns of wires and transistors on the chips.
Silicon wafers are the raw material of IC manufacturing. The fabrication process forms patterns on the wafer that create wires and transistors. A series of identical chips is patterned onto the wafer (with some space reserved for test circuit structures, which allow manufacturing to measure the results of the manufacturing process).
The IC manufacturing process is efficient because we can produce many identical chips by processing a single wafer. By changing the masks that determine what patterns are laid down on the chip, we determine the digital circuit that will be created.
The IC fabrication line is a generic manufacturing line: we can quickly retool the line to make large quantities of a new kind of chip, using the same processing steps used for the line's previous product.

5.8 CIRCUITS AND LAYOUTS


We could build a breadboard circuit out of standard parts. To build it on an IC
fabrication line, we must go one step further and design the layout, or patterns on the
masks. The rectangular shapes in the layout (shown here as a sketch called a stick
diagram) form transistors and wires which conform to the circuit in the schematic.


Creating layouts is very time-consuming and very important: the size of the layout determines the cost to manufacture the circuit, and the shapes of elements in the layout determine the speed of the circuit as well. During manufacturing, a photolithographic (photographic printing) process is used to transfer the layout patterns from the masks to the wafer. The patterns left by the mask are used to selectively change the wafer: impurities are added at selected locations in the wafer; insulating and conducting materials are added on top of the wafer as well.
These fabrication steps require high temperatures, small amounts of highly toxic
chemicals, and extremely clean environments. At the end of processing, the wafer is
divided into a number of chips.

5.9 MANUFACTURING DEFECTS


Because no manufacturing process is perfect, some of the chips on the wafer may not work. Since at least one defect is almost sure to occur on each wafer, wafers are cut into smaller, working chips; the largest chip that can be reasonably manufactured today is 1.5 to 2 cm on a side, while wafer sizes are moving from 30 to 45 cm in diameter. Each chip is individually tested; the ones that pass the test are saved after the wafer is diced into chips. The working chips are placed in the packages familiar to digital designers.
In some packages, tiny wires connect the chip to the package's pins while the package body protects the chip from handling and the elements; in others, solder bumps directly connect the chip to the package. Integrated circuit manufacturing is a powerful technology for two reasons: all circuits can be made out of a few types of transistors and wires; and any combination of wires and transistors can be built on a single fabrication line just by changing the masks that determine the pattern of components on the chip. Integrated circuits run very fast because the circuits are very small.
Just as important, we are not stuck building a few standard chip types; we can build any function we want. The flexibility given by IC manufacturing lets us build faster, more complex digital systems in ever greater variety.

5.10 ECONOMICS
Because integrated circuit manufacturing has so much leverage (a great number of parts can be built with a few standard manufacturing procedures), a great deal of effort has gone into improving IC manufacturing. However, as chips become more complex, the cost of designing a chip goes up and becomes a major part of the overall cost of the chip.
 Moore’s Law


In the 1960s Gordon Moore predicted that the number of transistors that could be
manufactured on a chip would grow exponentially. His prediction, now known as
Moore’s Law, was remarkably prescient. Moore’s ultimate prediction was that transistor
count would double every two years, an estimate that has held up remarkably well.
Today, an industry group maintains the International Technology Roadmap for
Semiconductors (ITRS), which maps out strategies to maintain the pace of Moore's Law.
 Terminology
The most basic parameter associated with a manufacturing process is the minimum
channel length of a transistor. (In this book, for example, we will use as an example a
technology that can manufacture 180 nm transistors.) A manufacturing technology at a
particular channel length is called a technology node. We often refer to a family of
technologies at similar feature sizes: micron, submicron, deep submicron, and now
nanometer technologies. The term nanometer technology is generally used for
technologies below 100 nm.

5.11 COST OF MANUFACTURING


IC manufacturing plants are extremely expensive. A single plant costs as much as
$4 billion. Given that a new, state-of-the-art manufacturing process is developed every
three years, that is a sizeable investment. The investment makes sense because a single
plant can manufacture so many chips and can easily be switched to manufacture different
types of chips.
In the early years of the integrated circuits business, companies focused on building large quantities of a few standard parts. These parts are commodities: one 80 ns, 256 Mb dynamic RAM is more or less the same as any other, regardless of the manufacturer.
Companies concentrated on commodity parts in part because manufacturing processes were less well understood and manufacturing variations are easier to keep track of when the same part is being fabricated day after day.
Standard parts also made sense because designing integrated circuits was hard: not only the circuit, but also the layout had to be designed, and there were few computer programs to help automate the design process.
5.12 COST OF DESIGN


 One of the less fortunate consequences of Moore’s Law is that the time and
money required to design a chip goes up steadily. The cost of designing a chip
comes from several factors:
 Skilled designers are required to specify, architect, and implement the chip. A
design team may range from a half-dozen people for a very small chip to 500
people for a large, high-performance microprocessor.
 These designers cannot work without access to a wide range of computer-aided
design (CAD) tools. These tools synthesize logic, create layouts, simulate, and
verify designs. CAD tools are generally licensed and you must pay a yearly fee to
maintain the license. A license for a single copy of one tool, such as logic
synthesis, may cost as much as $50,000 US.
 The CAD tools require a large compute farm on which to run. During the most
intensive part of the design process, the design team will keep dozens of
computers running continuously for weeks or months.
 A large ASIC, which contains millions of transistors but is not fabricated on the
state-of-the-art process, can easily cost $20 million US and as much as $100
million. Designing a large microprocessor costs hundreds of millions of dollars.

DESIGN COSTS AND IP


We can spread these design costs over more chips if we can reuse all or part of the
design in other chips. The high cost of design is the primary motivation for the rise of
IP-based design, which creates modules that can be reused in many different designs.

5.13 TYPES OF CHIPS


The preponderance of standard parts pushed the problems of building customized
systems back to the board-level designers who used the standard parts.
Since a function built from standard parts usually requires more components than
if the function were built with custom-designed ICs, designers tended to build smaller,
simpler systems. The industrial trend, however, is to make available a wider variety of
integrated circuits. The greater diversity of chips includes:

More specialized standard parts:


In the 1960s, standard parts were logic gates; in the 1970s they were LSI
components. Today, standard parts include fairly specialized components: communication
network interfaces, graphics accelerators, floating point processors. All these parts are
more specialized than microprocessors but are used in enough volume that designing
special-purpose chips is worth the effort.
In fact, putting a complex, high-performance function on a single chip often
makes other applications possible; for example, single-chip floating point processors make
high-speed numeric computation available on even inexpensive personal computers.

• Application-specific integrated circuits (ASICs)


Rather than build a system out of standard parts, designers can now create a single
chip for their particular application. Because the chip is specialized, the functions of
several standard parts can often be squeezed into a single chip, reducing system size,
power, heat, and cost. Application-specific ICs are possible because of computer tools
that help humans design chips much more quickly.
• Systems-on-chips (SoCs).
Fabrication technology has advanced to the point that we can put a complete
system on a single chip. For example, a single-chip computer can include a CPU, bus, I/O
devices, and memory. SoCs allow systems to be made at much lower cost than the
equivalent board-level system. SoCs can also be higher performance and lower power
than board-level equivalents because on-chip connections are more efficient than
chip-to-chip connections.
A wider variety of chips is now available in part because fabrication methods are
better understood and more reliable. More importantly, as the number of transistors per
chip grows, it becomes easier and cheaper to design special-purpose ICs. When only a
few transistors could be put on a chip, careful design was required to ensure that even
modest functions could be put on a single chip. Today’s VLSI manufacturing processes,
which can put millions of carefully-designed transistors on a chip, can also be used to put
tens of thousands of less-carefully designed transistors on a chip.
Even though the chip could be made smaller or faster with more design effort, the
advantages of having a single-chip implementation of a function that can be quickly
designed often outweigh the lost potential performance.
The problem, and the challenge, of the ability to manufacture such large chips is
design: the ability to make effective use of the millions of transistors on a chip to perform
a useful function.


5.14 CMOS TECHNOLOGY:


CMOS is the dominant integrated circuit technology. In this section we will
introduce some basic concepts of CMOS to understand why it is so widespread and some
of the challenges introduced by the inherent characteristics of CMOS.

5.14.1 POWER CONSUMPTION


POWER CONSUMPTION CONSTRAINTS:
The huge chips that can be fabricated today are possible only because of the
relatively tiny power consumption of CMOS circuits. Power consumption is critical at the chip
level because much of the power is dissipated as heat, and chips have limited heat
dissipation capacity. Even if the system in which a chip is placed can supply large
amounts of power, most chips are packaged to dissipate less than 10 to 15 W of
power before they suffer permanent damage (though some chips dissipate well over 50
W thanks to special packaging).
The power consumption of a logic circuit can, in the worst case, limit the number of
transistors we can effectively put on a single chip. Limiting the number of transistors per
chip changes system design in several ways. Most obviously, it increases the physical size
of a system. Using high-powered circuits also increases power supply and cooling
requirements.
A more subtle effect is caused by the fact that the time required to transmit a
signal between chips is much larger than the time required to send the same signal
between two transistors on the same chip; as a result, some of the advantage of using a
higher-speed circuit family is lost. Another subtle effect of decreasing the level of
integration is that the electrical design of multi-chip systems is more complex:
microscopic wires on-chip exhibit parasitic resistance and capacitance, while macroscopic
wires between chips have capacitance and inductance, which can cause a number of
ringing effects that are much harder to analyze.
The close relationship between power consumption and heat makes low-power
design techniques important knowledge for every CMOS designer. Of course, low-energy
design is especially important in battery-operated systems like cellular telephones.
Power can be reduced by, for example, slowing the clock; energy, in contrast, must be
saved by avoiding unnecessary work.
We will see throughout the rest of this book that minimizing power and energy
consumption requires careful attention to detail at every level of abstraction, from system
architecture down to layout.


As CMOS features become smaller, additional power consumption mechanisms
come into play. Traditional CMOS consumes power when signals change but consumes
only negligible power when idle. In modern CMOS, leakage mechanisms start to drain
current even when the circuit is idle.
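As a rough quantitative sketch (this is the standard first-order CMOS power model, stated
as general background rather than anything specific to this text), the dynamic power of a
CMOS circuit is

P_dyn = a · C_L · V_DD^2 · f

where a is the switching activity, C_L the switched load capacitance, V_DD the supply
voltage, and f the clock frequency; the leakage component is approximately
P_leak = I_leak · V_DD. The quadratic dependence on V_DD is why lowering the supply
voltage is the single most effective low-power lever.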

5.14.2 DESIGN AND TESTABILITY


DESIGN VERIFICATION
Our ability to build large chips of unlimited variety introduces the problem of
checking whether those chips have been manufactured correctly. Designers accept the
need to verify or validate their designs to make sure that the circuits perform the specified
function. (Some people use the terms verification and validation interchangeably; a finer
distinction reserves verification for formal proofs of correctness, leaving validation to
mean any technique which increases confidence in correctness, such as simulation.)

Chip designs are simulated to ensure that the chip’s circuits compute the proper
functions for a sequence of inputs chosen to exercise the chip. But each chip that comes
off the manufacturing line must also undergo manufacturing test:
The chip must be exercised to demonstrate that no manufacturing defects rendered
the chip useless. Because IC manufacturing tends to introduce certain types of defects and
because we want to minimize the time required to test each chip, we can’t just use the
input sequences created for design verification to perform manufacturing test. Each chip
must be designed to be fully and easily testable. Finding out that a chip is bad only after
you have plugged it into a system is annoying at best and dangerous at worst. Customers
are unlikely to keep using manufacturers who regularly supply bad chips.
Defects introduced during manufacturing range from the catastrophic (contamination
that destroys every transistor on the wafer) to the subtle (a single broken wire or a
crystalline defect that kills only one transistor). While some bad chips can be
found very easily, each chip must be thoroughly tested to find even subtle flaws that
produce erroneous results only occasionally. Tests designed to exercise functionality and
expose design bugs don’t always uncover manufacturing defects. We use fault models to
identify potential manufacturing problems and determine how they affect the chip’s
operation.


The most common fault model is stuck-at-0/1: the defect causes a logic gate’s
output to be always 0 (or 1), independent of the gate’s input values. We can often
determine whether a logic gate’s output is stuck even if we can’t directly observe its
outputs or control its inputs. We can generate a good set of manufacturing tests for the
chip by assuming each logic gate’s output is stuck at 0 (then 1) and finding an input to the
chip which causes different outputs when the fault is present or absent.
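To make the stuck-at idea concrete, the following is a minimal, purely illustrative Verilog
sketch (module and signal names are hypothetical, not taken from this project). For a
fault-free 2-input NAND, the pattern A = 0, B = 0 must produce a 1 at the output, so
observing a 0 exposes a stuck-at-0 fault there:

module stuck_at_test;
    reg a, b;
    wire y;
    // Device under test: a fault-free NAND drives y = ~(a & b).
    nand dut (y, a, b);
    initial begin
        a = 0; b = 0; #1;  // fault-free response for this pattern is y = 1
        if (y !== 1'b1)
            $display("stuck-at-0 fault detected on y");
        else
            $display("y passes the stuck-at-0 test");
    end
endmodule

A full manufacturing test set simply collects one such distinguishing pattern for every
gate output, once assuming it is stuck at 0 and once stuck at 1.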

5.14.3 TESTABILITY AS A DESIGN PROCESS:


Unfortunately, not all chip designs are equally testable. Some faults may require
long input sequences to expose; other faults may not be testable at all, even though they
cause chip malfunctions that aren’t covered by the fault model. Traditionally, chip
designers have ignored testability problems, leaving them to a separate test engineer who
must find a set of inputs to adequately test the chip. If the test engineer can’t change the
chip design to fix testability problems, his or her job becomes both difficult and
unpleasant. The result is often poorly tested chips whose manufacturing problems are
found only after the customer has plugged them into a system.
Companies now recognize that the only way to deliver high-quality chips to
customers is to make the chip designer responsible for testing, just as the designer is
responsible for making the chip run at the required speed. Testability problems can often
be fixed easily early in the design process at relatively little cost in area and performance.
But modern designers must understand testability requirements, analysis techniques
which identify hard-to-test sections of the design, and design techniques which improve
testability.
5.14.4 RELIABILITY
RELIABILITY IS A LIFETIME PROBLEM
Earlier generations of VLSI technology were robust enough that testing chips at
manufacturing time was sufficient to identify working parts: a chip either worked or it
didn’t. In today’s nanometer-scale technologies, the problem of determining whether a
chip works is more complex. A number of mechanisms can cause transient failures that
cause occasional problems but are not repeatable. Some other failure mechanisms, like
overheating, cause permanent failures but only after the chip has operated for some
time. And more complex manufacturing problems cause problems that are harder to
diagnose and may affect performance rather than functionality.

5.14.5 DESIGN-FOR-MANUFACTURABILITY


A number of techniques, referred to as design-for-manufacturability or design-for-yield,
are in use today to improve the reliability of chips that come off the manufacturing
line. We can make chips more reliable by designing circuits and architectures that reduce
design stresses and check for problems. For example, heat is one major cause of chip
failure. Proper power management circuitry can reduce the chip’s heat dissipation and
reduce the damage caused by overheating. We also need to change the way we design
chips.
Some of the convenient levels of abstraction that served us well in earlier
technologies are no longer entirely appropriate in nanometer technologies. We need to
check more thoroughly and be willing to solve reliability problems by modifying design
decisions made earlier.

5.14.6 INTEGRATED CIRCUIT DESIGN TECHNIQUES


To make use of the flood of transistors given to us by Moore’s Law, we must
design large, complex chips quickly. The obstacle to making large chips work correctly is
complexity: many interesting ideas for chips have died in the swamp of details that must
be made correct before the chip actually works. Integrated circuit design is hard because
designers must juggle several different problems:
• Multiple levels of abstraction:
IC design requires refining an idea through many levels of detail. Starting from a
specification of what the chip must do, the designer must create an architecture which
performs the required function, expand the architecture into a logic design, and further
expand the logic design into a layout like the one in Figure 1-2. As you will learn by the
end of this book, the specification-to-layout design process is a lot of work.
• Multiple and conflicting costs:
In addition to drawing a design through many levels of detail, the designer must
also take into account costs: not dollar costs, but criteria by which the quality of the design
is judged. One critical cost is the speed at which the chip runs. Two architectures that
execute the same function (multiplication, for example) may run at very different speeds.
We will see that chip area is another critical design cost: The cost of manufacturing a chip
is exponentially related to its area, and chips much larger than 1 cm² cannot be
manufactured at all. Furthermore, if multiple cost criteria such as area and speed
requirements must be satisfied, many design decisions will improve one cost metric at the
expense of the other. Design is dominated by the process of balancing conflicting
constraints.
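To make the area-cost relationship concrete, a commonly quoted first-order yield model
(an industry rule of thumb, not taken from this text) estimates

cost per good die ≈ wafer cost / (dies per wafer × yield), with yield ≈ 1 / (1 + D · A / α)^α

where D is the defect density, A the die area, and α a process-dependent clustering
parameter. Because yield falls off rapidly as A grows, the cost of a good die rises much
faster than linearly with its area.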


• Short design time:


In an ideal world, a designer would have time to contemplate the effect of a design
decision. We do not, however, live in an ideal world. Chips which appear too late may
make little or no money because competitors have snatched market share. Therefore,
designers are under pressure to design chips as quickly as possible. Design time is
especially tight in application-specific IC design, where only a few weeks may be
available to turn a concept into a working ASIC.

5.15 FIELD-PROGRAMMABLE GATE ARRAYS (FPGAs):


A field-programmable gate array (FPGA) is a block of programmable logic that
can implement multi-level logic functions. FPGAs are most commonly used as separate
commodity chips that can be programmed to implement large functions.
However, small blocks of FPGA logic can be useful components on-chip to allow the
user of the chip to customize part of the chip’s logical function. An FPGA block must
implement both combinational logic functions and interconnect to be able to construct
multi-level logic functions. There are several different technologies for programming
FPGAs, but most logic processes are unlikely to implement anti-fuses or similar hard
programming technologies, so we will concentrate on SRAM-programmed FPGAs.

5.15.1 LOOKUP TABLES:


The basic method used to build a combinational logic block (CLB), also called a
logic element, in an SRAM-based FPGA is the lookup table (LUT). As shown in Figure 5.1,
the lookup table is an SRAM that is used to implement a truth table.
Each address in the SRAM represents a combination of inputs to the logic
element. The value stored at that address represents the value of the function for that input
combination; an n-input function requires an SRAM with 2^n locations.

Figure 5.1 Lookup Tables


Because a basic SRAM is not clocked, the lookup table logic element operates
much as any other logic gate: as its inputs change, its output changes after some delay.
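As a minimal behavioral sketch of this idea (illustrative only; the names are hypothetical,
and a real FPGA LUT is loaded through the device’s configuration machinery rather than
through ports), a 4-input LUT is just a 16-bit memory addressed by the logic element’s
inputs:

module lut4 (
    input  wire [3:0]  in,           // the four logic-element inputs
    input  wire [15:0] truth_table,  // one stored bit per input combination
    output wire        out
);
    // The inputs act as the SRAM address; the stored bit is the function value.
    assign out = truth_table[in];
endmodule

Loading truth_table with 16'h8000, for example, realizes a 4-input AND, while 16'h6996
realizes a 4-input XOR.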

5.15.2 PROGRAMMING A LOOKUP TABLE:


Unlike a typical logic gate, the function represented by the logic element can be
changed by changing the values of the bits stored in the SRAM. As a result, an n-input
logic element can represent 2^(2^n) functions (though some of these functions are
permutations of each other).

Figure 5.2 Programming a Lookup Table

A typical logic element has four inputs. The delay through the lookup table is
independent of the bits stored in the SRAM, so the delay through the logic element is the
same for all functions. This means that, for example, a lookup table-based logic element
will exhibit the same delay for a 4-input XOR and a 4-input NAND.
In contrast, a 4-input XOR built with static CMOS logic is considerably slower
than a 4-input NAND. Of course, the static logic gate is generally faster than the logic
element. Logic elements generally contain registers (flip-flops and latches) as well as
combinational logic.
A flip-flop or latch is small compared to the combinational logic element (in sharp
contrast to the situation in custom VLSI), so it makes sense to add it to the combinational
logic element. Using a separate cell for the memory element would simply take up
routing resources. The memory element is connected to the output; whether it stores a
given value is controlled by its clock and enable inputs.


5.15.3 COMPLEX LOGIC ELEMENT:


Many FPGAs also incorporate specialized adder logic in the logic element. The
critical component of an adder is the carry chain, which can be implemented much more
efficiently in specialized logic than it can be using standard lookup table techniques.
5.15.4 PROGRAMMABLE INTERCONNECTION POINTS:
A simple version of an interconnection point, often known as a connection box, is shown in Figure 5.3.

Figure 5.3 A Programmable Interconnection Point (Connection Box)


A programmable connection between two wires is made by a CMOS transistor (a
pass transistor). The pass transistor’s gate is controlled by a static memory program bit
(shown here as a D register). When the pass transistor’s gate is high, the transistor
conducts and connects the two wires; when the gate is low, the transistor is off and the
two wires are not connected.
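A minimal Verilog sketch of this arrangement (purely illustrative; the names are
hypothetical) models the pass transistor with the language’s built-in tranif1 switch
primitive:

module connection_box (
    inout wire wire_a,
    inout wire wire_b,
    input wire program_bit  // configuration bit held in a static memory cell
);
    // When program_bit is 1 the switch conducts and the two wires connect;
    // when it is 0 the two wires are isolated.
    tranif1 pass_switch (wire_a, wire_b, program_bit);
endmodule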

The wiring channels that connect to the logic elements’ inputs and outputs also
need to be programmable. A wiring channel has a number of programmable connections
such that each input or output generally can be connected to any one of several different
wires in the channel.


CHAPTER-6
TOOLS
6.1 Introduction:
The main tools required for this project can be classified into two broad categories.
 Hardware requirement
 Software requirement

6.2 Hardware Requirements:


 FPGA KIT
In the hardware part, a normal computer on which the Xilinx ISE 10.1i software can be
operated is required, i.e., with a minimum system configuration of a Pentium III, 1 GB
RAM, and a 20 GB hard disk.
6.3 Software Requirements:
 MODELSIM 6.4b
 XILINX 10.1
The project requires version 10.1 of the Xilinx ISE software, in which Verilog source code can be used
for design implementation.
6.3.1 Introduction to Modelsim:
 In Modelsim, all designs are compiled into a library. You typically start a new
simulation in Modelsim by creating a working library called "work". "Work" is the
library name used by the compiler as the default destination for compiled design units.
 Compiling Your Design: After creating the working library, you compile your design
units into it. The ModelSim library format is compatible across all supported
platforms. You can simulate your design on any platform without having to recompile
your design.
 Loading the Simulator with Your Design and Running the Simulation: With the design
compiled, you load the simulator with your design by invoking the simulator on a top-
level module (Verilog) or a configuration or entity/architecture pair (VHDL).
Assuming the design loads successfully, the simulation time is set to zero, and you
enter a run command to begin simulation.
 Debugging Your Results: If you don’t get the results you expect, you can use
ModelSim’s robust debugging environment to track down the cause of the problem.


Standards Supported:

ModelSim VHDL supports both the IEEE 1076-1987 and 1076-1993 VHDL standards, the
1164-1993 Standard Multivalue Logic System for VHDL Interoperability, and the
1076.2-1996 Standard VHDL Mathematical Packages standards. Any design developed
with ModelSim will be compatible with any other VHDL system that is compliant with
either IEEE Standard 1076-1987 or 1076-1993. ModelSim Verilog is based on IEEE Std
1364-1995 and a partial implementation of 1364-2001, Standard Hardware Description
Language Based on the Verilog Hardware Description Language. The Open Verilog
International Verilog LRM version 2.0 is also applicable to a large extent. Both PLI
(Programming Language Interface) and VCD (Value Change Dump) are supported for
ModelSim PE and SE users.
6.4 MODELSIM:
Basic Steps for Simulation

This section provides further detail related to each step in the process of
simulating your design using ModelSim.

Step 1 - Collecting Files and Mapping Libraries

Files needed to run ModelSim on your design:

 design files (VHDL, Verilog, and/or SystemC), including stimulus for the design
 libraries, both working and resource
 modelsim.ini (automatically created by the library mapping command)

Providing stimulus to the design

You can provide stimulus to your design in several ways:

 Language based testbench


 Tcl-based ModelSim interactive command, force
 VCD files / commands (see "Using extended VCD as stimulus" (UM-458))
 3rd party test bench generation tools


What is a library in ModelSim?

A library is a location where data to be used for simulation is stored. Libraries are
ModelSim’s way of managing the creation of data before it is needed for use in
simulation. It also serves as a way to streamline simulation invocation. Instead of
compiling all design data each and every time you simulate, ModelSim uses binary pre-
compiled data from these libraries. So, if you make changes to a single Verilog module,
only that module is recompiled, rather than all modules in the design.

Working and resource libraries

Design libraries can be used in two ways: 1) as a local working library that
contains the compiled version of your design; 2) as a resource library. The contents of
your working library will change as you update your design and recompile. A resource
library is typically unchanging, and serves as a parts source for your design. Examples of
resource libraries might be: shared information within your group, vendor libraries,
packages, or previously compiled elements of your own working design.
You can create your own resource libraries, or they may be supplied by another
design team or a third party (e.g., a silicon vendor). For more information on resource
libraries and working libraries, see "Working library versus resource libraries",
"Managing library contents", "Working with design libraries", and "Specifying the
resource libraries".
Creating the Logical Library – vlib

Before you can compile your source files, you must create a library in which to
store the compilation results. You can create the logical library using the GUI, using File
> New > Library (see "Creating a library"), or you can use the vlib command. For
example, the command:
vlib work
creates a library named work. By default, compilation results are stored in the work library.
Mapping the Logical Work to The Physical Work Directory – vmap
VHDL uses logical library names that can be mapped to ModelSim
library directories. If libraries are not mapped properly, and you invoke your
simulation, necessary components will not be loaded and simulation will fail.
Similarly, compilation can also depend on proper library mapping.


By default, ModelSim can find libraries in your current directory
(assuming they have the right name), but for it to find libraries located
elsewhere, you need to map a logical library name to the pathname of the
library. You can use the GUI ("Library mappings with the GUI"), a
command, or a project ("Getting started with projects") to assign a logical
name to a design library.
The format for command line entry is:
vmap <logical_name> <directory_pathname>
This command sets the mapping between a logical library name and a directory.
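As an illustration, a minimal session for a project like this one might look as follows (the
library, file, and module names are hypothetical):

vlib work
vmap work ./work
vlog encoder.v decoder.v tb_top.v
vsim tb_top
run -all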
Step 2 - Compiling the design with vlog/vcom/sccom
Designs are compiled with one of the three language compilers.
Compiling Verilog - vlog
ModelSim’s compiler for the Verilog modules in your design is vlog. Verilog
files may be compiled in any order, as they are not order dependent. See "Compiling
Verilog files" for details. Verilog portions of the design can be optimized for better
simulation performance; see "Optimizing Verilog designs".

Compiling VHDL – vcom

ModelSim’s compiler for VHDL design units is vcom. VHDL files must be
compiled in an order that satisfies the dependencies of the design. Projects may assist you
in determining the compile order: for more information, see “Auto-generating
compile order" (UM-46). See "Compiling VHDL files" (UM-73) for details on
VHDL compilation.
Compiling SystemC – sccom
ModelSim’s compiler for SystemC design units is sccom, and is used only if
you have SystemC components in your design. See "Compiling SystemC files" for
details.
Step 3 - Loading the design for simulation
vsim <top>
Your design is ready for simulation after it has been compiled and (optionally)
optimized with vopt. For more information on optimization, see Optimizing Verilog
designs. You may then invoke vsim with the names of the top-level modules (many
designs contain only one top-level module) or the name you assigned to the optimized


version of the design.


For example, if your top-level modules are "testbench" and "globals", then
invoke the simulator as follows:
vsim testbench globals
After the simulator loads the top-level modules, it iteratively loads the
instantiated modules and UDPs in the design hierarchy, linking the design together by
connecting the ports and resolving hierarchical references.
Using SDF
You can incorporate actual delay values into the simulation by applying SDF
back-annotation files to the design. For more information on how SDF is used in the
design, see "Specifying SDF files for simulation".
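For instance, a maximum-delay back-annotation might be applied at load time as follows
(the instance path and file name are hypothetical):

vsim -sdfmax /testbench/dut=routed_delays.sdf testbench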
Step 4 - Simulating the design
Once the design has been successfully loaded, the simulation time is set
to zero, and you must enter a run command to begin simulation. For more
information, see Verilog simulation, VHDL simulation, and SystemC
simulation.
The basic simulator commands are:
add wave
force
bp
run
step
next
Step 5- Debugging the Design
Numerous tools and windows useful in debugging your design are available
from the ModelSim GUI. For more information, see Waveform analysis (UM-237),
PSL Assertions and Tracing signals with the Dataflow window. In addition, several
basic simulation commands are available from the command line to assist you in
debugging your design:
describe
drivers
examine
force
log
checkpoint
restore
show


6.5 MODELSIM BASICS


On the left side of the interface, under the project tab is the frame listing of the
files that pertain to the opened project. The Library frame lists the entities of the
project (that have been complied). To the right is the ModelSim shell frame. It is an
extension of MS-DOS, so both ModelSim and MS-DOS commands can be executed.
A. Creating a New Project
Once ModelSim has been started, create a new project:
File > New > Project…
Name the project FA, set the project location to F:/VHDL, and click OK.
A new window should appear to add new files to the project. Choose Create New File.

Enter F:\VHDL\FA.vhd as the file name and click OK.


Then close the “add new files” window.
Additional files can be added later by choosing from the menu: Project > Add File to
Project


B. Editing Source Files


Double click the FA.vhd source found under the Workspace window’s project
tab. This will open up an empty text editor configured to highlight VHDL syntax.
Copy the source found at the end of this document. After writing the code for the
entity and its architecture, save and close the source file.
C. Compiling projects
Select the file from the project files list frame and right-click on it. Select
Compile to compile just this file, or Compile All for all the files in the current project. If
there are errors within the code or the project, a red failure message will be displayed.
Double-click these red errors for more detailed messages. Otherwise, if all is well, no
red warning or error messages will be displayed.

D. Simulating with ModelSim

To simulate, first the entity design has to be loaded into the simulator. Do this
by selecting from the menu:
Simulate > Simulate
A new window will appear listing all the entities (not filenames) that are in the
work library. Select the FA entity for simulation and click OK.

Oftentimes it will be necessary to create entities with multiple architectures.
In this case the architecture has to be specified for the simulation. Expand the tree for
the entity, select the architecture to be simulated, and then click OK.
After the design is loaded, clear any previous data and reset the simulation time by
typing restart -f at the prompt. Then open the signals window by selecting from the menu:
View > Signals
A new window will be displayed listing the design entity’s signals and their
initial value (shown below). Items in waveform and listing are ordered in the same


order in which they are declared in the code. To display the waveform, select the
signals to display (hold CTRL and click to select multiple signals)
and from the signal list window menu select:
Add > Wave > Selected signals
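The equivalent shell command, using the FA entity of this example and a wildcard for all
of its signals, would be along the lines of:

add wave /fa/*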

Basic Simulation Flow:


The following diagram shows the basic steps for simulating a design in ModelSim.

Create a working library

Compile design files

Load and Run

Debug results

Figure 6.1 Basic simulation flow

Project design flow:


As you can see, the flow is similar to the basic simulation flow. However, there are
two important differences:
 You do not have to create a working library in the project flow; it is done for
you automatically.
 Projects are persistent. In other words, they will open every time you invoke
Modelsim unless you specifically close them.
The following diagram shows the basic steps for simulating a design within a
Modelsim project.

Figure 6.2 Project Design Flow

6.6 Introduction to XILINX ISE:


This tool can be used to create, implement, simulate, and synthesize Verilog
designs for implementation on FPGA chips.
ISE: Integrated Software Environment
 Environment for the development and test of digital systems design targeted to
FPGA or CPLD
 Integrated collection of tools accessible through a GUI
 Based on a logical synthesis engine (XST: Xilinx Synthesis Technology)
 XST supports different languages:
 Verilog
 VHDL
 XST produces a netlist integrated with constraints
 Supports all the steps required to complete the design:
 Translate, map, place and route
 Bit stream generation


In this case, it is possible to use Verilog to write a test bench to verify the
functionality of the design using files on the host computer to define stimuli, to
interact with the user, and to compare results with those expected.

A Verilog model is translated into the "gates and wires" that are mapped onto a
programmable logic device such as a CPLD or FPGA, and then it is the actual
hardware being configured, rather than the Verilog code being "executed" as if on
some form of a processor chip.
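As a hedged illustration of such a test bench (the module name, port widths, and stimulus
values below are hypothetical and not taken from the project source), a minimal Verilog
sketch might look like:

module tb_encoder;
    reg  [15:0] data_in;    // 16 data bits, as in the proposed code
    wire [22:0] codeword;   // codeword width assumed purely for illustration

    // Device under test: a hypothetical encoder module.
    encoder dut (.data_in(data_in), .codeword(codeword));

    initial begin
        data_in = 16'h0000; #10;
        $display("data=%h codeword=%h", data_in, codeword);
        data_in = 16'hA5A5; #10;
        $display("data=%h codeword=%h", data_in, codeword);
        $finish;
    end
endmodule

A more complete bench would compare codeword against expected values computed from
the H matrix and report mismatches automatically.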

6.6.1 Implementation:
 Synthesis (XST)
– Produce a netlist file starting from an HDL description
 Translate (NGDBuild)
– Converts all input design netlists and then writes the results into a single
merged file that describes logic and constraints.
 Mapping (MAP)
– Maps the logic on device components.
– Takes a netlist and groups the logical elements into CLBs and IOBs
(components of FPGA).
 Place and Route (PAR)
– Places FPGA cells and connects them.
 Bit stream generation
XILINX Design Process
Step 1: Design entry
– HDL (Verilog or VHDL, ABEL x CPLD), Schematic Drawings, Bubble
Diagram
Step 2: Synthesis
– Translates .v, .vhd, and .sch files into a netlist file (.ngc)
Step 3: Implementation
– FPGA: Translate/Map/Place & Route, CPLD: Fitter
Step 4: Configuration/Programming
– Download a BIT file into the FPGA
– Program JEDEC file into CPLD


– Program MCS file into Flash PROM


Simulation can occur after steps 1, 2, and 3.
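For orientation, the same steps can also be driven from the command line. The sketch
below uses the real ISE executables, but the file names and option values are illustrative
only:

xst -ifn design.scr                                (synthesis: produces design.ngc)
ngdbuild -uc design.ucf design.ngc design.ngd      (translate)
map -o design_map.ncd design.ngd design.pcf        (map)
par -w design_map.ncd design.ncd design.pcf        (place and route)
bitgen -w design.ncd design.bit                    (bit stream generation)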
6.7 Introduction to FPGA:
FPGA stands for Field Programmable Gate Array; an FPGA has an array of logic
modules, I/O modules, and routing tracks (programmable interconnect). An FPGA can be
configured by the end user to implement specific circuitry. Speeds were once limited to
about 100 MHz, but at present they reach the GHz range.
Main applications are DSP, FPGA-based computers, logic emulation, ASICs,
and ASSPs. FPGAs are mainly programmed using SRAM (Static Random Access
Memory), which is volatile; the main advantage of SRAM programming technology
is re-configurability. Issues in FPGA technology are the complexity of the logic element,
clock support, I/O support, and interconnections (routing).
FPGA Design Flow
An FPGA contains a two-dimensional array of logic blocks and interconnections
between the logic blocks. Both the logic blocks and the interconnects are programmable.
Logic blocks are programmed to implement a desired function and the interconnects
are programmed using the switch boxes to connect the logic blocks.
To be clearer, if we want to implement a complex design (CPU for instance),
then the design is divided into small sub functions and each sub function is
implemented using one logic block. Now, to get our desired design (CPU), all the sub
functions implemented in logic blocks must be connected and this is done by
programming the interconnects.
FPGAs, an alternative to custom ICs, can be used to implement an entire
System on Chip (SoC). The main advantage of an FPGA is the ability to reprogram:
the user can reprogram an FPGA to implement a design after the FPGA
is manufactured. This brings the name “Field Programmable.”

Internal structure of an FPGA is depicted in the following figure.


Figure 6.3 Internal structure of FPGA

Custom ICs are expensive and take a long time to design, so they are useful when
produced in bulk amounts. But FPGAs are easy to implement within a short time with
the help of Computer Aided Design (CAD) tools (because there is no physical
layout process, no mask making, and no IC manufacturing).
Some disadvantages of FPGAs are that they are slow compared to custom ICs,
they can’t handle very complex designs, and they draw more power. The Xilinx logic
block consists of one Look Up Table (LUT) and one Flip Flop.
An LUT is used to implement a number of different functionalities. The input lines to
the logic block go into the LUT and enable it. The output of the LUT gives the result
of the logic function that it implements, and the output of the logic block is the registered
or unregistered output from the LUT. SRAM is used to implement a LUT: a k-input logic
function is implemented using a 2^k × 1 SRAM, and the number of different possible
functions for a k-input LUT is 2^(2^k) (for k = 4, for example, a 16-bit SRAM can
realize 2^16 = 65,536 functions).
Advantage of such an architecture is that it supports implementation of very many
logic functions; the disadvantage is the unusually large number of memory cells
required to implement such a logic block when the number of inputs is large.


Figure 6.4 4-input LUT based implementation of logic block

In technology mapping, the optimized Boolean expressions are transformed into FPGA
logic blocks (referred to as slices); here, area and delay optimization take place. During
placement, algorithms are used to place each block in the FPGA array. Routing assigns
the programmable FPGA wire segments to establish connections among the FPGA blocks.
The configuration of the final chip is made in the programming unit. LUT-based design
provides for better logic block utilization: a k-input LUT-based logic block can be
implemented in a number of different ways, with tradeoffs between performance and
logic density. An n-LUT can be seen as a direct implementation of a function truth
table; each of its latches holds the value of the function corresponding to one input
combination.
For example, a 2-LUT can be used to implement any of the 16 two-input functions,
such as AND, OR, A + NOT B, etc.:

A    B    AND    OR
0    0     0      0
0    1     0      1
1    0     0      1
1    1     1      1

6.8 Interconnects:
A wire segment can be described as two end points of an interconnect
with no programmable switch between them. A sequence of one or more wire
segments in an FPGA can be termed as a track.

Typically, an FPGA has logic blocks, interconnects and switch blocks (Input/output
blocks). Switch blocks lie in the periphery of logic blocks and interconnect. Wire
segments are connected to logic blocks through switch blocks. Depending on the
required design, one logic block is connected to another and so on.

6.9 FPGA DESIGN FLOW:


In this part of the tutorial, we are going to have a short introduction to the FPGA design
flow. A simplified version of the design flow is given in the following diagram.

Figure 6.5 FPGA Design Flow

6.10 Design Entry:


There are different techniques for design entry: schematic based, Hardware
Description Language (HDL) based, a combination of both, etc. Selection of a method
depends on the design and the designer. If the designer wants to deal more with hardware,
then schematic entry is the better choice. When the design is complex, or the designer
thinks of the design in an algorithmic way, then HDL is the better choice. Language-based
entry is faster but lags in performance and density.


HDLs represent a level of abstraction that can isolate the designers from the
details of the hardware implementation. Schematic-based entry gives designers much
more visibility into the hardware; it is the better choice for those who are hardware
oriented. Another method, though rarely used, is state-machine entry.
It is the better choice for designers who think of the design as a series of
states. But the tools for state-machine entry are limited. In this documentation we are
going to deal with the HDL-based design entry.
6.11 Synthesis:
Synthesis is the process that translates VHDL or Verilog code into a device netlist
format, i.e., a complete circuit with logical elements (gates, flip-flops, etc.) for the
design. If the design contains more than one sub-design (for example, to implement a
processor we need a CPU as one design element and a RAM as another), then
the synthesis process generates a netlist for each design element. The synthesis process
checks code syntax and analyzes the hierarchy of the design, which ensures that the
design is optimized for the design architecture the designer has selected. The
resulting netlist(s) is saved to an NGC (Native Generic Circuit) file (for Xilinx®
Synthesis Technology (XST)).

Figure 6.6 FPGA Synthesis


6.12 Implementation: In this work, the design is described in Verilog HDL and
synthesized onto the Spartan 3E FPGA family through the Xilinx ISE
tool. This process includes the following:
 Translate
 Map
 Place and Route
6.12.1 Translate:


The translate process combines all the input netlists and constraints into a logic design file.
This information is saved as an NGD (Native Generic Database) file; this is done
using the NGDBuild program. Here, defining constraints means assigning the
ports in the design to the physical elements (e.g., pins, switches, buttons) of the
targeted device and specifying the timing requirements of the design. This information is
stored in a file named UCF (User Constraints File). Tools used to create or modify the
UCF are PACE, the Constraint Editor, etc.
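As an illustration of UCF syntax (the net names and pin locations below are hypothetical,
not taken from this project):

NET "clk"        LOC = "C9" ;
NET "clk"        PERIOD = 20 ns HIGH 50% ;
NET "data_in<0>" LOC = "L13" ;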

Figure 6.7 FPGA Translate

6.12.2 Map:
The map process divides the whole circuit with logical elements into sub-blocks such
that they can be fit into the FPGA logic blocks. That means the map process fits the logic
defined by the NGD file into the targeted FPGA elements (Configurable Logic
Blocks (CLBs) and Input Output Blocks (IOBs)) and generates an NCD (Native Circuit
Description) file, which physically represents the design mapped to the components of
the FPGA. The MAP program is used for this purpose.

Figure 6.8 FPGA map


6.12.3 Place and Route:


The PAR program is used for this process. The place and route process places the
sub-blocks from the map process into logic blocks according to the constraints and
connects the logic blocks. For example, if a sub-block is placed in a logic block that is
very near an I/O pin, it may save timing on that path, but it may hurt some other
constraint; the tradeoff between all the constraints is taken into account by the place and
route process.
The PAR tool takes the mapped NCD file as input and produces a completely
routed NCD file as output. The output NCD file contains the routing information.

Figure 6.9 FPGA Place and route


6.13 Device Programming:
Now the design must be loaded onto the FPGA. But first the design must be
converted to a format that the FPGA can accept; the BITGEN program deals with this
conversion. The routed NCD file is given to the BITGEN program to generate a
bit stream (a .BIT file), which can be used to configure the target FPGA device. This
is done using a cable; the selection of the cable depends on the design.
6.13.1 Design Verification:
Verification can be done at different stages of the design flow.
6.13.2 Behavioral Simulation (RTL Simulation):
This is the first of the simulation steps encountered throughout the
hierarchy of the design flow. This simulation is performed before the synthesis process to
verify the RTL (behavioral) code and to confirm that the design is functioning as
intended.
Behavioral simulation can be performed on either VHDL or Verilog designs. In this
process, signals and variables are observed, procedures and functions are traced and
breakpoints are set.
This is a very fast simulation and so allows the designer to change the HDL
code within a short time if the required functionality is not met. Since the


design is not yet synthesized to gate level, timing and resource usage properties are
still unknown.
6.13.3 Functional simulation (Post Translate Simulation):
Functional simulation gives information about the logic operation of the
circuit. The designer can verify the functionality of the design using this process after the
translate process. If the functionality is not as expected, then the designer has to
make changes in the code and again follow the design flow steps.

Static Timing Analysis:


This can be done after the MAP or PAR processes. The post-MAP timing report lists
signal path delays of the design derived from the design logic. The post-place-and-route
timing report incorporates timing delay information to provide a comprehensive
timing summary of the design.


CHAPTER-7
RESULTS
7.1 SIMULATION RESULT:

SYNTHESIS RESULTS:
The developed design was simulated and its functionality verified. Once the
functional verification is done, the RTL model is taken to the synthesis process using
the Xilinx ISE tool. In the synthesis process, the RTL model is converted to a gate-level
netlist mapped to a specific technology library. Within the Spartan 3E family,
many different devices are available in the Xilinx ISE tool. In order to synthesize this
design, the device named “XC3S500E” has been chosen, with the package
“FG320” and the speed grade “-4”.
This design is synthesized and its results were analyzed as follows.
ENCODER RTL SCHEMATIC:


DECODER RTL SCHEMATIC:

NOISE MIXER RTL:


TECHNOLOGY SCHEMATIC:

DESIGN SUMMARY:


TIMING REPORT:

CHAPTER-8

CONCLUSION

A technique to extend 3-bit BEC codes with QAEC is introduced in this article.
The proposed codes have the same redundancy as the previous 3-bit BEC
codes [23]. A new search algorithm with column-checking and capture functions is
suggested to speed up the search process for the goal matrices, and a search framework is
built to carry out the search process automatically based on the proposed algorithm. The
validity of the suggested algorithm is shown on the existing 3-bit BEC codes [23]: guided
by the two optimization parameters, those codes are improved considerably. The algorithm
is then used to find solutions for QAEC. For 16 data bits the entire search process is
completed and the best solutions are provided in this article; for 32 and 64 data bits, the
best solutions found within an acceptable measurement time are given. The encoder and
decoder for the proposed codes were implemented in HDL and synthesized with
a 65-nm library. The area and delay overheads relative to the previous 3-bit BEC codes [23]
were low. This indicates that designers can readily use the proposed 3-bit BEC with
QAEC codes to shield SRAM memories from radiation and to correct MBUs with up
to four adjacent bits. Finally, as previously noted, the suggested scheme could be
broadened to other 3-bit ECCs to extend their correction ability in a similar way.


REFERENCES

[1] R. D. Schrimpf and D. M. Fleetwood, Radiation Effects and Soft Errors in Integrated Circuits and Electronic Devices. Singapore: World Scientific, 2004.

[2] S. K. Vishvakarma, B. S. Reniwal, V. Sharma, C. B. Khuswah, and D. Dwivedi, “Nanoscale memory design for efficient computation: Trends, challenges and opportunity,” in Proc. IEEE Int. Symp. Nanoelectron. Inf. Syst., Dec. 2015, pp. 29–34.

[3] S.-Y. Wu, C. Y. Lin, S. H. Yang, J. J. Liaw, and J. Y. Cheng, “Advancing foundry
technology with scaling and innovations,” in Proc. Int. Symp. VLSI-TSA, Apr. 2014,
pp. 1–3.

[4] ITRS. International Technology Roadmap for Semiconductors (ITRS) Report, Semiconductor Industry Association. Accessed: 2009. [Online]. Available: http://www.itrs.net

[5] Y. Bentoutou, “A real time EDAC system for applications onboard earth
observation small satellites,” IEEE Trans. Aerosp. Electron. Syst., vol. 48, no. 1, pp.
648–657, Jan. 2012.

[6] E. Ibe, H. Taniguchi, Y. Yahagi, K.-I. Shimbo, and T. Toba, “Impact of scaling on
neutron-induced soft error in SRAMs from a 250 nm to a 22 nm design rule,” IEEE
Trans. Electron Devices, vol. 57, no. 7, pp. 1527–1538, Jul. 2010.


[7] L. Artola, M. Gaillardin, G. Hubert, M. Raine, and P. Paillet, “Modeling single event transients in advanced devices and ICs,” IEEE Trans. Nucl. Sci., vol. 62, no. 4, pp. 1528–1539, Aug. 2015.

[8] M. Murat, A. Akkerman, and J. Barak, “Electron and ion tracks in silicon: Spatial and temporal evolution,” IEEE Trans. Nucl. Sci., vol. 55, no. 6, pp. 3046–3054, Dec. 2008.

[9] R. C. Baumann, “Soft errors in advanced computer systems,” IEEE Des. Test Comput., vol. 22, no. 3, pp. 258–266, May/Jun. 2005.

[10] M. Y. Hsiao, “A class of optimal minimum odd-weight-column SEC-DED codes,” IBM J. Res. Develop., vol. 14, no. 4, pp. 395–401, Jul. 1970.

[11] J. D. Black, P. E. Dodd, and K. M. Warren, “Physics of multiple-node charge collection and impacts on single-event characterization and soft error rate prediction,” IEEE Trans. Nucl. Sci., vol. 60, no. 3, pp. 1836–1851, Jun. 2013.

[12] O. A. Amusan et al., “Charge collection and charge sharing in a 130 nm CMOS
technology,” IEEE Trans. Nucl. Sci., vol. 53, no. 6, pp. 3253–3258, Dec. 2006.

[13] J. D. Black et al., “Characterizing SRAM single event upset in terms of single
and multiple node charge collection,” IEEE Trans. Nucl. Sci., vol. 55, no. 6, pp.
2943–2947, Dec. 2008.

[14] A. D. Tipton et al., “Multiple-bit upset in 130 nm CMOS technology,” IEEE Trans. Nucl. Sci., vol. 53, no. 6, pp. 3259–3264, Dec. 2006.

[15] W. Calienes, R. Reis, C. Anghel, and A. Vladimirescu, “Bulk and FDSOI SRAM resiliency to radiation effects,” in Proc. IEEE 57th Int. Midwest Symp. Circuits Syst., Aug. 2014, pp. 655–658.

[16] D. F. Heidel et al., “Single-event upsets and multiple-bit upsets on a 45 nm SOI SRAM,” IEEE Trans. Nucl. Sci., vol. 56, no. 6, pp. 3499–3504, Dec. 2009.


[17] E. H. Cannon, D. D. Reinhardt, M. S. Gordon, and P. S. Makowensky, “SRAM SER in 90, 130 and 180 nm bulk and SOI technologies,” in Proc. 42nd Annu. IEEE Int. Reliab. Phys. Symp., Apr. 2004, pp. 300–304.

[18] M. Raine, G. Hubert, M. Gaillardin, P. Paillet, and A. Bournel, “Monte Carlo prediction of heavy ion induced MBU sensitivity for SOI SRAMs using radial ionization profile,” IEEE Trans. Nucl. Sci., vol. 58, no. 6, pp. 2607–2613, Dec. 2011.

[19] A. Dutta and N. A. Touba, “Multiple bit upset tolerant memory using a selective
cycle avoidance based SEC-DED-DAEC code,” in Proc. IEEE VLSI Test Symp.,
May 2007, pp. 349–354.

[20] A. Neale and M. Sachdev, “A new SEC-DED error correction code subclass for
adjacent MBU tolerance in embedded memory,” IEEE Trans. Device Mater. Rel., vol.
13, no. 1, pp. 223–230, Mar. 2013.

[21] Z. Ming, X. L. Yi, and L. H. Wei, “New SEC-DED-DAEC codes for multiple bit
upsets mitigation in memory,” in Proc. IEEE/IFIP 20th Int. Conf. VLSI Syst.-Chip,
Oct. 2011, pp. 254–259.

[22] P. Reviriego, S. Pontarelli, A. Evans, and J. A. Maestro, “A class of SEC-DED-DAEC codes derived from orthogonal Latin square codes,” IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 23, no. 5, pp. 968–972, May 2015.

[23] L. J. Saiz-Adalid, P. Reviriego, P. Gil, S. Pontarelli, and J. A. Maestro, “MCU tolerance in SRAMs through low-redundancy triple adjacent error correction,” IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 23, no. 10, pp. 2332–2336, Oct. 2015.

[24] L. Xiao, J. Li, J. Li, and J. Guo, “Hardened design based on advanced orthogonal
Latin code against two adjacent multiple bit upsets (MBUs) in memories,” in Proc.
16th Int. Symp. Quality Electron. Design, Mar. 2015, pp. 485–489.

[25] R. Naseer and J. Draper, “Parallel double error correcting code design to mitigate multi-bit upsets in SRAMs,” in Proc. IEEE 34th Eur. Solid-State Circuits Conf., Sep. 2008, pp. 222–225.


[26] M. A. Bajura et al., “Models and algorithmic limits for an ECC-based approach
to hardening sub-100-nm SRAMs,” IEEE Trans. Nucl. Sci., vol. 54, no. 4, pp. 935–
945, Aug. 2007.

[27] M. Y. Hsiao, D. C. Bossen, and R. T. Chien, “Orthogonal Latin square codes,” IBM J. Res. Develop., vol. 14, no. 4, pp. 390–394, Jul. 1970.

[28] S. Satoh, Y. Tosaka, and S. A. Wender, “Geometric effect of multiple-bit soft errors induced by cosmic ray neutrons on DRAM’s,” IEEE Electron Device Lett., vol. 21, no. 6, pp. 310–312, Jun. 2000.

[29] A. I. Chumakov, A. V. Sogoyan, A. B. Boruzdina, A. A. Smolin, and A. A. Pechenkin, “Multiple cell upset mechanisms in SRAMs,” in Proc. 15th Eur. Conf. Radiation Effects Compon. Syst., Sep. 2015, pp. 1–5.

[30] S. Shamshiri and K.-T. Cheng, “Error-locality-aware linear coding to correct multi-bit upsets in SRAMs,” in Proc. IEEE Int. Test Conf., Nov. 2010, pp. 1–10.

[31] G. Tsiligiannis et al., “Multiple cell upset classification in commercial SRAMs,” IEEE Trans. Nucl. Sci., vol. 61, no. 4, pp. 1747–1754, Aug. 2014.

[32] J.-L. Autran, D. Munteanu, G. Gasiot, P. Roche, S. Serre, and S. Semikh, Soft-
Error Rate of Advanced SRAM Memories: Modeling and Monte Carlo Simulation.
Rijeka, Croatia: INTECH, 2012.
