Rom Based Quadruple Error Correction for 4-bit Adjacent Errors
CHAPTER 1
INTRODUCTION
Memories account for a large portion of the circuit area [4]. This makes memories suffer more
from space radiation than other components. Therefore, the sensitivity of memories to radiation
has become a critical issue for ensuring the reliability of electronic systems. In modern static
random access memories (SRAMs), radiation-induced soft errors in the form of the single
event upset (SEU) and multiple bit upset (MBU) are two prominent single event effects
[5]. As semiconductor technology advances from submicrometer to ultradeep submicrometer
(UDSM) technology, memory cells become smaller and more cells fall within the radius
affected by a particle [6], [7], as shown in Fig. 1.
When a particle from a cosmic ray hits the basic memory cell, it generates a radial
distribution of electron–hole pairs along the transport track [8]. These generated electron–
hole pairs can cause soft errors by changing the values stored in the memory cell leading
to data corruption and system failure [9]. For transistors with a large feature size, a
radiation event affects only one memory cell, which means that only the SEU occurs. In
this case, the use of single error-correction (SEC) double error-detection (DED) codes [10]
is enough to protect the memory from radiation effects.
As the feature size enters the DSM range, the critical charge keeps decreasing and
the area of the memory cell scales down with each successive technology node. This makes
more memory cells affected by a particle hit, as shown in Fig. 2. For CMOS bulk
technology, as the cell-to-cell spacing decreases, the electron–hole pairs generated in
the substrate can diffuse to nearby cells and induce MBUs [11]–[14]. This contrasts with
the FDSOI technology, which isolates transistors and limits the multicollection
mechanism. Therefore, the multicollection mechanism is more prominent for a bulk
technology, and the MBU probability is higher [15]–[18]. To protect against MBUs,
ECCs that correct adjacent bit errors [19]–[24] or multiple bit errors [25]–[27] are
utilized. Although multiple bit error-correction codes (ECCs) can correct multiple bit
errors in arbitrary patterns, not only in adjacent bits, the complexity of the decoding
process and the limitation on the code block size restrict their use. Meanwhile, from the
generation principle of MBUs in [28], the type of MBU depends on the initial angle
of incidence and the scattering angle of the secondary particles. Based on this, adjacent bit
errors are the dominant patterns among multiple bit errors. Therefore, ECCs that correct
adjacent bits have become popular in memory-hardened designs. Many codes have been
proposed, and the capability of adjacent bit correction mainly covers double
adjacent error correction (DAEC) [19]–[22], triple adjacent error correction (TAEC), and
3-bit burst error correction (BEC) [23]. An alternative to codes that can correct adjacent
errors is to use an SEC or SEC-DED code combined with an interleaving of the memory
cells. Interleaving ensures that cells that belong to the same logical word are placed
physically apart. This means that an error on multiple adjacent cells affects multiple
words each having only one bit error that can be corrected by an SEC code. As noted in
previous studies, interleaving makes the interconnections and routing of the memory
more complex, which leads to increased area and power consumption or to limitations
in the aspect ratio [19], [20]. Therefore, whether it is better to use SEC plus interleaving
or a code that can correct adjacent errors will be design-dependent and both alternatives
are of interest. As technology reaches the UDSM range, the area of the memory cell keeps
decreasing, and even memories with atomic-dimension transistors appear. The
ionization range of ions, on the order of micrometers, can include
more memory cells in the word-line direction, as shown in Fig. 2, than the three bits
previously considered [29]. This means that SEC-DAEC-TAEC codes may no longer be
effective in ensuring memory reliability, and codes with more advanced correction ability
are in demand. For example, codes designed with low redundancy for SEC-DAEC-TAEC
and 3-bit BEC are presented in [23]. Therefore, extending the correction ability to
quadruple adjacent bit errors would be of interest, especially if it can be done without
adding extra parity bits. In this paper, we present an improvement of 3-bit BEC codes to
also provide quadruple adjacent error correction (QAEC). The code design technique for
QAEC with low redundancy is specified from two aspects: 1) error space satisfiability and
2) unique syndrome satisfiability. Codes with QAEC for 16, 32, and 64 data bits are
presented. From the viewpoint of integrated circuit design, two criteria have been used to
optimize the decoder complexity and decoder delay at the ECC level: 1) minimizing the
total number of ones in the parity check matrix and 2) minimizing the number of ones in
the heaviest row of the parity check matrix. Additionally, based on the traditional
recursive backtracking algorithm, an algorithm with weight restriction and
recording of the past procedure is developed. The new algorithm not only reduces
program run time, but also remarkably improves on the performance of previous 3-bit BEC
codes. The encoders and decoders for the QAEC codes are implemented in the Verilog
hardware description language (HDL). Area overhead and delay overhead are obtained
using a TSMC bulk 65-nm library. Compared with the previous 3-bit BEC codes, the area
and delay overhead required to achieve the extended correction ability is moderate.
Single bit errors are the least likely type of error in serial data transmission. To see
why, imagine a sender transmitting data at 10 Mbps. Each bit then lasts only 0.1
μs (microseconds). For a single bit error to occur, the noise must last only 0.1 μs,
which is very rare. However, a single-bit error can happen in parallel data
transmission. For example, if 16 wires are used to send all 16 bits
of a word at the same time and one of the wires is noisy, one bit is corrupted in each
word.
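The bit-duration arithmetic above is easy to verify; a minimal sketch (the 10 Mbps figure is the one used in the text, the function name is ours):

```python
def bit_duration_us(rate_bps: float) -> float:
    """Duration of a single bit, in microseconds, at the given data rate."""
    return 1e6 / rate_bps

# At 10 Mbps each bit lasts only 0.1 us, so a noise spike must be shorter
# than 0.1 us to corrupt exactly one bit -- which is why single bit errors
# are rare in serial transmission.
assert bit_duration_us(10e6) == 0.1
```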
The term burst error means that two or more bits in the data unit have changed from
0 to 1 or vice versa. Note that a burst error does not necessarily mean that the errors
occur in consecutive bits. The length of the burst is measured from the first corrupted
bit to the last corrupted bit; some bits in between may not be corrupted.
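The burst-length definition can be expressed directly in code; a small sketch (the bit strings are illustrative):

```python
def burst_length(sent: str, received: str) -> int:
    """Burst length: distance from the first corrupted bit to the last,
    inclusive; bits in between need not be corrupted."""
    flipped = [i for i, (a, b) in enumerate(zip(sent, received)) if a != b]
    return flipped[-1] - flipped[0] + 1 if flipped else 0

# Bits 1 and 3 are corrupted while bit 2 is intact: a burst of length 3.
assert burst_length("01000100", "00010100") == 3
```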
Burst errors are most likely to happen in serial transmission. The duration of the
noise is normally longer than the duration of a single bit, which means that when noise
affects data, it affects a set of bits, as shown in Fig. The number of bits affected depends
on the data rate and the duration of the noise.
1.1 ERROR DETECTING CODES:
The basic approach used for error detection is the use of redundancy, where
additional bits are added to facilitate detection and correction of errors. Popular
techniques include the simple parity check, checksum, and cyclic redundancy check (CRC).
1.2 SIMPLE PARITY CHECK:
The most common and least expensive mechanism for error detection is the
simple parity check. In this technique, a redundant bit, called the parity bit, is appended to
every data unit so that the number of 1s in the unit (including the parity bit) becomes even.
Blocks of data from the source are passed through a parity bit generator,
which appends a parity bit of 1 if the block contains an odd number of 1s (ON bits) and
0 if it contains an even number of 1s. At the receiving end, the parity bit is
computed from the received data bits and compared with the received parity bit, as shown
in Fig. This scheme makes the total number of 1s even, which is why it is called even
parity checking.
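A minimal sketch of even-parity generation and checking (the function names are ours, not from the text):

```python
def even_parity_bit(data_bits):
    """Parity bit chosen so the unit (data + parity) has an even number of 1s."""
    return sum(data_bits) % 2

def check_even_parity(unit):
    """Receiver side: recompute parity over the whole unit; even means OK."""
    return sum(unit) % 2 == 0

word = [1, 0, 1, 1]                      # three 1s, so the parity bit is 1
unit = word + [even_parity_bit(word)]
assert check_even_parity(unit)           # clean transmission passes the check
unit[2] ^= 1                             # a single flipped bit ...
assert not check_even_parity(unit)       # ... is detected
```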
1.3 CHECKSUM:
In the checksum error detection scheme, the data is divided into k segments of m
bits each. At the sender's end, the segments are added using 1's complement arithmetic
to get the sum, and the sum is complemented to get the checksum.
1.4 CYCLIC REDUNDANCY CHECK (CRC):
The cyclic redundancy check (CRC) is the most powerful and easiest-to-implement
technique. Unlike the checksum scheme, which is based on addition, CRC is based on binary
division. In CRC, a sequence of redundant bits, called cyclic redundancy check bits, is
appended to the end of the data unit so that the resulting data unit becomes exactly divisible
by a second, predetermined binary number. At the destination, the incoming data unit is
divided by the same number. If there is no remainder at this step, the data unit is assumed
to be correct and is therefore accepted.
This set of 2^k code words is called a block code. For a block code to be useful,
there should be a one-to-one correspondence between a message u and its code word v. A
desirable structure for a block code to possess is linearity; with this structure, the
encoding complexity is greatly reduced.
CHAPTER 2
LITERATURE SURVEY
2.1 Nanoscale memory design for efficient computation: trends, challenges and
opportunity
Abstract:
2.2 A Real Time EDAC System for Applications Onboard Earth Observation Small
Satellites
Abstract:
Onboard error detection and correction (EDAC) devices aim to secure data
transmitted between the central processing unit (CPU) of a satellite onboard computer
(OBC) and its local memory. A follow-up is presented here of some low-complexity
EDAC techniques for application in random access memories (RAMs) onboard the
Algerian microsatellite Alsat-1. The application of a double-bit EDAC method is
described and implemented in field programmable gate array (FPGA) technology. The
performance of the implemented EDAC method is measured and compared with three
different EDAC strategies, using the same FPGA technology. A statistical analysis of
single-event upset (SEU) and multiple-bit upset (MBU) activity in commercial memories
onboard Alsat-1 is given. A new architecture of an onboard EDAC device for future Earth
observation small satellite missions in low Earth orbits (LEO) is described.
Authors: Y. Bentoutou
Abstract:
The ability for Single Event Transients (SETs) to induce soft errors in Integrated
Circuits (ICs) was predicted for the first time by Wallmark and Marcus in the early 60's
and was confirmed to be a serious issue thirty years later. In the 90's microelectronic
technologies reached the “deep submicron” era, allowing high density ICs working at
frequencies faster than hundreds of MHz. This new paradigm changed the status of SETs
to become a major source of reliability losses. Huge efforts have thus been made to
characterize SETs in microelectronics, either using experiments or by simulation, in order
to reveal key factors leading to SET occurrence, propagation and capture in modern ICs.
In this context, modeling and simulation are of primary importance to get accurate SET
predictions. This paper focuses on modeling SETs in innovative electronic devices which
involves modeling steps at different scales, from ionizing particle to circuit response.
After a brief review of the state-of-the art of modeling at each scale, this paper will
discuss current capabilities and intrinsic limitations of SET modeling, the incoming
challenges in advanced devices and ICs, and finally the methodologies to improve SET
simulation and prediction for future technologies.
2.5 Electron and Ion Tracks in Silicon: Spatial and Temporal Evolution
Abstract:
Calculations based on our Monte Carlo code for the transport of electrons and ions
in silicon are presented. The code follows the trajectories of all secondary electrons down
to a very low cut-off energy (1.5 eV). The spatial and temporal distributions of the
deposited energy around ion tracks in silicon are also calculated. The prompt electrical
fields generated by the charges at the early stage of the track evolution are evaluated,
taking into account carrier recombination. Using the statistics of the energy deposition
events in small sensitive volumes, we find that the SEE cross sections in modern devices
depend on the ion energy, in addition to its LET.
Abstract:
2.7 Characterizing SRAM Single Event Upset in Terms of Single and Multiple Node
Charge Collection
Abstract:
Authors:J. D. Black
Abstract:
Authors:A. D. Tipton
CHAPTER 3
ERROR DETECTING CODES
The basic approach used for error detection is the use of redundancy, where
additional bits are added to facilitate detection and correction of errors. Popular
techniques include the simple parity check, checksum, and cyclic redundancy check (CRC).
3.1 SIMPLE PARITY CHECK:
The most common and least expensive mechanism for error detection is the
simple parity check. In this technique, a redundant bit, called the parity bit, is appended to
every data unit so that the number of 1s in the unit (including the parity bit) becomes even.
Blocks of data from the source are passed through a parity bit generator,
which appends a parity bit of 1 if the block contains an odd number of 1s (ON bits) and
0 if it contains an even number of 1s. At the receiving end, the parity bit is
computed from the received data bits and compared with the received parity bit, as shown
in Fig. This scheme makes the total number of 1s even, which is why it is called even
parity checking.
Table 3.1.1: Possible 4-bit data words and corresponding code words
Note that, for the sake of simplicity, we are discussing even-parity checking here,
where the number of 1s should be even. It is also possible to use odd-parity checking,
where the number of 1s should be odd.
3.2 CHECKSUM:
In the checksum error detection scheme, the data is divided into k segments of m
bits each. At the sender's end, the segments are added using 1's complement arithmetic
to get the sum, and the sum is complemented to get the checksum. The checksum segment
is sent along with the data segments, as shown in Fig. 3.2.1 (a). At the receiver's end, all
received segments are added using 1's complement arithmetic to get the sum, and the sum
is complemented. If the result is zero, the received data is accepted; otherwise it is discarded.
Fig 3.2.1: (a) Sender’s end for the calculation of the checksum, (b) Receiving end for
checking the checksum
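The two halves of Fig. 3.2.1 can be sketched as follows (the segment values are illustrative, the function names are ours):

```python
def ones_complement_sum(segments, m):
    """Add m-bit segments with end-around carry (1's complement arithmetic)."""
    total = 0
    mask = (1 << m) - 1
    for s in segments:
        total += s
        total = (total & mask) + (total >> m)  # fold the carry back in
    return total

def checksum(segments, m):
    """Sender side, Fig. 3.2.1 (a): complement of the 1's complement sum."""
    return ones_complement_sum(segments, m) ^ ((1 << m) - 1)

def verify(segments_with_checksum, m):
    """Receiver side, Fig. 3.2.1 (b): the total sum must complement to zero."""
    s = ones_complement_sum(segments_with_checksum, m)
    return (s ^ ((1 << m) - 1)) == 0

data = [0b10110011, 0b01101100, 0b11110000]   # k = 3 segments of m = 8 bits
c = checksum(data, 8)
assert verify(data + [c], 8)                  # clean data is accepted
```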
3.3 CYCLIC REDUNDANCY CHECK (CRC):
The cyclic redundancy check (CRC) is the most powerful and easiest-to-implement
technique. Unlike the checksum scheme, which is based on addition, CRC is based on binary
division. In CRC, a sequence of redundant bits, called cyclic redundancy check bits, is
appended to the end of the data unit so that the resulting data unit becomes exactly divisible
by a second, predetermined binary number. At the destination, the incoming data unit is
divided by the same number. If there is no remainder at this step, the data unit is assumed
to be correct and is therefore accepted.
The remainder of this division process generates the r-bit FCS. On receiving the packet, the
receiver divides the (k + r)-bit frame by the same predetermined number; if this produces
no remainder, it can be assumed that no error occurred during transmission.
Operations at both the sender and receiver ends are shown in Fig.
The transmitter can generate the CRC by using a feedback shift register circuit.
The same circuit can also be used at the receiving end to check whether any error has
occurred. All the values can be expressed as polynomials in a dummy variable X. For
example, for P = 11001 the corresponding polynomial is X^4 + X^3 + 1. A polynomial is
selected to have at least the following properties: it should not be divisible by X, and it
should be divisible by (X + 1).
The first condition guarantees that all burst errors of a length equal to the degree of the
polynomial are detected. The second condition guarantees that all burst errors affecting an
odd number of bits are detected. The CRC process can be expressed as

X^n M(X) / P(X) = Q(X) + R(X) / P(X)

where n is the number of bits in the FCS.
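As a sketch of the division process, using the generator P = 11001 (X^4 + X^3 + 1) from the text (the data bits are illustrative, the function names are ours):

```python
def crc_remainder(data_bits: str, poly_bits: str) -> str:
    """Append r zero bits and divide modulo 2 by the generator; the r-bit
    remainder is the frame check sequence (FCS)."""
    r = len(poly_bits) - 1
    dividend = [int(b) for b in data_bits + "0" * r]
    poly = [int(b) for b in poly_bits]
    for i in range(len(data_bits)):
        if dividend[i]:                      # XOR is mod-2 subtraction
            for j, p in enumerate(poly):
                dividend[i + j] ^= p
    return "".join(str(b) for b in dividend[-r:])

def crc_check(frame_bits: str, poly_bits: str) -> bool:
    """Receiver: the whole (k + r)-bit frame must leave no remainder."""
    r = len(poly_bits) - 1
    dividend = [int(b) for b in frame_bits]
    poly = [int(b) for b in poly_bits]
    for i in range(len(frame_bits) - r):
        if dividend[i]:
            for j, p in enumerate(poly):
                dividend[i + j] ^= p
    return not any(dividend[-r:])

P = "11001"                                  # X^4 + X^3 + 1, as in the text
fcs = crc_remainder("10110111", P)
assert crc_check("10110111" + fcs, P)        # error-free frame is accepted
```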
When a particle from a cosmic ray hits the basic memory cell, it generates a radial
distribution of electron–hole pairs along the transport track. These generated electron–
hole pairs can cause soft errors by changing the values stored in the memory cell leading
to data corruption and system failure. For transistors with a large feature size, a
radiation event affects only one memory cell, which means that only the SEU occurs. In
this case, the use of single error-correction (SEC) double error-detection (DED) codes is
enough to protect the memory from radiation effects. As the feature size enters the DSM
range, the critical charge keeps decreasing and the area of the memory cell scales down
with each successive technology node. This makes more memory cells affected by a
particle hit, as shown in Fig. 2. For CMOS bulk technology, as the cell-to-cell
spacing decreases, the electron–hole pairs generated in the substrate can diffuse to
nearby cells and induce MBUs.
FIG 3.3.3: Schematic description of memory cells included in the radiation effect with
variation of the technology node.
This set of 2^k code words is called a block code. For a block code to be useful,
there should be a one-to-one correspondence between a message u and its code word v. A
desirable structure for a block code to possess is linearity; with this structure, the
encoding complexity is greatly reduced.
1. An (n, k) binary linear block code (BLBC) can be specified by any set of k linearly
independent code words c0, c1, . . . , ck−1. If we arrange the k code words into a k × n
matrix G, G is called a generator matrix for C.
2. The generator matrix G′ of a systematic code has the form [Ik A], where Ik is the k × k
identity matrix.
3. G′ can be obtained by permuting the columns of G and by performing row operations
on G. We say that the code generated by G′ is an equivalent code of the one generated by G.
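A small sketch of a systematic generator matrix and mod-2 encoding; the (7, 4) code and its A block are our own example, not taken from the text:

```python
import numpy as np

# Systematic generator matrix G' = [I_k  A] for an illustrative (7, 4) code.
k = 4
A = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])
G = np.hstack([np.eye(k, dtype=int), A])

def encode(u):
    """Code word v = u . G (mod 2); systematic, so v starts with u itself."""
    return np.array(u) @ G % 2

v = encode([1, 0, 1, 1])
assert list(v[:k]) == [1, 0, 1, 1]       # the message is embedded in v
# Linearity: the mod-2 sum of two code words is again a code word.
u1, u2 = [1, 0, 1, 1], [0, 1, 1, 0]
assert np.array_equal((encode(u1) + encode(u2)) % 2, encode([1, 1, 0, 1]))
```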
Therefore, whether an error can be corrected or detected is governed by the rules discussed next.
We discuss the code design technique for 3-bit BEC with QAEC. The approach
used is based on syndrome decoding and the analysis of the requirements in terms of
parity check bits. The formulation of the problem is similar to the one used in some
previous studies; the process used to design QAEC codes can be divided into the error
space satisfiability problem and the unique syndrome satisfiability problem.
For a code with k data bits and c check bits, its input can represent 2^k binary values. If
a single error occurs, there are k + c possible bit positions, giving an output space of
(k + c) · 2^k values. If adjacent errors occur, there are k + c − 1 bit positions for double
adjacent errors with an output space of (k + c − 1) · 2^k values, k + c − 2 bit positions for
triple adjacent errors with an output space of (k + c − 2) · 2^k values, . . . , and
k + c − (N − 1) bit positions for N adjacent errors with an output space of
(k + c − (N − 1)) · 2^k values. If almost adjacent errors occur, there are k + c − 2 bit
positions for errors in a 3-bit window with an output space of (k + c − 2) · 2^k values and
k + c − 3 bit positions for each type of error in a 4-bit window with an output space of
(k + c − 3) · 2^k values. To obtain the codes that can correct the errors with
certain fault types, the sum of the output space values of all error patterns should be less
than or equal to the whole output space value 2^(k+c).
For the proposed code, to correct 3-bit burst and quadruple adjacent errors, the
total count of error pattern positions is (k + c − 3) + (k + c − 2) + (k + c − 1) + (k + c) +
(k + c − 2), for quadruple adjacent errors, triple adjacent errors, double adjacent errors,
single errors, and 3-bit almost adjacent errors, respectively.
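The counting argument above can be checked directly; a small sketch using the data-bit and check-bit values given in the text (the function name is ours):

```python
def qaec_error_patterns(k: int, c: int) -> int:
    """Total number of correctable error-pattern positions for the proposed
    code: single, double adjacent, triple adjacent, quadruple adjacent and
    3-bit almost adjacent errors."""
    n = k + c
    return n + (n - 1) + (n - 2) + (n - 3) + (n - 2)

# Error space satisfiability: the summed output space of all error patterns
# must fit into the whole output space 2**(k + c).  Check-bit counts are
# those given in the text: 7, 8 and 9 for 16, 32 and 64 data bits.
for k, c in [(16, 7), (32, 8), (64, 9)]:
    assert qaec_error_patterns(k, c) * 2 ** k <= 2 ** (k + c)
```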
From the viewpoint of binary linear block codes, if a type of error pattern is to be
corrected, the syndrome of each individual error pattern must be unique. For the proposed
code, the error patterns are (. . . , 1, . . .) for SEC, (. . . , 11, . . .) for DAEC, (. . . , 111, . . .)
for TAEC, (. . . , 1111, . . .) for QAEC, and (. . . , 101, . . .) for 3-bit almost adjacent error
correction. Therefore, the unique syndrome satisfiability can be expressed as

S0i ≠ S0j,
S1i ≠ S1j,
S2i ≠ S2j,
S3i ≠ S3j,
S4i ≠ S4j,

for i ≠ j; in addition, syndromes belonging to different error pattern types must be
distinct from one another.
where S0i is the syndrome for a single bit error, S1i the syndrome for double
adjacent bit errors, S2i the syndrome for triple adjacent bit errors, S3i the syndrome
for 3-bit almost adjacent bit errors, and S4i the syndrome for quadruple adjacent bit
errors. The syndrome variables S0i, S1i, S2i, S3i, and S4i are linear combinations of the
H matrix columns obeying the rules

S0i = hi
S1i = hi ⊕ hi−1
S2i = hi ⊕ hi−1 ⊕ hi−2
S3i = hi ⊕ hi−2
S4i = hi ⊕ hi−1 ⊕ hi−2 ⊕ hi−3
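These XOR-of-columns rules can be checked numerically; a small sketch with an illustrative random matrix (not a parity check matrix from the text):

```python
import numpy as np

def pattern_syndrome(H, i, offsets):
    """Mod-2 sum (XOR) of the H-matrix columns h_{i-o} for o in offsets."""
    return tuple(sum(H[:, i - o] for o in offsets) % 2)

# Toy parity check matrix with 7 check bits and 23 columns, purely
# illustrative: a real QAEC matrix must keep all these syndromes unique.
rng = np.random.default_rng(7)
H = rng.integers(0, 2, size=(7, 23))

i = 10
S0 = pattern_syndrome(H, i, [0])           # single error:    h_i
S1 = pattern_syndrome(H, i, [0, 1])        # double adjacent: h_i ^ h_{i-1}
S2 = pattern_syndrome(H, i, [0, 1, 2])     # triple adjacent
S3 = pattern_syndrome(H, i, [0, 2])        # 3-bit almost adjacent (101)
S4 = pattern_syndrome(H, i, [0, 1, 2, 3])  # quadruple adjacent
assert S0 == tuple(H[:, i])
```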
To construct the expected codes, the first step is to determine the number of
check bits. From the standpoint of low redundancy, the number of check bits is set to
the values shown in Table I: seven for codes with 16 data bits, eight for codes with 32
data bits, and nine for codes with 64 data bits.
The main idea of the algorithm is based on the recursive backtracking algorithm.
First, an identity matrix with block size (n − k) is constructed as the initial matrix, and
the corresponding syndromes of the error patterns are added to the syndrome set. Then, a
column vector selected from the 2^(n−k) − 1 column candidates is added to the right side
of the initial matrix.
This process is defined as the column-added action. Meanwhile, the new syndromes
belonging to the newly added column are calculated. If none of the new syndromes is equal
to an element of the syndrome set, the column-added action succeeds and the corresponding
new syndromes are added to the syndrome set; the base matrix is updated to the
previous matrix with the added column. Otherwise, the column-added action fails and
another new column is selected from the candidates. If all the candidates have been tried
and the column-added action still fails, one column from the right side of the previous
matrix and the corresponding syndromes are
removed from the base matrix and the syndrome set, respectively. Then, the
algorithm continues the column-added action until the matrix dimension reaches the
expected value.
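The column-added action can be sketched as a recursive backtracking search. For brevity, this sketch handles only single and double adjacent error syndromes (SEC-DAEC) rather than the paper's full 3-bit burst and QAEC pattern set, and all names are ours:

```python
from itertools import product

def sec_daec_syndromes(cols, j):
    """Syndromes contributed by column j: the single-error syndrome h_j and,
    if j > 0, the double-adjacent syndrome h_j XOR h_{j-1}."""
    out = [cols[j]]
    if j > 0:
        out.append(tuple(a ^ b for a, b in zip(cols[j], cols[j - 1])))
    return out

def build_matrix(r, n, syndromes_of):
    """Grow a parity check matrix column by column, keeping all error-pattern
    syndromes distinct; on failure, drop the last column and try the next
    candidate (the column-added action with backtracking)."""
    candidates = [c for c in product([0, 1], repeat=r) if any(c)]
    # Initial matrix: the (n - k) x (n - k) identity for the check bits.
    cols = [tuple(int(i == j) for i in range(r)) for j in range(r)]
    seen = set()
    for j in range(len(cols)):
        seen.update(syndromes_of(cols, j))

    def extend():
        if len(cols) == n:
            return True
        for cand in candidates:
            cols.append(cand)
            new = syndromes_of(cols, len(cols) - 1)
            if len(set(new)) == len(new) and not seen.intersection(new):
                seen.update(new)             # column-added action succeeds
                if extend():
                    return True
                seen.difference_update(new)  # backtrack: drop the syndromes
            cols.pop()                       # backtrack: drop the column
        return False

    return cols if extend() else None

H_cols = build_matrix(4, 7, sec_daec_syndromes)
assert H_cols is not None and len(H_cols) == 7
```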
For both optimization criteria mentioned before, the columns used to construct the
matrix should have as low a weight as possible. Therefore, the weight of each used column
is restricted so as to minimize the weight of the matrix and speed up the search for
better solutions. The method of column weight restriction for 3-bit BEC codes is
shown in Fig., where Ai is the minimum available weight of the column. By controlling
the weight of each column, the computation can quickly cover the whole cycle
list by decreasing the number of available candidate columns added to the base matrix.
Although this exhaustive control quickly narrows the search scope, it is unwise to apply it
to all the bits, as the harsh restriction can cause better solutions to be missed. With the
proposed column weight restriction method, the search algorithm with more optimization
criteria for the columns can find solutions more effectively. The new search process is
remarkably effective for codes with a small block size. When the number of data bits
reaches 32 or 64, however, the computing time is still huge. Hence, a function that records
the past procedure is developed together with the column weight restriction method to
shorten the time spent searching for the target codes.
The detailed algorithm flow is shown in Fig. 3.3.6.
Fig 3.3.6: Flow of the algorithm with column weight restriction and past procedure
record.
At the initialization step, all the column weight restrictions are set to (n − k).
Then, A0 is set to 2, which means that the number of 1s in the corresponding column is
restricted to 2. The search algorithm with column weight restriction then starts to find a
solution. If a solution exists, the status of the restricted bit is recorded and updated, and
the algorithm shifts to the column weight setting of the next bit; otherwise, the column
weight restriction is released to Ai = Ai + 1. As the program constraints accumulate, the
target code comes closer to appearing.
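The restriction-and-release loop can be sketched as follows. Here `solve_under` stands in for the backtracking search run under per-column weight caps, and the demo predicate is purely illustrative; the paper's flow additionally records the past procedure and advances bit by bit:

```python
def relaxing_search(num_cols, max_weight, solve_under):
    """Start every column at the tightest weight budget (Ai = 2) and release
    one budget at a time (Ai -> Ai + 1) until a solution appears or every
    budget reaches the maximum column weight n - k."""
    limits = [2] * num_cols
    while True:
        solution = solve_under(limits)
        if solution is not None:
            return solution, limits
        for i, a in enumerate(limits):       # release the first releasable cap
            if a < max_weight:
                limits[i] = a + 1
                break
        else:
            return None, limits              # fully relaxed, still no code

# Illustrative stand-in: "a solution exists once enough weight is allowed".
toy_solver = lambda limits: "found" if sum(limits) >= 8 else None
sol, limits = relaxing_search(3, 4, toy_solver)
assert sol == "found"
```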
CHAPTER 4
PROPOSED SYSTEM
In terms of computing time, it is possible to enumerate all the solutions for QAEC
codes with 16 data bits, but it is impossible for QAEC codes with 32 and 64 data
bits. Therefore, in this paper, the best solutions are presented for QAEC codes with 16
data bits, while for QAEC codes with 32 and 64 data bits the best solutions found in a
reasonable time (one week) using the proposed search algorithm are presented. In terms of
the two optimization criteria mentioned previously, for the (23, 16) codes the two best
parity check matrices, with both criteria optimized, are shown in Figs. 4.4.1 and 4.4.2. The
parity check matrix for (40, 32) optimized to reduce the total number of ones is shown in
Fig. 4.4.3, the parity check matrix for (40, 32) optimized to reduce the maximum number
of ones in a row is shown in Fig. 4.4.4, and for the (73, 64) codes the solutions obtained
with both criteria optimized are the same (Fig. 4.4.5).
Fig 4.4.1: Best parity check matrix for the (23, 16) 3-bit burst code with QAEC, optimized
to reduce the total number of ones and the maximum number of ones in a row.
Fig 4.4.2: Best parity check matrix for the (23, 16) 3-bit burst code with QAEC, optimized
to reduce the total number of ones and the maximum number of ones in a row.
Fig 4.4.3: Parity check matrix for the (40, 32) 3-bit burst code with QAEC, optimized to
reduce the total number of ones.
Fig 4.4.4: Parity check matrix for the (40, 32) 3-bit burst code with QAEC, optimized to
reduce the maximum number of ones in a row.
Fig 4.4.5: Parity check matrix for the (73, 64) 3-bit burst code with QAEC, optimized to
reduce the total number of ones and the maximum number of ones in a row.
When particles hit the memory and cause MBUs, the contents of the affected
memory cells are flipped. Here, to illustrate the correction ability of the QAEC codes, the
quadruple adjacent bits D2, D3, D4, and D5 are flipped. In the decoding process, the
syndrome is calculated from the stored check bits and data bits using the structure of the
parity check matrix. Through the correspondence between the syndrome and the XOR of
the parity check matrix columns described earlier, the flipped bits can be located.
With the flipped bits inverted, the errors introduced during storage in the memory are
effectively corrected. This is the whole procedure of encoding and decoding for the
proposed QAEC codes.
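The decoding step can be sketched as a syndrome lookup over adjacent-column XORs. The matrix below is a random stand-in, not one of the parity check matrices from the figures, so we only check that the output is a valid codeword; with a true QAEC matrix the matched pattern is unique and the original data is recovered:

```python
import numpy as np

def decode_and_correct(H, word):
    """Compute the syndrome H . word (mod 2); if it equals the XOR of up to
    four adjacent H columns, flip those bits back (SEC/DAEC/TAEC/QAEC)."""
    syndrome = tuple(H @ word % 2)
    if not any(syndrome):
        return word                          # all-zero syndrome: no error
    for width in (1, 2, 3, 4):
        for i in range(H.shape[1] - width + 1):
            if tuple(H[:, i:i + width].sum(axis=1) % 2) == syndrome:
                corrected = word.copy()
                corrected[i:i + width] ^= 1  # invert the located bits
                return corrected
    raise ValueError("uncorrectable error pattern")

rng = np.random.default_rng(0)
H = rng.integers(0, 2, size=(8, 16))         # illustrative stand-in matrix
stored = np.zeros(16, dtype=int)             # the all-zero word is a codeword
received = stored.copy()
received[2:6] ^= 1                           # quadruple adjacent upset: D2..D5
corrected = decode_and_correct(H, received)
assert not (H @ corrected % 2).any()         # corrected word is a codeword
```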
CHAPTER 5
HARDWARE REQUIREMENTS:
5.1 GENERAL
Integrated circuit (IC) technology is the enabling technology for a whole host of
innovative devices and systems that have changed the way we live. The integrated circuit
was invented by Jack Kilby and Robert Noyce; Kilby received the 2000 Nobel Prize in
Physics for the invention. Without the integrated circuit, neither transistors nor computers
would be as important as they are today. VLSI systems are much smaller and consume less power
than the discrete components used to build electronic systems before the 1960s.
Integration allows us to build systems with many more transistors, allowing much
more computing power to be applied to solving a problem. Integrated circuits are also
much easier to design and manufacture and are more reliable than discrete systems; that
makes it possible to develop special-purpose systems that are more efficient than general-
purpose computers for the task at hand.
Electronic systems now perform a wide variety of tasks in daily life. Electronic
systems in some cases have replaced mechanisms that operated mechanically,
hydraulically, or by other means; electronics are usually smaller, more flexible, and easier
to service. In other cases, electronic
systems have created totally new applications. Electronic systems perform a variety of
tasks, some of them visible, some more hidden:
Personal entertainment systems such as portable MP3 players and DVD players perform
sophisticated algorithms with remarkably little energy.
Electronic systems in cars operate stereo systems and displays; they also control fuel
injection systems, adjust suspensions to varying terrain, and perform the control functions
required for anti-lock braking (ABS) systems.
Digital electronics compress and decompress video, even at high definition data rates, on-
the-fly in consumer electronics.
Low-cost terminals for Web browsing still require sophisticated electronics, despite their
dedicated function.
As electronic systems become ever more complex,
we build not a few general-purpose computers but an ever-wider range of special-purpose
systems. Our ability to do so is a testament to our growing mastery of both integrated
circuit manufacturing and design, but the increasing demands of customers continue to
test the limits of design and manufacturing.
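The design procedure outlined above can be illustrated with a toy backtracking search for a short SEC-DAEC parity-check matrix (sizes, candidate ordering, and the weight limit below are illustrative assumptions, not the paper's actual QAEC construction): every correctable pattern must receive a unique nonzero syndrome, candidate columns are restricted by weight to prune the search, and the two matrix-weight optimization metrics are reported at the end.

```python
# Toy backtracking search for a SEC-DAEC parity-check matrix.
# R parity bits, N total code bits: toy sizes, not the paper's QAEC codes.
R, N = 5, 8

def syndrome(cols, err_positions):
    """XOR the H-matrix columns touched by an error pattern."""
    s = 0
    for p in err_positions:
        s ^= cols[p]
    return s

def valid(cols):
    # Unique-syndrome satisfiability: every correctable pattern
    # (each single error and each adjacent double error) must map
    # to a distinct, nonzero syndrome.
    pats = [(i,) for i in range(len(cols))] + \
           [(i, i + 1) for i in range(len(cols) - 1)]
    seen = set()
    for pat in pats:
        s = syndrome(cols, pat)
        if s == 0 or s in seen:
            return False
        seen.add(s)
    return True

def search(cols, max_weight=3):
    """Recursive backtracking over candidate columns with a weight
    restriction: heavy columns are skipped, which prunes the search
    and keeps the number of ones in H (decoder complexity) low."""
    if len(cols) == N:
        return cols
    for cand in range(1, 2 ** R):
        if bin(cand).count("1") > max_weight:
            continue
        cols.append(cand)
        if valid(cols):
            res = search(cols, max_weight)
            if res:
                return res
        cols.pop()
    return None

H = search([])
total_ones = sum(bin(c).count("1") for c in H)
heaviest_row = max(sum((c >> r) & 1 for c in H) for r in range(R))
print(H, total_ones, heaviest_row)
```

The two printed metrics correspond to the criteria named above: the total number of ones in H and the number of ones in its heaviest row.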
Size: Integrated circuits are much smaller; both transistors and wires are shrunk to micrometer sizes, compared to the millimeter or centimeter scales of discrete components. Small size leads to advantages in speed and power consumption, since smaller components have smaller parasitic resistances, capacitances, and inductances.
Speed: Signals can be switched between logic 0 and logic 1 much more quickly within a chip than between chips. Communication within a chip can occur hundreds of times faster than communication between chips on a printed circuit board. The high speed of circuits on-chip is due to their small size: smaller components and wires have smaller parasitic capacitances to slow down the signal.
Power consumption: Logic operations within a chip also take much less power. Once again, lower power consumption is largely due to the small size of circuits on the chip: smaller parasitic capacitances and resistances require less power to drive them.
5.6 TECHNOLOGY
Most manufacturing processes are fairly tightly coupled to the item they are manufacturing. An assembly line built to produce Buicks, for example, would have to undergo moderate reorganization to build Chevys: tools like sheet metal molds would have to be replaced, and even some machines would have to be modified. And either assembly line would be far removed from what is required to produce electric drills.
Creating layouts is very time-consuming and very important: the size of the layout determines the cost to manufacture the circuit, and the shapes of elements in the layout determine the speed of the circuit as well. During manufacturing, a photolithographic (photographic printing) process is used to transfer the layout patterns from the masks to the wafer. The patterns left by the mask are used to selectively change the wafer: impurities are added at selected locations in the wafer; insulating and conducting materials are added on top of the wafer as well.
These fabrication steps require high temperatures, small amounts of highly toxic
chemicals, and extremely clean environments. At the end of processing, the wafer is
divided into a number of chips.
5.10 ECONOMICS
Because integrated circuit manufacturing has so much leverage (a great number of parts can be built with a few standard manufacturing procedures), a great deal of effort has gone into improving IC manufacturing. However, as chips become more complex, the
cost of designing a chip goes up and becomes a major part of the overall cost of the chip.
Moore’s Law
In the 1960s Gordon Moore predicted that the number of transistors that could be
manufactured on a chip would grow exponentially. His prediction, now known as
Moore’s Law, was remarkably prescient. Moore’s ultimate prediction was that transistor
count would double every two years, an estimate that has held up remarkably well.
Today, an industry group maintains the International Technology Roadmap for Semiconductors (ITRS), which maps out strategies to maintain the pace of Moore’s Law.
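Under the doubling-every-two-years reading of Moore's Law, growth compounds quickly; the short sketch below (with an arbitrary baseline, not historical data) makes the arithmetic concrete:

```python
# Compounding under Moore's Law (illustrative baseline, not real data):
# if transistor count doubles every two years, growth over y years is 2**(y/2).
def projected_count(base_count, years_elapsed):
    return base_count * 2 ** (years_elapsed / 2)

growth_20y = projected_count(1, 20) / projected_count(1, 0)
print(growth_20y)  # a factor of 2**10 = 1024 over two decades
```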
Terminology
The most basic parameter associated with a manufacturing process is the minimum channel length of a transistor. (In this book, for example, we will use a technology that can manufacture 180 nm transistors.) A manufacturing technology at a
particular channel length is called a technology node. We often refer to a family of
technologies at similar feature sizes: micron, submicron, deep submicron, and now
nanometer technologies. The term nanometer technology is generally used for
technologies below 100 nm.
One of the less fortunate consequences of Moore’s Law is that the time and
money required to design a chip goes up steadily. The cost of designing a chip
comes from several factors:
Skilled designers are required to specify, architect, and implement the chip. A
design team may range from a half-dozen people for a very small chip to 500
people for a large, high-performance microprocessor.
These designers cannot work without access to a wide range of computer-aided design (CAD) tools. These tools synthesize logic, create layouts, simulate, and verify designs. CAD tools are generally licensed; you must pay a yearly fee to maintain the license. A license for a single copy of one tool, such as logic synthesis, may cost as much as $50,000 US.
The CAD tools require a large compute farm on which to run. During the most
intensive part of the design process, the design team will keep dozens of
computers running continuously for weeks or months.
A large ASIC, which contains millions of transistors but is not fabricated on the
state-of-the-art process, can easily cost $20 million US and as much as $100
million. Designing a large microprocessor costs hundreds of millions of dollars.
In the 1960s, standard parts were logic gates; in the 1970s they were LSI
components. Today, standard parts include fairly specialized components: communication
network interfaces, graphics accelerators, floating point processors. All these parts are
more specialized than microprocessors but are used in enough volume that designing
special-purpose chips is worth the effort.
In fact, putting a complex, high-performance function on a single chip often makes other applications possible; for example, single-chip floating point processors make high-speed numeric computation available on even inexpensive personal computers.
Chip designs are simulated to ensure that the chip’s circuits compute the proper functions for a sequence of inputs chosen to exercise the chip. But each chip that comes off the manufacturing line must also undergo manufacturing test.
Manufacturing test:
The chip must be exercised to demonstrate that no manufacturing defects rendered
the chip useless. Because IC manufacturing tends to introduce certain types of defects and
because we want to minimize the time required to test each chip, we can’t just use the
input sequences created for design verification to perform manufacturing test. Each chip
must be designed to be fully and easily testable. Finding out that a chip is bad only after
you have plugged it into a system is annoying at best and dangerous at worst. Customers
are unlikely to keep using manufacturers who regularly supply bad chips.
Defects introduced during manufacturing range from the catastrophic (contamination that destroys every transistor on the wafer) to the subtle (a single broken wire or a crystalline defect that kills only one transistor). While some bad chips can be
found very easily, each chip must be thoroughly tested to find even subtle flaws that
produce erroneous results only occasionally. Tests designed to exercise functionality and
expose design bugs don’t always uncover manufacturing defects. We use fault models to
identify potential manufacturing problems and determine how they affect the chip’s
operation.
The most common fault model is stuck-at-0/1: the defect causes a logic gate’s
output to be always 0 (or 1), independent of the gate’s input values. We can often
determine whether a logic gate’s output is stuck even if we can’t directly observe its
outputs or control its inputs. We can generate a good set of manufacturing tests for the
chip by assuming each logic gate’s output is stuck at 0 (then 1) and finding an input to the
chip which causes different outputs when the fault is present or absent.
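This test-generation idea can be sketched for a toy two-gate circuit (the circuit and the fault site below are invented purely for illustration): simulate the good and faulty versions and search for an input vector on which they disagree.

```python
from itertools import product

# Toy two-gate circuit, out = NAND(AND(a, b), c); the circuit and the
# fault site are invented here purely to illustrate stuck-at testing.
def circuit(a, b, c, and_out_stuck=None):
    g = (a & b) if and_out_stuck is None else and_out_stuck
    return 1 - (g & c)  # NAND of the internal node with c

def find_test(stuck_value):
    """Return an input vector on which the good and faulty circuits
    disagree, i.e. a manufacturing test for that stuck-at fault."""
    for a, b, c in product((0, 1), repeat=3):
        if circuit(a, b, c) != circuit(a, b, c, and_out_stuck=stuck_value):
            return (a, b, c)
    return None  # fault is undetectable

print(find_test(0))  # only a = b = c = 1 exposes stuck-at-0 on the AND output
print(find_test(1))
```

Real automatic test pattern generation works on the same principle, but with algorithms that scale to millions of gates rather than exhaustive enumeration.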
Because a basic SRAM is not clocked, the lookup-table logic element operates much as any other logic gate: as its inputs change, its output changes after some delay. Each address in the SRAM represents a combination of inputs to the logic element. The value stored at that address represents the value of the function for that input combination; an n-input function requires an SRAM with 2^n locations.
A typical logic element has four inputs. The delay through the lookup table is
independent of the bits stored in the SRAM, so the delay through the logic element is the
same for all functions. This means that, for example, a lookup table-based logic element
will exhibit the same delay for a 4-input XOR and a 4-input NAND.
In contrast, a 4-input XOR built with static CMOS logic is considerably slower
than a 4-input NAND. Of course, the static logic gate is generally faster than the logic
element. Logic elements generally contain registers (flip-flops and latches) as well as combinational logic.
A flip-flop or latch is small compared to the combinational logic element (in sharp
contrast to the situation in custom VLSI), so it makes sense to add it to the combinational
logic element. Using a separate cell for the memory element would simply take up
routing resources. The memory element is connected to the output; whether it stores a
given value is controlled by its clock and enable inputs.
The wiring channels that connect to the logic elements’ inputs and outputs also
need to be programmable. A wiring channel has a number of programmable connections
such that each input or output generally can be connected to any one of several different
wires in the channel.
CHAPTER-6
TOOLS
6.1 Introduction:
The main tools required for this project can be classified into two broad categories.
Hardware requirement
Software requirement
Standards Supported:
ModelSim VHDL supports the IEEE 1076-1987 and 1076-1993 VHDL standards, the 1164-1993 Standard Multivalue Logic System for VHDL Model Interoperability, and the 1076.2-1996 Standard VHDL Mathematical Packages. Any design developed with ModelSim will be compatible with any other VHDL system that is compliant with either IEEE Standard 1076-1987 or 1076-1993. ModelSim Verilog is based on IEEE Std
1364-1995 and a partial implementation of 1364-2001, Standard Hardware Description
Language Based on the Verilog Hardware Description Language. The Open Verilog
International Verilog LRM version 2.0 is also applicable to a large extent. Both PLI
(Programming Language Interface) and VCD (Value Change Dump) are supported for
ModelSim PE and SE users.
6.4 MODELSIM:
Basic Steps for Simulation
This section provides further detail related to each step in the process of
simulating your design using ModelSim. A typical simulation uses the following:
design files (VHDL, Verilog, and/or SystemC), including stimulus for the design
libraries, both working and resource
modelsim.ini (automatically created by the library mapping command)
A library is a location where data to be used for simulation is stored. Libraries are
ModelSim’s way of managing the creation of data before it is needed for use in
simulation. It also serves as a way to streamline simulation invocation. Instead of
compiling all design data each and every time you simulate, ModelSim uses binary pre-
compiled data from these libraries. So, if you make changes to a single Verilog module,
only that module is recompiled, rather than all modules in the design.
Design libraries can be used in two ways: 1) as a local working library that
contains the compiled version of your design; 2) as a resource library. The contents of
your working library will change as you update your design and recompile. A resource
library is typically unchanging, and serves as a parts source for your design. Examples of
resource libraries might be: shared information within your group, vendor libraries,
packages, or previously compiled elements of your own working design.
You can create your own resource libraries, or they may be supplied by another
design team or a third party (e.g., a silicon vendor). For more information on resource
libraries and working libraries, see "Working library versus resource libraries",
"Managing library contents", "Working with design libraries", and "Specifying the resource libraries".
Creating the Logical Library – vlib
Before you can compile your source files, you must create a library in which to
store the compilation results. You can create the logical library using the GUI, using File
> New > Library (see "Creating a library"), or you can use the vlib command. For
example, the command:
vlib work
creates a library named work. By default, compilation results are stored in the work library.
Mapping the Logical Work to the Physical Work Directory – vmap
VHDL uses logical library names that can be mapped to ModelSim
library directories. If libraries are not mapped properly, and you invoke your
simulation, necessary components will not be loaded and simulation will fail.
Similarly, compilation can also depend on proper library mapping.
ModelSim’s compiler for VHDL design units is vcom. VHDL files must be compiled in an order that satisfies the dependencies of the design. Projects may assist you in determining the compile order; for more information, see "Auto-generating compile order" (UM-46). See "Compiling VHDL files" (UM-73) for details on VHDL compilation.
Compiling SystemC – sccom
ModelSim’s compiler for SystemC design units is sccom, and is used only if
you have SystemC components in your design. See "Compiling SystemC files" for
details.
Step 3 - Loading the design for simulation
vsim <top>
Your design is ready for simulation after it has been compiled and (optionally)
optimized with vopt. For more information on optimization, see Optimizing Verilog
designs. You may then invoke vsim with the names of the top-level modules (many designs contain only one top-level module) or the name you assigned to the optimized version of the design.
FPGA stands for Field Programmable Gate Array, a device that has an array of logic modules, I/O modules, and routing tracks (programmable interconnect). An FPGA can be configured by the end user to implement specific circuitry. Speeds of up to 100 MHz were once typical; at present, speeds reach the GHz range.
Main applications are DSP, FPGA-based computers, logic emulation, ASICs, and ASSPs. FPGAs are mainly programmed using SRAM (Static Random Access Memory). SRAM is volatile, and the main advantage of SRAM programming technology is re-configurability. Issues in FPGA technology are the complexity of the logic element, clock support, I/O support, and interconnections (routing).
To simulate, first the entity design has to be loaded into the simulator. Do this
by selecting from the menu:
Simulate > Simulate
A new window will appear listing all the entities (not filenames) that are in the
work library. Select FA entity for simulation and click OK.
The signals are listed in the order in which they are declared in the code. To display the waveform, select the signals to display (hold CTRL and click to select multiple signals) and, from the signal list window menu, select:
Add > Wave > Selected signals
Debug results.
Projects are persistent. In other words, they will open automatically every time you invoke ModelSim unless you specifically close them.
The following diagram shows the basic steps for simulating a design within a ModelSim project.
In this case, it is possible to use Verilog to write a test bench to verify the
functionality of the design using files on the host computer to define stimuli, to
interact with the user, and to compare results with those expected.
A Verilog model is translated into the "gates and wires" that are mapped onto a
programmable logic device such as a CPLD or FPGA, and then it is the actual
hardware being configured, rather than the Verilog code being "executed" as if on
some form of a processor chip.
6.6.1 Implementation:
Synthesis (XST)
– Produce a netlist file starting from an HDL description
Translate (NGDBuild)
– Converts all input design netlists and then writes the results into a single merged file that describes logic and constraints.
Mapping (MAP)
– Maps the logic on device components.
– Takes a netlist and groups the logical elements into CLBs and IOBs
(components of FPGA).
Place and Route (PAR)
– Places FPGA cells and connects them.
Bit stream generation
XILINX Design Process
Step 1: Design entry
– HDL (Verilog or VHDL; ABEL for CPLDs), schematic drawings, bubble diagrams
Step 2: Synthesis
– Translates .v, .vhd, and .sch files into a netlist file (.ngc)
Step 3: Implementation
– FPGA: Translate/Map/Place & Route; CPLD: Fitter
Step 4: Configuration/Programming
– Download a BIT file into the FPGA
– Program a JEDEC file into the CPLD
Custom ICs are expensive and take a long time to design, so they are useful only when produced in bulk quantities. FPGAs, by contrast, are easy to implement in a short time with the help of computer-aided design (CAD) tools (because there is no physical layout process, no mask making, and no IC manufacturing).
Some disadvantages of FPGAs are that they are slow compared to custom ICs, they cannot handle very complex designs, and they draw more power. A Xilinx logic block consists of one Look Up Table (LUT) and one flip-flop.
An LUT is used to implement many different functionalities. The input lines to the logic block go into the LUT and address it. The output of the LUT gives the result of the logic function that it implements, and the output of the logic block is the registered or unregistered output of the LUT. An SRAM is used to implement a LUT: a k-input logic function is implemented using a 2^k x 1 SRAM. The number of different possible functions for a k-input LUT is 2^(2^k).
The advantage of such an architecture is that it supports the implementation of very many logic functions; the disadvantage is the unusually large number of memory cells required to implement such a logic block when the number of inputs is large.
A B | AND | OR | ...
0 0 |  0  |  0 | ...
0 1 |  0  |  1 | ...
1 0 |  0  |  1 | ...
1 1 |  1  |  1 | ...
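The lookup-table behavior described above can be sketched in a few lines (a behavioral model only, not how an FPGA is physically built): the k inputs address a 2^k-entry table, so a 4-input XOR and a 4-input NAND are each just 16 stored bits, which is also why both exhibit the same delay through the logic element.

```python
from itertools import product

# Behavioral model of a k-input lookup table: the inputs address a
# 2**k-entry memory, so the stored bits fully define the function.
def make_lut(func, k):
    """Fill the 2**k x 1 SRAM contents for a k-input boolean function."""
    return [func(*bits) for bits in product((0, 1), repeat=k)]

def lut_read(table, *inputs):
    """Concatenate the inputs into an SRAM address and read the bit."""
    addr = 0
    for bit in inputs:
        addr = (addr << 1) | bit
    return table[addr]

xor4 = make_lut(lambda a, b, c, d: a ^ b ^ c ^ d, 4)
nand4 = make_lut(lambda a, b, c, d: 1 - (a & b & c & d), 4)

print(len(xor4))                   # 16 entries for k = 4
print(lut_read(xor4, 1, 0, 1, 1))  # 1 ^ 0 ^ 1 ^ 1 = 1
print(2 ** 2 ** 4)                 # 65536 distinct 4-input functions
```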
6.8 Interconnects:
A wire segment can be described as two end points of an interconnect
with no programmable switch between them. A sequence of one or more wire
segments in an FPGA can be termed as a track.
Typically, an FPGA has logic blocks, interconnect, and switch blocks (input/output blocks). Switch blocks lie at the periphery of the logic blocks and interconnect; wire segments are connected to logic blocks through switch blocks. Depending on the required design, one logic block is connected to another, and so on.
HDLs represent a level of abstraction that can isolate the designers from the
details of the hardware implementation. Schematic-based entry gives designers much more visibility into the hardware; it is the better choice for those who are hardware oriented. Another method, rarely used, is state-machine entry. It is the better choice for designers who think of the design as a series of states, but the tools for state-machine entry are limited. In this documentation we deal with HDL-based design entry.
6.11 Synthesis:
The process that translates VHDL or Verilog code into a device netlist format, i.e., a complete circuit with logical elements (gates, flip-flops, etc.) for the design. If the design contains more than one sub-design (for example, to implement a processor we need a CPU as one design element, RAM as another, and so on), then the synthesis process generates a netlist for each design element. The synthesis process also checks code syntax and analyzes the hierarchy of the design, which ensures that the design is optimized for the architecture the designer has selected. The resulting netlist(s) is saved to an NGC (Native Generic Circuit) file (for Xilinx® Synthesis Technology (XST)).
6.12.1 Translate:
The Translate process combines all the input netlists and constraints into a logic design file. This information is saved as an NGD (Native Generic Database) file, which is done using the NGDBuild program. Here, defining constraints means assigning the ports in the design to the physical elements (e.g., pins, switches, buttons) of the targeted device and specifying the timing requirements of the design. This information is stored in a file named UCF (User Constraints File). Tools used to create or modify the UCF are PACE, the Constraint Editor, etc.
6.12.2 Map:
The Map process divides the whole circuit of logical elements into sub-blocks such that they can fit into the FPGA logic blocks. That is, the map process fits the logic defined by the NGD file into the targeted FPGA elements (Configurable Logic Blocks (CLBs) and Input/Output Blocks (IOBs)) and generates an NCD (Native Circuit Description) file, which physically represents the design mapped to the components of the FPGA. The MAP program is used for this purpose.
6.12.3 Place and Route:
The PAR program is used for this process. The place and route process places the sub-blocks from the map process into logic blocks according to the constraints and connects the logic blocks. For example, if a sub-block is placed in a logic block very near an I/O pin, it may save time but may violate some other constraint; a tradeoff among all the constraints is taken into account by the place and route process. The PAR tool takes the mapped NCD file as input and produces a completely routed NCD file as output. The output NCD file contains the routing information.
Since the design is not yet synthesized to gate level, timing and resource usage properties are still unknown.
6.13.3 Functional Simulation (Post-Translate Simulation):
Functional simulation gives information about the logic operation of the circuit. The designer can verify the functionality of the design using this process after the Translate process. If the functionality is not as expected, the designer has to make changes in the code and follow the design flow steps again.
CHAPTER-7
RESULTS
7.1 SIMULATION RESULT:
SYNTHESIS RESULTS:
The developed design is simulated and its functionality verified. Once the functional verification is done, the RTL model is taken through the synthesis process using the Xilinx ISE tool. In the synthesis process, the RTL model is converted to a gate-level netlist mapped to a specific technology library. Within the Spartan-3E family, many different devices are available in the Xilinx ISE tool. To synthesize this design, the device "XC3S500E" has been chosen, with the package "FG320" and the speed grade "-4".
The design is synthesized and its results are analyzed as follows.
ENCODER RTL SCHEMATIC:
TECHNOLOGY SCHEMATIC:
DESIGN SUMMARY:
TIMING REPORT:
CHAPTER-8
CONCLUSION
REFERENCES
[3] S.-Y. Wu, C. Y. Lin, S. H. Yang, J. J. Liaw, and J. Y. Cheng, “Advancing foundry
technology with scaling and innovations,” in Proc. Int. Symp. VLSI-TSA, Apr. 2014,
pp. 1–3.
[5] Y. Bentoutou, “A real time EDAC system for applications onboard earth
observation small satellites,” IEEE Trans. Aerosp. Electron. Syst., vol. 48, no. 1, pp.
648–657, Jan. 2012.
[6] E. Ibe, H. Taniguchi, Y. Yahagi, K.-I. Shimbo, and T. Toba, “Impact of scaling on
neutron-induced soft error in SRAMs from a 250 nm to a 22 nm design rule,” IEEE
Trans. Electron Devices, vol. 57, no. 7, pp. 1527–1538, Jul. 2010.
[8] M. Murat, A. Akkerman, and J. Barak, “Electron and ion tracks in silicon: Spatial and temporal evolution,” IEEE Trans. Nucl. Sci., vol. 55, no. 6, pp. 3046–3054, Dec. 2008.
[12] O. A. Amusan et al., “Charge collection and charge sharing in a 130 nm CMOS
technology,” IEEE Trans. Nucl. Sci., vol. 53, no. 6, pp. 3253–3258, Dec. 2006.
[13] J. D. Black et al., “Characterizing SRAM single event upset in terms of single
and multiple node charge collection,” IEEE Trans. Nucl. Sci., vol. 55, no. 6, pp.
2943–2947, Dec. 2008.
[15] W. Calienes, R. Reis, C. Anghel, and A. Vladimirescu, “Bulk and FDSOI SRAM
resiliency to radiation effects,” in Proc. IEEE 57th Int. Midwest Symp. Circuits Syst.,
Aug. 2014, pp. 655–658.
[19] A. Dutta and N. A. Touba, “Multiple bit upset tolerant memory using a selective
cycle avoidance based SEC-DED-DAEC code,” in Proc. IEEE VLSI Test Symp.,
May 2007, pp. 349–354.
[20] A. Neale and M. Sachdev, “A new SEC-DED error correction code subclass for
adjacent MBU tolerance in embedded memory,” IEEE Trans. Device Mater. Rel., vol.
13, no. 1, pp. 223–230, Mar. 2013.
[21] Z. Ming, X. L. Yi, and L. H. Wei, “New SEC-DED-DAEC codes for multiple bit
upsets mitigation in memory,” in Proc. IEEE/IFIP 20th Int. Conf. VLSI Syst.-Chip,
Oct. 2011, pp. 254–259.
[24] L. Xiao, J. Li, J. Li, and J. Guo, “Hardened design based on advanced orthogonal
Latin code against two adjacent multiple bit upsets (MBUs) in memories,” in Proc.
16th Int. Symp. Quality Electron. Design, Mar. 2015, pp. 485–489.
[25] R. Naseer and J. Draper, “Parallel double error correcting code design to mitigate
multi-bit upsets in SRAMs,” in Proc. IEEE 34th Eur. Solid-State Circuits Conf., Sep. 2008,
pp. 222–225.
[26] M. A. Bajura et al., “Models and algorithmic limits for an ECC-based approach
to hardening sub-100-nm SRAMs,” IEEE Trans. Nucl. Sci., vol. 54, no. 4, pp. 935–
945, Aug. 2007.
[32] J.-L. Autran, D. Munteanu, G. Gasiot, P. Roche, S. Serre, and S. Semikh, Soft-
Error Rate of Advanced SRAM Memories: Modeling and Monte Carlo Simulation.
Rijeka, Croatia: INTECH, 2012.