The History of The Integrated Circuit


The integrated circuit, sometimes called an IC, a microchip, or just a chip, is a series of transistors placed on
a small, flat piece of material that is usually made of silicon. The IC is really a platform for miniaturized
transistors: a small chip that can operate faster than the old-fashioned large transistors used in previous
generations. ICs are also far more durable and significantly cheaper to produce, which allowed them
to become part of many different electronic devices.

The advent of the integrated circuit revolutionized the electronics industry and paved the way for
devices such as mobile phones, computers, CD players, televisions, and many appliances found
around the home. In addition, the spread of the chips helped to bring advanced electronic devices to
all parts of the world.

Early History of the Integrated Circuit

The beginnings of the IC really start with the inherent limitations of the vacuum tube, the large, bulky
device that preceded the transistor, which in turn eventually led to the microchip. Vacuum tubes worked as
electronic circuit elements, but they required warming up before they could operate. They were also quite
vulnerable to being damaged or destroyed by even minor bumps or impacts.

With these limitations in mind, German engineer Werner Jacobi filed a patent in 1949 for a
semiconductor device that operated similarly to the modern integrated circuit. Jacobi lined up five transistors
and used them in a three-stage amplifier arrangement. The result, as Jacobi recognized, was the
ability to shrink devices such as hearing aids and make them cheaper to produce.

Despite Jacobi’s invention, there appeared to be no immediate interest. Three years later, Geoffrey
Dummer, who worked for the Royal Radar Establishment as part of the Ministry of Defence in Britain,
proposed the first fully conceived idea for the integrated circuit. However, despite giving lectures
about his ideas, he was never able to build one successfully. It was this failure to actually create an IC
that moved development of the chip overseas to America.

Invention of the IC

Fast forward to 1957, when the idea of creating small ceramic wafers that each contained one
component was first proposed to the US Army by Jack Kilby. His idea led to the
Micromodule Program, which held quite a bit of promise. However, as the project to develop this idea
started to gain traction, Kilby was inspired to come up with another, even more advanced design that
became the IC that we know today.
Kilby’s prototype was primitive by today’s standards, but it worked, and his idea really took hold when
he went to work for Texas Instruments. On September 12th, 1958, Kilby demonstrated the first
working IC, and he applied for a patent on February 6th, 1959. Kilby’s description of
the device as an electronic circuit that was totally integrated led to the coining of the
term "integrated circuit."

Perhaps not surprisingly, the first customer for Kilby’s invention was the US Air Force. It was not long
before many common electronic devices were being designed with the IC in mind. For his part in
inventing the first true integrated circuit, Kilby won the Nobel Prize in 2000. Nine years later, his work
was labeled a milestone by the IEEE.

Development and Production

Although Kilby’s IC was revolutionary, it was not without problems. One of the most troubling was that
his IC, or chip, was fashioned out of germanium. About six months after Kilby’s IC was first patented,
Robert Noyce, who worked at Fairchild Semiconductor, recognized the limitations of germanium and
created his own chip fashioned from silicon.

At the same time, Jay Last, who led the development team at Fairchild Semiconductor, worked on
producing the first planar integrated circuit. Instead of a single unit, it used transistors in
two pairs, separated by a groove, so that each could operate independently. Despite how revolutionary
Last’s idea was, and despite the success of the prototype, the bosses at Fairchild either didn’t
understand or didn’t recognize his work, so he was let go.

Fairchild went forward and created IC chips for use in the Apollo spacecraft which went to the moon.
It was this program along with using chips for satellites that spread the IC from military applications to
the commercial market. It also lowered the price of the IC drastically which made it perfect for use in
many electronic devices.

Noyce, who stayed at Fairchild, used an idea from Kurt Lehovec, who worked at Sprague Electric, to
create p–n junction isolation. This was a valuable new concept for the IC since it allowed the
transistors placed inside to work independently of each other. This opened new possibilities for the
chip, and it was not long before Fairchild Semiconductor developed self-aligned gates, which all
CMOS computer chips use today.

The development of the self-aligned gate was first credited to Federico Faggin, who came up with the
idea in 1968 and was recognized for his work in 2010, when he received the National Medal of
Technology and Innovation.

The 1960s were also dominated by many lawsuits between rival companies that had developed their
own versions of the microchip as it was being improved for many different types of electronic
devices. However, it would be the computer that saw the greatest benefit. In the 1950s, computers
were massive devices that could barely hold a few kilobytes of memory. The incorporation of the
integrated circuit, combined with other innovations, allowed computers to shrink considerably in size
while gaining in memory.

Today, the IC is still a vital part of many different types of electronic devices. It is recognized as one of
the most important inventions of the 20th century and has led to the elevation of Jack Kilby and Robert
Noyce to be considered the inventors of the integrated circuit. While Kilby was the first, Noyce added
the right elements to make the IC work properly and give it the potential that it has
demonstrated over the decades.

All ICs consist of both active and passive components, and the connections between them are so
small that it may be impossible to see them even through a microscope. All the components (active
and passive) are interconnected through the fabrication process.

In a circuit diagram there is no common symbol for representing an IC. ICs are mostly available as
dual in-line packages, metal cans, and ceramic flat packs. They may have 8, 10, or 14 pins,
depending on the specifications of the manufacturer. However, the number of complete logic gates in a
single IC package may be described as:
SSI (small-scale integration)
MSI (medium-scale integration)
LSI (large-scale integration)
VLSI (very-large-scale integration)
HISTORY OF FABRICATION TECHNOLOGY
Early developments of the integrated circuit go back to 1949, when German engineer Werner
Jacobi[4] (Siemens AG)[5] filed a patent for an integrated-circuit-like semiconductor amplifying
device[6] showing five transistors on a common substrate in a 3-stage amplifier arrangement.
Jacobi disclosed small and cheap hearing aids as typical industrial applications of his patent. An
immediate commercial use of his patent has not been reported.
The idea of the integrated circuit was conceived by Geoffrey Dummer (1909–2002), a radar
scientist working for the Royal Radar Establishment of the British Ministry of Defence. Dummer
presented the idea to the public at the Symposium on Progress in Quality Electronic Components
in Washington, D.C. on 7 May 1952.[7] He gave many symposia publicly to propagate his ideas
and unsuccessfully attempted to build such a circuit in 1956.
A precursor idea to the IC was to create small ceramic squares (wafers), each containing a
single miniaturized component. Components could then be integrated and wired into a
bidimensional or tridimensional compact grid. This idea, which seemed very promising in 1957,
was proposed to the US Army by Jack Kilby and led to the short-lived Micromodule Program
(similar to 1951's Project Tinkertoy).[8][9][10] However, as the project was gaining momentum, Kilby
came up with a new, revolutionary design: the IC.

Jack Kilby's original integrated circuit

Newly employed by Texas Instruments, Kilby recorded his initial ideas concerning the integrated
circuit in July 1958, successfully demonstrating the first working integrated example on 12
September 1958.[11] In his patent application of 6 February 1959,[12] Kilby described his new
device as "a body of semiconductor material … wherein all the components of the electronic
circuit are completely integrated."[13] The first customer for the new invention was the US Air
Force.[14] Kilby won the 2000 Nobel Prize in Physics for his part in the invention of the integrated
circuit.[15]
Half a year after Kilby, Robert Noyce at Fairchild Semiconductor developed a new variety of
integrated circuit, more practical than Kilby's implementation. Noyce's design was made
of silicon, whereas Kilby's chip was made of germanium. The basis for Noyce's silicon IC was
the planar process, developed in early 1959 by Jean Hoerni, who was in turn building
on Mohamed Atalla's silicon surface passivation method developed in 1957.[16][17] Noyce also
credited Kurt Lehovec of Sprague Electric for the principle of p–n junction isolation, another key
concept behind the IC.[18] This isolation allows each transistor to operate independently despite
being part of the same piece of silicon.
Following the invention of the MOSFET (metal-oxide-silicon field-effect transistor), also known as
MOS, by Mohamed Atalla and Dawon Kahng at Bell Labs in 1959, the earliest experimental
MOS IC was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in
1962.[20] General Microelectronics later introduced the first commercial MOS integrated circuit in
1964,[21] a 120-transistor shift register developed by Robert Norman. The MOSFET has since
become the most critical device component in modern ICs.
The list of IEEE milestones includes the first integrated circuit by Kilby in 1958, Hoerni's planar
process and Noyce's silicon IC in 1959, and the MOSFET by Atalla and Kahng in 1959.[24]
Following the invention of the self-aligned gate (silicon-gate) MOS transistor by Robert Kerwin,
Donald Klein and John Sarace at Bell Labs in 1967,[25] the first silicon-gate MOS IC technology
with self-aligned gates, the basis of all modern CMOS integrated circuits, was developed at
Fairchild Semiconductor. The technology was developed by Italian physicist Federico Faggin in
1968. In 1970, he joined Intel in order to develop the first single-chip central processing
unit (CPU) microprocessor, the Intel 4004, for which he received the National Medal of
Technology and Innovation in 2010. The 4004 was designed by Busicom's Masatoshi Shima and
Intel's Ted Hoff in 1969, but it was Faggin's improved design in 1970 that made it a reality.[26] In
the early 1970s, MOS integrated circuit technology allowed the integration of more than 10,000
transistors in a single chip.[27]
MICROELECTRONICS LAB DESIGN AND REQUIREMENTS

IC fabrication requires a specially designed clean environment called a cleanroom. Because device
dimensions are very small, even a tiny dust particle can disrupt device processing, so a cleanroom is
a must for a microelectronics laboratory.
CLEANROOMS DEFINED
It is a common misconception that cleanrooms clean things, but this is not the case.
A cleanroom can be defined most simply as an enclosed environment with a low
level of pollutants like dust, airborne microorganisms, aerosol particles, and chemical
vapors. Cleanrooms can only be as clean as the people, products, packaging and
cleaning materials introduced into them. The facilities, people, tools, cleaning
chemicals, and the product being manufactured can all contribute to contamination.
All potential sources of contamination must be tightly controlled. A quick overview of
the different types of cleanrooms that medical device manufacturers may have in
their facilities may be helpful.
Cleanrooms are widely used in semiconductor manufacturing, biotechnology, the life
sciences—anywhere there are products or processes that are sensitive to
environmental contamination. The air entering a cleanroom is filtered to exclude
dust; once inside, it is recirculated constantly through high efficiency particulate air
(HEPA) and/or ultra-low penetration air (ULPA) filters to remove particulate
contaminants generated inside the cleanroom itself.
In a typical city, the outdoor atmosphere contains 35 million particles that are 0.5
microns in diameter or larger per cubic meter, which corresponds to an International
Organization for Standardization (ISO) Class 9 cleanroom. In contrast, an ISO Class
1 cleanroom allows no particles in that size range and only 12 particles that are 0.3
microns or smaller per cubic meter.
In order for a cleanroom to be certified by the U.S. Food and Drug Administration
(FDA), it must meet the standards for controlled environments set forth in FED-STD-
209E or ISO 14644-1. Although the US FED STD 209E was officially cancelled more
than a decade ago, the classification numbers are still widely used.

CLEANROOM DESIGN
Because cleanrooms have complex mechanical systems and high construction, operating,
and energy costs, it is important to perform the cleanroom design in a methodical
way. This article will present a step-by-step method for evaluating and designing
cleanrooms, factoring in people/material flow, space cleanliness classification,
space pressurization, space supply airflow, space air exfiltration, space air balance,
variables to be evaluated, mechanical system selection, heating/cooling load
calculations, and support space requirements.

Step One: Evaluate Layout for People/Material Flow

Figure 1

Many manufacturing processes need the very stringent environmental conditions
provided by a cleanroom.

It is important to evaluate the people and material flow within the cleanroom.
Cleanroom workers are a cleanroom’s largest contamination source and all critical
processes should be isolated from personnel access doors and pathways.

The most critical spaces should have a single access to prevent the space from
being a pathway to other, less critical spaces. Some pharmaceutical and
biopharmaceutical processes are susceptible to cross-contamination from other
pharmaceutical and biopharmaceutical processes. Process cross-contamination
needs to be carefully evaluated for raw material inflow routes and containment,
material process isolation, and finished product outflow routes and containment.
Figure 1 is an example of a bone cement facility that has both critical processes
(“Solvent Packaging”, “Bone Cement Packaging”) spaces with a single access and
airlocks as buffers to high personnel traffic areas (“Gown”, “Ungown”).

Step Two: Determine Space Cleanliness Classification

Figure 2

To be able to select a cleanroom classification, it is important to know the primary
cleanroom classification standard and what the particulate performance
requirements are for each cleanliness classification. The Institute of Environmental
Science and Technology (IEST) Standard 14644-1 provides the different
cleanliness classifications (1, 10, 100, 1000, 10000, and 100000) and the allowable
number of particles at different particle sizes.
For example, a Class 100 cleanroom is allowed a maximum of 3,500 particles/cu ft
at 0.1 microns and larger, 100 particles/cu ft at 0.5 microns and larger, and 24
particles/cu ft at 1.0 microns and larger. Table 1 provides the allowable airborne
particle density per cleanliness classification table.
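These published class limits follow a simple formula. A minimal sketch, assuming the ISO 14644-1 relation (the article quotes only the resulting numbers, not the formula): the maximum permitted count of particles of size ≥ D micrometers per cubic meter in an ISO Class N room is 10^N · (0.1/D)^2.08.

```python
# Sketch: the ISO 14644-1 class-limit formula (an assumption drawn from
# the standard itself, not stated in the article):
#   Cn = 10**N * (0.1 / D) ** 2.08
# for particles of size >= D micrometers per cubic meter, ISO Class N.

def iso_limit_per_m3(iso_class: int, particle_um: float) -> int:
    """Maximum particles/m^3 at >= particle_um for a given ISO class."""
    return round(10 ** iso_class * (0.1 / particle_um) ** 2.08)

# ISO 5 (roughly FED-STD-209E Class 100):
print(iso_limit_per_m3(5, 0.5))  # close to the 3,520/m^3 in the published tables
print(iso_limit_per_m3(5, 1.0))  # 832/m^3, matching the ISO 5 row
```

Standards documents round these computed values, so the formula reproduces the tables to within a fraction of a percent.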

Space cleanliness classification has a substantial impact on a cleanroom’s
construction, maintenance, and energy cost. It is important to carefully evaluate
reject/contamination rates at different cleanliness classifications and regulatory
agency requirements, such as the Food and Drug Administration (FDA). Typically,
the more sensitive the process, the more stringent cleanliness classification should
be used. Table 2 provides cleanliness classifications for a variety of manufacturing
processes.
Your manufacturing process may need a more stringent cleanliness class
depending upon its unique requirements. Be careful when assigning cleanliness
classifications to each space; there should be no more than two orders of
magnitude difference in cleanliness classification between connecting spaces. For
example, it is not acceptable for a Class 100,000 cleanroom to open into a Class
100 cleanroom, but it is acceptable for a Class 100,000 cleanroom to open into a
Class 1,000 cleanroom.

Looking at our bone cement packaging facility (Figure 1), “Gown”, “Ungown” and
“Final Packaging” are less critical spaces and have a Class 100,000 (ISO 8)
cleanliness classification; “Bone Cement Airlock” and “Sterile Airlock” open to
critical spaces and have a Class 10,000 (ISO 7) cleanliness classification; “Bone
Cement Packaging” is a dusty critical process and has a Class 10,000 (ISO 7)
cleanliness classification; and “Solvent Packaging” is a very critical process and is
performed in Class 100 (ISO 5) laminar flow hoods in a Class 1,000 (ISO 6)
cleanroom.

Step Three: Determine Space Pressurization

Figure 3

Maintaining a positive air space pressure, in relation to adjoining dirtier cleanliness
classification spaces, is essential in preventing contaminants from infiltrating into a
cleanroom. It is very difficult to consistently maintain a space’s cleanliness
classification when it has neutral or negative space pressurization. What should
the space pressure differential be between spaces? Various studies evaluated
contaminant infiltration into a cleanroom vs. space pressure differential between
the cleanroom and adjoining uncontrolled environment. These studies found a
pressure differential of 0.03 to 0.05 in w.g. to be effective in reducing contaminant
infiltration. Space pressure differentials above 0.05 in. w.g. do not provide
substantially better contaminant infiltration control than 0.05 in. w.g.

Keep in mind, a higher space pressure differential has a higher energy cost and is
more difficult to control. Also, a higher pressure differential requires more force in
opening and closing doors. The recommended maximum pressure differential
across a door is 0.1 in. w.g. At 0.1 in. w.g., a 3-foot by 7-foot door requires 11
pounds of force to open and close. A cleanroom suite may need to be reconfigured
to keep the static pressure differential across doors within acceptable limits.
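The 11-pound figure quoted above can be checked directly: the force on a closed door is simply the pressure differential acting over the door area. A minimal sketch, where the unit conversion constants are standard values assumed here rather than given in the article:

```python
# Sketch: force needed to hold/open a door against a cleanroom pressure
# differential. Assumptions (not from the article): 1 in. w.g. ~= 249.09 Pa,
# 1 lbf ~= 4.448 N, and the full differential acts over the door area.

IN_WG_TO_PA = 249.089    # pascals per inch of water gauge
N_TO_LBF = 1 / 4.44822   # pounds-force per newton
FT2_TO_M2 = 0.092903     # square meters per square foot

def door_force_lbf(width_ft: float, height_ft: float, dp_in_wg: float) -> float:
    """Force (lbf) exerted by a pressure differential on a door."""
    area_m2 = width_ft * height_ft * FT2_TO_M2
    force_n = dp_in_wg * IN_WG_TO_PA * area_m2
    return force_n * N_TO_LBF

# A 3 ft x 7 ft door at 0.1 in. w.g. -- roughly the 11 lb quoted above.
print(round(door_force_lbf(3, 7, 0.1), 1))
```

This also shows why the 0.5 in. w.g. structural caution later in the step matters: force scales linearly with the differential, so five times the pressure means five times the load on doors and partitions.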

Our bone cement packaging facility is being built within an existing warehouse,
which has a neutral space pressure (0.0 in. w.g.). The air lock between the
warehouse and “Gown/Ungown” does not have a space cleanliness classification
and will not have a designated space pressurization. “Gown/Ungown” will have a
space pressurization of 0.03 in. w.g. “Bone Cement Air Lock” and “Sterile Air
Lock” will have a space pressurization of 0.06 in. w.g. “Final Packaging” will have
a space pressurization of 0.06 in. w.g. “Bone Cement Packaging” will have a space
pressurization of 0.03 in. w.g., and a lower space pressure than “Bone Cement Air
Lock” and “Final Packaging” in order to contain the dust generated during
packaging.

The air infiltrating into “Bone Cement Packaging” is coming from a space with
the same cleanliness classification. Air infiltration should not go from a dirtier
cleanliness classification space to a cleaner cleanliness classification space.
“Solvent Packaging” will have a space pressurization of 0.11 in. w.g. Note, the
space pressure differential between the less critical spaces is 0.03 in. w.g. and the
space differential between the very critical “Solvent Packaging” and “Sterile Air
Lock” is 0.05 in. w.g. The 0.11 in. w.g. space pressure will not require special
structural reinforcements for walls or ceilings. Space pressures above 0.5 in. w.g.
should be evaluated for potentially needing additional structural reinforcement.
Step Four: Determine Space Supply Airflow

Figure 4

The space cleanliness classification is the primary variable in determining a
cleanroom’s supply airflow. Looking at Table 3, each cleanliness classification has an air
change rate range. For example, a Class 100,000 cleanroom has a 15 to 30 ach range. The
cleanroom’s air change rate should take the anticipated activity within the
cleanroom into account. A Class 100,000 (ISO 8) cleanroom having a low
occupancy rate, low particle generating process, and positive space pressurization
in relation to adjacent dirtier cleanliness spaces might use 15 ach, while the same
cleanroom having high occupancy, frequent in/out traffic, high particle generating
process, or neutral space pressurization will probably need 30 ach.

The designer needs to evaluate the specific application and determine the air
change rate to be used. Exhaust airflows and air infiltration through doors/openings
also affect the supply airflow. IEST has published recommended air
change rates in Standard 14644-4.
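The air change rate translates into a supply airflow through the room volume: supply cfm equals room volume in cubic feet times air changes per hour, divided by 60 minutes. A minimal sketch; the room dimensions in the example are hypothetical, not taken from the article:

```python
# Sketch: converting an air change rate (ach) into supply airflow (cfm).
# supply cfm = room volume (cu ft) * air changes per hour / 60 min.

def supply_cfm(floor_area_sqft: float, ceiling_ft: float, ach: float) -> float:
    """Supply airflow in cfm that achieves the given air change rate."""
    volume_cuft = floor_area_sqft * ceiling_ft
    return volume_cuft * ach / 60.0

# A hypothetical 20 ft x 15 ft room with a 9 ft ceiling at 30 ach
# (the top of the Class 100,000 range discussed above):
print(round(supply_cfm(20 * 15, 9, 30)))  # 1350 cfm
```

Doubling the air change rate doubles the supply airflow, which is why the choice between, say, 15 and 30 ach has a direct fan-energy cost.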

Looking at Figure 1, “Gown/Ungown” has the most in/out travel but is not a
process-critical space, resulting in 20 ach. “Sterile Air Lock” and “Bone Cement
Packaging Air Lock” are adjacent to critical production spaces, and in the case of
the “Bone Cement Packaging Air Lock”, the air flows from the air lock into the
packaging space. Though these air locks have limited in/out travel and no
particulate-generating processes, their critical importance as a buffer between
“Gown/Ungown” and manufacturing processes results in their having 40 ach.

“Final Packaging” places the bone cement/solvent bags into a secondary package,
which is not critical, and results in a 20 ach rate. “Bone Cement Packaging” is a
critical process and has a 40 ach rate. “Solvent Packaging” is a very critical process
which is performed in Class 100 (ISO 5) laminar flow hoods within a Class 1,000
(ISO 6) cleanroom. “Solvent Packaging” has very limited in/out travel and low
process particulate generation, resulting in a 150 ach rate.

Step Five: Determine Space Air Exfiltration Flow
The majority of cleanrooms are under positive pressure, resulting in planned air
exfiltrating into adjoining spaces having lower static pressure and unplanned air
exfiltration through electrical outlets, light fixtures, window frames, door frames,
wall/floor interface, wall/ceiling interface, and access doors. It is important to
understand rooms are not hermetically sealed and do have leakage. A well-sealed
cleanroom will have a 1% to 2% volume leakage rate. Is this leakage bad? Not
necessarily.

First, it is impossible to have zero leakage. Second, if using active supply, return,
and exhaust air control devices, there needs to be a minimum of 10% difference
between supply and return airflow to statically decouple the supply, return, and
exhaust air valves from each other. The amount of air exfiltrating through doors is
dependent upon the door size, the pressure differential across the door, and how
well the door is sealed (gaskets, door drops, closure).

We know the planned infiltration/exfiltration air goes from one space to the other.
Where does the unplanned exfiltration go? The air relieves into the wall stud
space and out the top. Looking at our example project (Figure 1), the air
exfiltration through the 3- by 7-foot door is 190 cfm at a differential static
pressure of 0.03 in. w.g. and 270 cfm at a differential static pressure of 0.05 in.
w.g.
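One common way to relate door leakage to pressure differential is the empirical crack-flow relation Q = 2610 · A · √ΔP, with Q in cfm, effective leakage area A in sq ft, and ΔP in in. w.g. This formula is an assumption drawn from standard cleanroom/HVAC practice, not something stated in the article; back-solving it from the 190 cfm figure gives the implied leakage area around the door:

```python
# Sketch: door leakage via the empirical crack-flow relation
#   Q (cfm) = 2610 * A (sq ft) * sqrt(dP in in. w.g.)
# The 2610 coefficient is an assumption from common HVAC practice.
import math

def leakage_area_sqft(q_cfm: float, dp_in_wg: float) -> float:
    """Back-solve the effective leakage area from a measured flow."""
    return q_cfm / (2610 * math.sqrt(dp_in_wg))

# Using the article's 190 cfm at 0.03 in. w.g. figure, the implied
# effective leakage area around the door is roughly 0.4 sq ft.
print(round(leakage_area_sqft(190, 0.03), 2))
```

The square-root dependence also explains why pushing the differential well past 0.05 in. w.g. buys comparatively little extra protection: flow grows slower than the pressure does.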

Step Six: Determine Space Air Balance


Space air balance consists of making all the airflow into the space (supply,
infiltration) equal to all the airflow leaving the space (exhaust, exfiltration,
return). Looking at the bone cement facility space air balance (Figure 2),
“Solvent Packaging” has 2,250 cfm supply airflow and 270 cfm of air exfiltration
to “Sterile Air Lock”, resulting in a return airflow of 1,980 cfm. “Sterile Air
Lock” has 290 cfm of supply air, 270 cfm of infiltration from “Solvent Packaging”,
and 190 cfm exfiltration to “Gown/Ungown”, resulting in a return airflow of 370
cfm.

“Bone Cement Packaging” has 600 cfm supply airflow, 190 cfm of air infiltration
from “Bone Cement Air Lock”, 300 cfm dust collection exhaust, and 490 cfm of
return air. “Bone Cement Air Lock” has 380 cfm supply air, 190 cfm exfiltration to
“Bone Cement Packaging”, and 190 cfm exfiltration to “Gown/Ungown”. “Final
Packaging” has 670 cfm supply air, 190 cfm exfiltration to “Gown/Ungown”, and
480 cfm of return air. “Gown/Ungown” has 480 cfm of supply air, 570 cfm of
infiltration, 190 cfm of exfiltration, and 860 cfm of return air.

We have now determined the cleanroom supply, infiltration, exfiltration, exhaust,
and return airflows. The final space return airflow will be adjusted during start-up
for unplanned air exfiltration.
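The bookkeeping above can be sketched as a simple balance check, using the article's own figures for the bone cement facility:

```python
# Sketch: a space air balance -- air in (supply + infiltration) must equal
# air out (return + exfiltration + exhaust), so the return airflow is
# whatever is left over.

def return_cfm(supply, infiltration=0, exfiltration=0, exhaust=0):
    """Return airflow (cfm) that balances the space."""
    return supply + infiltration - exfiltration - exhaust

# "Solvent Packaging": 2,250 cfm supply, 270 cfm exfiltration -> 1,980 cfm
print(return_cfm(2250, exfiltration=270))
# "Sterile Air Lock": 290 supply + 270 infiltration - 190 exfiltration -> 370 cfm
print(return_cfm(290, infiltration=270, exfiltration=190))
```

Running the same check on every space is a quick way to catch transcription errors in an airflow schedule before balancing starts.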

Step Seven: Assess Remaining Variables


Other variables needing to be evaluated include:

 Temperature: Cleanroom workers wear smocks or full bunny suits over their
regular clothes to reduce particulate generation and potential contamination.
Because of their extra clothing, it is important to maintain a lower space
temperature for worker comfort. A space temperature range between 66°F
and 70°F will provide comfortable conditions.
 Humidity: Due to a cleanroom’s high airflow, a large electrostatic charge is
developed. When the ceiling and walls have a high electrostatic charge and
space has a low relative humidity, airborne particulate will attach itself to
the surface. When the space relative humidity increases, the electrostatic
charge is discharged and all the captured particulate is released in a short
time period, causing the cleanroom to go out of specification. Having high
electrostatic charge can also damage electrostatic discharge sensitive
materials. It is important to keep the space relative humidity high enough to
reduce the electrostatic charge build-up. An RH (relative humidity) of 45%
(±5%) is considered the optimal humidity level.
 Laminarity: Very critical processes might require laminar flow to reduce the
chance of contaminants getting into the air stream between the HEPA filter
and the process. IEST Standard #IEST-WG-CC006 provides airflow
laminarity requirements.
 Electrostatic Discharge: Beyond the space humidification, some processes
are very sensitive to electrostatic discharge damage and it is necessary to
install grounded conductive flooring.
 Noise Levels and Vibration: Some precision processes are very sensitive to
noise and vibration.
Step Eight: Determine Mechanical System Layout
A number of variables affect a cleanroom’s mechanical system layout: space
availability, available funding, process requirements, cleanliness classification,
required reliability, energy cost, building codes, and local climate. Unlike normal
A/C systems, cleanroom A/C systems have substantially more supply air than
needed to meet cooling and heating loads.

Class 100,000 (ISO 8) and lower-ach Class 10,000 (ISO 7) cleanrooms can have
all the air go through the AHU. Looking at Figure 3, the return air and outside air
are mixed, filtered, cooled, reheated, and humidified before being supplied to
terminal HEPA filters in the ceiling. To prevent contaminant recirculation in the
cleanroom, the return air is picked up by low wall returns. For higher-ach Class 10,000
(ISO 7) and cleaner cleanrooms, the airflows are too high for all the air to go
through the AHU. Looking at Figure 4, a small portion of the return air is sent back
to the AHU for conditioning. The remaining air is returned to the recirculation fan.

Step Nine: Perform Heating/Cooling Calculations
When performing the cleanroom heating/cooling calculations, take the following
into consideration:

 Use the most conservative climate conditions (99.6% heating design, 0.4%
drybulb/median wetbulb cooling design, and 0.4% wetbulb/median drybulb
cooling design data);
 Include filtration into calculations;
 Include humidifier manifold heat into calculations;
 Include process load into calculations;
 Include recirculation fan heat into calculations.
Step Ten: Fight for Mechanical Room Space
Cleanrooms are mechanically and electrically intensive. As the cleanroom’s
cleanliness classification becomes cleaner, more mechanical infrastructure space is
needed to provide adequate support to the cleanroom. Using a 1,000-sq-ft
cleanroom as an example, a Class 100,000 (ISO 8) cleanroom will need 250 to 400
sq ft of support space, a Class 10,000 (ISO 7) cleanroom will need 250 to 750 sq ft
of support space, a Class 1,000 (ISO 6) cleanroom will need 500 to 1,000 sq ft of
support space, and a Class 100 (ISO 5) cleanroom will need 750 to 1,500 sq ft of
support space.

The actual support square footage will vary depending upon the AHU airflow and
complexity (Simple: filter, heating coil, cooling coil, and fan; Complex: sound
attenuator, return fan, relief air section, outside air intake, filter section, heating
section, cooling section, humidifier, supply fan, and discharge plenum) and
number of dedicated cleanroom support systems (exhaust, recirculation air units,
chilled water, hot water, steam, and DI/RO water). It is important to communicate
the required mechanical equipment space square footage to the project architect
early in the design process.

FEATURES OF CLEANROOM
ISO 14644-1 Cleanroom Standards (maximum particles/m³)

Class     >=0.1 µm    >=0.2 µm    >=0.3 µm     >=0.5 µm      >=1 µm     >=5 µm    FED STD 209E equivalent
ISO 1           10           2
ISO 2          100          24          10            4
ISO 3        1,000         237         102           35           8                Class 1
ISO 4       10,000       2,370       1,020          352          83                Class 10
ISO 5      100,000      23,700      10,200        3,520         832         29     Class 100
ISO 6    1,000,000     237,000     102,000       35,200       8,320        293     Class 1,000
ISO 7                                           352,000      83,200      2,930    Class 10,000
ISO 8                                         3,520,000     832,000     29,300    Class 100,000
ISO 9                                        35,200,000   8,320,000    293,000    Room air

BS 5295 Cleanroom Standards (maximum particles/m³)

Class      >=0.5 µm     >=1 µm     >=5 µm    >=10 µm    >=25 µm
Class 1       3,000                      0          0          0
Class 2     300,000                  2,000         30
Class 3   1,000,000                 20,000      4,000        300
Class 4                             20,000     40,000      4,000


Device process technology

Device process technology is the particular manufacturing method used to make silicon chips, measured by
how small the transistors are. The driving force behind the design of integrated circuits is miniaturization,
and process technology boils down to the never-ending goal of smaller. Smaller means more computing
power per square inch, and smallness enables the design of ultra-tiny chips that can be placed almost anywhere.

Feature Size Measured in Nanometers


The size of the features (the elements that make up the structures on a chip) is measured in nanometers. A
22 nm process technology refers to features 22 nm, or 0.022 µm, in size. The feature size is also called a
"technology node" or "process node." Early chips were measured in micrometers (see table below).

Historically, the feature size referred to the length of the silicon channel between source and drain in field
effect transistors (see FET). Today, the feature size is typically the smallest element in the transistor or the
size of the gate.

From 1,000 Down to 90


The feature size of the 486 chip in 1989 was 1,000 nm (one micron). By 2003, it was 90 nm, a reduction of a
little less than one millionth of a meter. What may seem like a minuscule reduction took thousands of man
years and billions of dollars worth of R&D. In the table below, note the dramatic reductions in the early
years of semiconductors.

It's Not Always Smaller


In the semiconductor industry, the goal has always been to pack more transistors on the same square
millimeter of silicon. At any given time, the smallest feature sizes are found on the latest high-end CPU
chips, which can cost several hundred dollars. However, 8-bit and 16-bit microcontrollers (MCUs) are used
by the billions and sell for only a couple of dollars. They have far fewer transistors and do not need to be so
dense. A USD $2 microcontroller may have feature sizes similar to those of high-end chips a decade or two
earlier. See microcontroller.

A Miracle of Miniaturization
To understand how tiny these transistor elements are, using 10 nm feature sizes as an example, eight
thousand of them laid side by side are equal to the cross section of a human hair. See half-node and active
area.
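The arithmetic behind that comparison is simple; a hair cross-section of roughly 80 µm is assumed here purely for illustration:

```python
# How many 10 nm features fit across a human hair (~80 µm assumed)?
feature_nm = 10
hair_nm = 80 * 1000            # 80 µm expressed in nanometers
features_across = hair_nm // feature_nm   # the "eight thousand" from the text
```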
Semiconductor Feature Sizes
(approximate for all vendors)

Year     Nanometers (nm)   Micrometers (µm)
1957     120,000           120.0
1963     30,000            30.0
1971     10,000            10.0
1974     6,000             6.0
1976     3,000             3.0
1982     1,500             1.5
1985     1,300             1.3
1989     1,000             1.0
1993     600               0.6
1996     350               0.35
1998     250               0.25
1999     180               0.18
2001     130               0.13
2003     90                0.09
2005     65                0.065
2008     45                0.045
2010     32                0.032
2012     22                0.022
2014     14                0.014
2017     10                0.010
2018     7                 0.007
2019     5                 0.005
Future   1                 0.001 (non-silicon methods)
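One pattern worth noting in the later rows of the table: each node is roughly 0.7x the previous one, which halves the area of a given circuit (0.7 squared is about 0.5). A quick check over the 90 nm through 5 nm rows:

```python
# Successive nodes from the table above (90 nm through 5 nm).
nodes_nm = [90, 65, 45, 32, 22, 14, 10, 7, 5]

# Each step shrinks the linear feature size by roughly 0.7x ...
ratios = [later / earlier for earlier, later in zip(nodes_nm, nodes_nm[1:])]

# ... so the area of a fixed circuit roughly halves per node (0.7**2 ~ 0.49).
area_scaling = [r ** 2 for r in ratios]
```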
MOSFET TECHNOLOGY

Metal-oxide-semiconductor (MOS) technologies have made tremendous progress in recent years.
Presently, MOS technology is the dominant LSI technology. This article reviews the basic principles
of operation of MOS transistors and inverters, and the state of the art in processing, together with
advanced transistor models for CAD applications. MOS transistors have been scaled from long to
short channels. In line with this development, a survey of long- and short-channel modelling is
presented, including short-channel phenomena like hot-electron velocity-field characteristics and
charge control by the drain. The implementation of advanced devices is described in an outline of
the most important processes, starting with metal-gate and silicon-gate NMOS, which has been
established as a standard technology. CMOS technology has become a competing VLSI technology,
showing large potential for achieving minimum power density, high speed, and yet sufficient noise
margins, along with various compatibly integrated devices such as field-effect and bipolar
transistors. Therefore the main CMOS technology concepts are pointed out in this paper. Novel
trends in processing techniques and some new MOS devices are reported to indicate the
possibilities for future advances in MOS technology.

It is shown that bipolar circuits can continue to play an important role in high
performance LSI and VLSI circuits, because power supply voltages and logic
swings can be minimized independently of transistor dimensions, and because the
speed degradation due to on-chip wiring capacitances is less severe than in
MOS/MES-FET types of circuit. General performance improvements (in speed and
packing density) of logic gates are obtained by increasing transistor fT, and
decreasing parasitic capacitances, series resistances and device areas, by using
oxide isolation, self-aligned techniques and polysilicon electrodes. Fast switching
diodes (such as Schottky barrier diodes and lateral poly-diodes) improve the
flexibility of circuit design. Logic circuits (such as I2L, LS, DTL, ISL, STL, ECL
and NTL), which already perform in LSI and VLSI circuits or are realistic
proposals for them, are discussed.

Most LSI/VLSI digital memory and microprocessor circuits are based on
MOS technology. More transistor and circuit functions can be achieved on a single
chip with MOS technology, which is its considerable advantage over
bipolar circuits. The reasons for this advantage of MOS technology are:
 An individual MOS transistor demands less chip area, which results in more
functions in less area.
 Critical defects per unit chip area are lower for a MOS transistor because its
fabrication involves fewer steps.
 Dynamic circuit techniques are practical in MOS technology, but not in bipolar
technology. A dynamic circuit technique uses fewer transistors to realise a
circuit function.

For the reasons above, it is considerably cheaper to use MOS technology
than bipolar technology.

The three types of MOS process are PMOS, NMOS and Complementary MOS. Let's take
a look at brief descriptions below.

p-Channel MOS or PMOS Technology


This MOS process operates at very low data rates, say 200 kbps to 1 Mbps. PMOS is
also considered the first MOS process; it required special supply voltages such as
-9 volts, -12 volts and so on.

n-Channel MOS or NMOS Technology


We can say this is a second-generation MOS process, after PMOS, with
considerably improved data rates, say up to 2 Mbps, which resulted in the
construction of LSI circuits running from a single standard +5 volt supply. NMOS
circuit speed increases sharply as the internal dimensions of devices are reduced,
which is contrary to (and an advantage over) bipolar circuits, whose speed increases
only gradually. The difference in performance between the two has steadily become
smaller for both LSI and VLSI because of steady improvements in pattern definition
capability.

Have you ever heard of self-aligned silicon-gate NMOS? It is a commonly used and
popular version of MOS technology. Nowadays, a technique named local
oxidation is used in this process to improve circuit density and performance. HMOS,
SMOS and XMOS are the names commonly used by manufacturers for it. Older
versions of the process, like metal-gate NMOS and PMOS, are no longer used
for new designs. A second layer of polysilicon may be added to the process for
important memory applications.

Complementary MOS Technology


So you might have already got an idea from the name "Complementary MOS": it is a
combination of both n-channel and p-channel devices on one chip. Compared to the
other two processes, CMOS is complex to fabricate and requires a larger chip area.
The biggest advantage of a CMOS circuit is reduced power consumption (less than
NMOS); it is designed for zero power consumption in the steady state for both logic
states. As you may already know, CMOS circuits are widely used in digital equipment
such as watches, computers, etc.

CMOS offers comparatively higher circuit density and high-speed performance (used
in VLSI), and this is the primary reason why CMOS is still preferred despite its
complex manufacturing process. Memories and microprocessors made in CMOS
usually employ the silicon gate process.

There are variations of MOS technology which offer either performance or
density advantages over the standard process. Some of these are VMOS
(V-groove MOS), DSA (Diffusion Self-Aligned), SOS (Silicon on Sapphire), D-MOS
(Double-diffused MOS), etc.

Simple MOSFET Structures


MOS technology basically comprises three processes: p-channel MOS, n-channel MOS
and the CMOS process. The basic purpose of each process is to enhance MOSFET
performance over the others, e.g. lower power consumption, higher power
capability, reliability improvements, response speed, etc.

PMOS Structure
The PMOS is the first device made in metal-gate p-channel technology. PMOS in fact
is an older version of the MOS process which is not used nowadays. A cross-sectional
view of the PMOS structure is shown below.

The starting material is single-crystal Si that is doped n-type with phosphorus or
antimony at a doping level on the order of 10^15 atoms/cm3.

The process is as follows: first grow a relatively thick oxide layer, say 1.5 microns, and
then etch windows for the source and drain diffusions. As the next step, boron-dope
the source and drain regions to a depth of 2 to 4 micrometers. Next, form the
gate oxide, which serves as the dielectric used for turning the MOS device ON and
OFF. The entire circuit is then metallized and etched so that there is metal over the
gate, drain, and source. The metal layer should be 1 to 2 micrometers thick and
is deposited using an electron beam evaporator.
NMOS Structure:
An NMOS structure also follows a similar pattern or sequence, as shown in the
cross-sectional figure above; it is similar to PMOS except for the n+ regions, which
are diffused into the p-type silicon substrate.
IC FABRICATION TECHNOLOGY: CLEANING

The first step in integrated circuit (IC) fabrication is preparing the high-purity single-crystal Si wafer.
This is the starting input to the fab. Typically, "Si wafer" refers to a single crystal of Si with a specific
orientation, dopant type, and resistivity (determined by dopant concentration). Typically, Si (100) or
Si (111) wafers are used. The numbers (100) and (111) refer to the orientation of the plane parallel
to the surface. The wafer should have structural defects, like dislocations, below a certain
permissible level and an impurity (undesired) concentration of the order of ppb (parts per billion).
Consider the specs (specifications) of a 300 mm wafer shown in table 1 below. The thickness of the
wafer is less than 1 mm, while its diameter is 300 mm. Also, the wafers must have the (100) plane
parallel to the surface, to within a 2° deviation, and typical impurity levels should be of the order of
ppm or less, with metallic impurities of the order of ppb. For doped wafers, there should be specific
amounts of the desired dopants (p or n type) to get the required resistivity.
Single crystal Si manufacture

There are two main techniques for converting polycrystalline EGS into a single crystal ingot, which
is then used to obtain the final wafers.

1. Czochralski technique (CZ) - this is the dominant technique for manufacturing single crystals. It
   is especially suited for the large wafers that are currently used in IC fabrication.
2. Float zone technique - this is mainly used for small-sized wafers. The float zone technique is
   used for producing specialty wafers that have low oxygen impurity concentration.

After the single crystal is obtained, this needs to be further processed to produce the wafers. For
this, the wafers need to be shaped and cut. Usually, industrial grade diamond tipped saws are used
for this process. The shaping operations consist of two steps: 1. The seed and tang ends of the ingot
are removed. 2. The surface of the ingot is ground to get a uniform diameter across the length of
the ingot. Before further processing, the ingots are checked for resistivity and orientation. Resistivity
is checked by a four point probe technique and can be used to confirm the dopant concentration.
This is usually done along the length of the ingot to ensure uniformity. Orientation is measured by x-
ray diffraction at the ends (after grinding). After the orientation and resistivity checks, one or more
flats are ground along the length of the ingot. There are two types of flats. 1. Primary flat - this is
ground relative to a specific crystal direction. This acts as a visual reference to the orientation of the
wafer. 2. Secondary flat - this is used for identification of the wafer: dopant type and orientation. The
different flat locations are shown in figure 7. p-type (111) Si has only one flat (primary flat) while all
other wafer types have two flats (with different orientations of the secondary flats). The primary flat
is typically longer than the secondary flat. Consider some typical specs of 150 mm wafers, shown in
table 4. Bow refers to the flatness of the wafer while ∆t refers to the thickness variation across the
wafer. After making the flats, the individual wafers are sliced per the required thickness. Inner
diameter (ID) slicing is the most commonly used technique. The cutting edge is located on the inside
of the blade, as seen in figure 8. Larger wafers are usually thicker, for mechanical integrity. After
cutting, the wafers are chemically etched to remove any damaged and contaminated regions. This is
usually done in an acid bath with a mixture of hydrofluoric acid, nitric acid, and acetic acid. After
etching, the surfaces are polished, first a rough abrasive polish, followed by a chemical mechanical
polishing (CMP) procedure. In CMP, a slurry of fine SiO2 particles suspended in aqueous NaOH
solution is used. The pad is usually a polyester material. Polishing happens both due to mechanical
abrasion and also reaction of the silicon with the NaOH solution. Wafers are typically single side or
double side polished. Large wafers are usually double side polished so that the backside of the
wafers can be used for patterning. But wafer handling for double side polished wafers should be
carefully controlled to avoid scratches on the backside. Typical 300 mm wafers used for IC
manufacture are handled by robot arms and these are made of ceramics to minimize scratches.
Smaller wafers (3” and 4” wafers) used in labs are usually single side polished. After polishing, the
wafers are subjected to a final inspection before they are packed and shipped to the fab.
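The four-point-probe resistivity check mentioned above can be sketched with the standard thin-sample formula rho = (pi / ln 2) * t * V/I, valid when the probe spacing is much larger than the wafer thickness; the numbers below are illustrative only, not from any spec:

```python
import math

def four_point_probe_resistivity(voltage_v, current_a, thickness_cm):
    """Resistivity (ohm*cm) from a four-point probe on a thin, large wafer:
    rho = (pi / ln 2) * t * V / I.  Assumes probe spacing >> wafer thickness."""
    return (math.pi / math.log(2)) * thickness_cm * voltage_v / current_a

# Hypothetical reading: 10 mV at 1 mA on a 500 µm (0.05 cm) thick wafer.
rho = four_point_probe_resistivity(0.010, 0.001, 0.05)   # ~2.27 ohm*cm
```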
Typical wafer diameters are 5, 6, 8, and 12 inches.
Usually in microelectronics fabrication the wafer-cleaning steps are
performed before the high-temperature, layer-deposition and/or
lithography process steps. The intention of the cleaning is to
remove particles and photoresist residues, metallic impurities,
and organic contamination.
Cleaning and Etching of wafers

The surface of a semiconductor wafer gets contaminated during device processing. The sources
of contaminants are ambient air, the storage ambient, process gases, chemicals, materials, water,
etc., which are used in the fabrication processes. Processing tools as well as personnel operating
in the cleanrooms are also sources of contamination. The most prevalent contaminants are
particles, and they may cause a catastrophic failure during the device manufacturing process. The
measure of the air quality of a clean room is described in Federal Standard 209. Clean rooms are
rated as Class 10K, where there exist no more than 10,000 particles larger than 0.5 microns in
any given cubic foot of air; Class 1K, where there exist no more than 1,000 particles; and Class
100, where there exist no more than 100 particles. These small particles are controlled in a
clean room by using High Efficiency Particulate Air (HEPA) filters.
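The two rating systems quoted in this document line up through a unit conversion: a FED STD 209 class number is the particle limit (at 0.5 µm and larger) per cubic foot, while ISO 14644-1 counts per cubic meter, and one cubic meter is about 35.31 cubic feet:

```python
CUBIC_FEET_PER_M3 = 35.31

def fed_class_to_per_m3(fed_class):
    """Convert a FED STD 209 class number (particles/ft^3 at >=0.5 µm)
    to the equivalent particle count per cubic meter."""
    return fed_class * CUBIC_FEET_PER_M3

# Class 100 -> ~3,531 particles/m^3, matching the ISO 5 limit of 3,520.
class_100_per_m3 = fed_class_to_per_m3(100)
```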

Another type of contaminants which degrade the devices are metallic contaminants originating
primarily from liquid chemicals, water and process tools. The most common metallic
contaminants are iron (Fe), aluminum (Al), copper (Cu), nickel (Ni) as well as ionic metals such
as sodium (Na) and calcium (Ca). Organic contaminants are present in ambient air, storage
containers and can arise from photoresists. Organic compounds readily adsorbed on surfaces
adversely affect device properties. Native oxides, as well as moisture from the ambient air or wet
processes, adversely affect the devices; these can be considered contaminants, and their removal
is part of the cleaning process.

As total elimination of contaminants is not possible, methods of semiconductor surface cleaning
are employed throughout the device manufacturing sequence. The cleaning can be achieved by
a chemical reaction between a reactant and a contaminant on the surface, by physical interaction
between the cleaning ambient and the surface, by momentum transfer from high-kinetic-energy
particles directed toward the contaminant, etc. In Wet Cleaning, the contaminant is removed via a
selective chemical reaction in the liquid phase, by its dissolution in the solvent or its conversion
into a soluble compound. Typically, the reaction process is enhanced by ultrasonic agitation. In Dry
Cleaning, the contaminant is removed via a chemical reaction in the gas phase converting it into a
volatile compound. Wet cleaning is the dominant cleaning technology in semiconductor device
manufacturing. Wet cleans use combinations of acids and solvents to oxidize, etch, and scrub
contaminants from the wafer surface. An integral part of every wet cleaning scheme is a rinse in
ultra-pure deionized (DI) water, which stops the chemical reaction on the wafer surface and washes
off reactants and reaction products. Wet cleaning is always completed with a wafer drying
process.

RCA clean: this wet cleaning recipe, first proposed over 30 years ago, presents a complete cleaning
process to remove from the surface heavy organics, particles, and metallic contaminants, as well
as native/chemical oxide. Typically the first step is to remove organic contamination remaining on
the surface, using the H2SO4:H2O2 solution at 100°C-130°C, also known as SPM (Sulfuric Peroxide
Mixture) or "piranha" clean. An NH4OH:H2O2:H2O mixture (1:1:50), at a temperature of ~70°C with
ultrasonic agitation, is then used to remove particles.
OXIDATION

Oxidation refers to the conversion of the silicon wafer to silicon oxide (SiO2 or more generally SiOx).
The ability of Si to form an oxide layer is very important since this is one of the reasons for choosing
Si over Ge. The Hoerni transistor design, which was used in the first integrated circuit by Robert
Noyce, was made of Si, and the formation of SiOx helped in fabricating a planar device.

Si exposed to ambient conditions has a native oxide on its surface. The native oxide is approximately
3 nm thick at room temperature. But this is too thin for most applications and hence a thicker oxide
needs to be grown. This is done by consuming the underlying Si to form SiOx. This is a grown layer. It
is also possible to grow SiOx by a chemical vapor deposition process using Si and O precursor
molecules. In this case, the underlying Si in the wafer is not consumed. This is called a deposited
layer.

Oxidation types

In the case of grown oxide layers, there are two main growth mechanisms:

1. Dry oxidation - Si reacts with O2 to form SiO2:
   Si (s) + O2 (g) → SiO2 (s)
2. Wet oxidation - Si reacts with water (steam) to form SiO2:
   Si (s) + 2H2O (g) → SiO2 (s) + 2H2 (g)

In both cases, Si is supplied by the underlying wafer. Dry and wet oxidation need high
temperature (900 - 1200 ◦C) for growth, though the kinetics are different, which is why this
process is called thermal oxidation. Since the underlying Si is consumed, the Si/SiO2 interface
moves deeper into the wafer.

There is also a volume expansion, since the densities of the oxide layer and silicon are different.
Thus, the final oxide thickness is greater than the thickness of the Si consumed.
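The thermal-oxidation kinetics mentioned above are commonly described by the Deal-Grove model, x^2 + A*x = B*(t + tau): growth is linear in time at first (rate B/A) and parabolic (rate B) once the oxide is thick. A sketch, with rate constants that are illustrative assumptions for wet oxidation near 1100 °C, not vendor data:

```python
import math

def deal_grove_thickness(t_hours, B, B_over_A, tau=0.0):
    """Oxide thickness (µm) from the Deal-Grove relation x^2 + A*x = B*(t+tau),
    where B is the parabolic rate constant (µm^2/h) and B/A the linear one (µm/h)."""
    A = B / B_over_A
    return (A / 2.0) * (math.sqrt(1.0 + (t_hours + tau) / (A * A / (4.0 * B))) - 1.0)

# Assumed (illustrative) constants for wet oxidation around 1100 °C:
B, B_over_A = 0.51, 1.27                         # µm^2/h and µm/h
x_ox = deal_grove_thickness(2.0, B, B_over_A)    # oxide grown in 2 hours

# Roughly 46% of the final oxide thickness is Si consumed from the wafer,
# which is why the Si/SiO2 interface moves deeper into the wafer.
si_consumed = 0.46 * x_ox
```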
Semiconductor Lithography
The fabrication of an integrated circuit (IC) requires a variety of physical and chemical processes
performed on a semiconductor (e.g., silicon) substrate. In general, the various processes used to make
an IC fall into three categories: film deposition, patterning, and semiconductor doping. Films of both
conductors (such as polysilicon, aluminum, and more recently copper) and insulators (various forms of
silicon dioxide, silicon nitride, and others) are used to connect and isolate transistors and their
components. Selective doping of various regions of silicon allows the conductivity of the silicon to be
changed with the application of voltage. By creating structures of these various components, millions of
transistors can be built and wired together to form the complex circuitry of a modern microelectronic
device. Fundamental to all of these processes is lithography, i.e., the formation of three-dimensional
(3D) relief images on the substrate for subsequent transfer of the pattern to the substrate.
The word lithography comes from the Greek lithos, meaning stones, and graphia, meaning to write. It
means quite literally writing on stones. In the case of semiconductor lithography, our stones are silicon
wafers and our patterns are written with a light-sensitive polymer called photoresist. To build the
complex structures that make up a transistor and the many wires that connect the millions of transistors
of a circuit, lithography and pattern transfer steps are repeated at least 10 times, but more typically are
done 20 to 30 times to make one circuit. Each pattern being printed on the wafer is aligned to the
previously formed patterns and slowly the conductors, insulators, and selectively doped regions are built
up to form the final device.
The importance of lithography can be appreciated in two ways. First, due to the large
number of lithography steps needed in IC manufacturing, lithography typically accounts
for about 30 percent of the cost of manufacturing. Second, lithography tends to be the
technical limiter for further advances in feature size reduction and thus transistor speed
and silicon area. Obviously, one must carefully understand the trade-offs between cost
and capability when developing a lithography process. Although lithography is certainly
not the only technically important and challenging process in the IC manufacturing flow,
historically, advances in lithography have gated advances in IC cost and performance.

The general sequence of processing steps for a typical photolithography process is as
follows: substrate preparation, photoresist spin coat, prebake, exposure, post-exposure
bake, development, and postbake. A resist strip is the final operation in the
lithographic process, after the resist pattern has been transferred into the underlying
layer. This sequence is shown diagrammatically in Figure 1-1, and is generally
performed on several tools linked together into a contiguous unit called a lithographic
cluster. A brief discussion of each step is given below.
1. Substrate Preparation

Substrate preparation is intended to improve the adhesion of the photoresist material to
the substrate. This is accomplished by one or more of the following processes:
substrate cleaning to remove contamination, a dehydration bake to remove water, and
addition of an adhesion promoter. Substrate contamination can take the form of
particulates or a film and can be either organic or inorganic. Particulates result in
defects in the final resist pattern, whereas film contamination can cause poor adhesion
and subsequent loss of linewidth control. Particulates generally come from airborne
particles or contaminated liquids (e.g., dirty adhesion promoter). The most effective way
of controlling particulate contamination is to eliminate its sources. Since this is not
always practical, chemical/mechanical cleaning is used to remove particles. Organic
films, such as oils or polymers, can come from vacuum pumps and other machinery,
body oils and sweat, and various polymer deposits left over from previous processing
steps. These films can generally be removed by chemical, ozone, or plasma stripping.
Similarly, inorganic films, such as native oxides and salts, can be removed by chemical
or plasma stripping. One type of contaminant - adsorbed water - is removed most
readily by a high-temperature process called a dehydration bake.

Heat treatment at 200°C for 2 hours removes moisture from the oxide-coated
sample.

2. Photoresist Coating

A thin, uniform coating of photoresist at a specific, well-controlled thickness is
accomplished by the seemingly simple process of spin coating. The photoresist,
rendered into liquid form by dissolving the solid components in a solvent, is poured
onto the wafer, which is then spun on a turntable at a high speed, producing the desired
film. Stringent requirements for thickness control and uniformity and low defect density
call for particular attention to be paid to this process, where a large number of
parameters can have significant impact on photoresist thickness uniformity and control.
There is the choice between static dispense (wafer stationary while resist is dispensed)
or dynamic dispense (wafer spinning while resist is dispensed), spin speeds and times,
and accelerations to each of the spin speeds. Also, the volume of the resist dispensed
and properties of the resist (such as viscosity, percent solids, and solvent composition)
and the substrate (substrate material and topography) play an important role in the
resist thickness uniformity.
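As a rough rule of thumb, final resist thickness scales inversely with the square root of spin speed, with the prefactor set by the resist's viscosity and percent solids. A toy model of that trend (the constant k is purely hypothetical, not from any resist datasheet):

```python
import math

def resist_thickness_nm(spin_rpm, k=6.0e4):
    """Empirical spin-coat trend: thickness ~ k / sqrt(spin speed).
    k (nm * rpm^0.5) stands in for resist viscosity/solids -- assumed here."""
    return k / math.sqrt(spin_rpm)

# Under this model, quadrupling the spin speed halves the film thickness.
t_2000 = resist_thickness_nm(2000)
t_8000 = resist_thickness_nm(8000)
```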

3. Post-Apply Bake

After coating, the resulting resist film will contain between 20 – 40% by weight solvent.
The post-apply bake process, also called a softbake or a prebake, involves drying the
photoresist after spin coat by removing this excess solvent. The main reason for
reducing the solvent content is to stabilize the resist film. At room temperature, an
unbaked photoresist film will lose solvent by evaporation, thus changing the properties
of the film with time. By baking the resist, the majority of the solvent is removed and
the film becomes stable at room temperature. There are four major effects of removing
solvent from a photoresist film: (1) film thickness is reduced, (2) post-exposure bake
and development properties are changed, (3) adhesion is improved, and (4) the film
becomes less tacky and thus less susceptible to particulate contamination. Typical
prebake processes leave between 3 and 8 percent residual solvent in the resist film,
sufficiently small to keep the film stable during subsequent lithographic processing.

Unfortunately, there are other consequences of baking most photoresists. At
temperatures greater than about 70°C the photosensitive component of a typical resist
mixture, called the photoactive compound (PAC), may begin to decompose [1.3,1.4].
Also, the resin, another component of the resist, can crosslink and/or oxidize at
elevated temperatures. Both of these effects are undesirable. Thus, one must search for
the optimum prebake conditions that will maximize the benefits of solvent evaporation
and minimize the detriments of resist decomposition. For chemically amplified resists,
residual solvent can significantly influence diffusion and reaction properties during the
post-exposure bake, necessitating careful control over the post-apply bake process.
Fortunately, these modern resists do not suffer from significant decomposition of the
photosensitive components during prebake.

There are several methods that can be used to bake photoresists. The most obvious
method is an oven bake. Convection oven baking of conventional photoresists at 90°C
for 30 minutes was typical during the 1970s and early 1980s. Although the use of
convection ovens for the prebaking of photoresist was once quite common, currently
the most popular bake method is the hot plate. The wafer is brought either into
intimate vacuum contact with or close proximity to a hot, high-mass metal plate. Due
to the high thermal conductivity of silicon, the photoresist is heated to near the hot
plate temperature quickly (in about 5 seconds for hard contact, or about 20 seconds for
proximity baking). The greatest advantage of this method is an order of magnitude
decrease in the required bake time over convection ovens, to about one minute, and the
improved uniformity of the bake. In general, proximity baking is preferred to reduce the
possibility of particle generation caused by contact with the backside of the wafer.

When the wafer is removed from the hotplate, baking continues as long as the wafer is
hot. The total bake process cannot be well controlled unless the cooling of the wafer is
also well controlled. As a result, hotplate baking is always followed immediately by a
chill plate operation, where the wafer is brought in contact or close proximity to a cool
plate (kept at a temperature slightly below room temperature). After cooling, the wafer
is ready for its lithographic exposure.

4. Alignment and Exposure

The basic principle behind the operation of a photoresist is the change in solubility of
the resist in a developer upon exposure to light (or other types of exposing radiation).
In the case of the standard diazonaphthoquinone positive photoresist, the photoactive
compound (PAC), which is not soluble in the aqueous base developer, is converted to a
carboxylic acid on exposure to UV light in the range of 350 - 450nm. The carboxylic
acid product is very soluble in the basic developer. Thus, a spatial variation in light
energy incident on the photoresist will cause a spatial variation in solubility of the resist
in developer.

Contact and proximity lithography are the simplest methods of exposing a photoresist
through a master pattern called a photomask (Figure 1-4). Contact lithography offers
high resolution (down to about the wavelength of the radiation), but practical problems
such as mask damage and resulting low yield make this process unusable in most
production environments. Proximity printing reduces mask damage by keeping the
mask a set distance above the wafer (e.g., 20 μm). Unfortunately, the resolution limit is
increased to greater than 2 to 4 μm, making proximity printing insufficient for today’s
technology. By far the most common method of exposure is projection printing.

Lithographic printing in semiconductor manufacturing has evolved from contact
printing (in the early 1960s) to projection printing (from the mid 1970s to today).

Photoresist pattern on a silicon substrate showing prominent standing waves.

Development

Once exposed, the photoresist must be developed. Most commonly used photoresists
use aqueous bases as developers. In particular, tetramethyl ammonium hydroxide
(TMAH) is used in concentrations of 0.2 - 0.26 N. Development is undoubtedly one of
the most critical steps in the photoresist process. The characteristics of the resist-
developer interactions determine to a large extent the shape of the photoresist profile
and, more importantly, the linewidth control.
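For reference, the 0.26 N figure quoted above corresponds to the common ~2.38% TMAH developer; the conversion below assumes a solution density of about 1 g/mL:

```python
# Converting developer normality to weight percent (TMAH is monobasic, N = M).
TMAH_MOLAR_MASS = 91.15      # g/mol, tetramethylammonium hydroxide
normality = 0.26             # mol/L, from the text
grams_per_liter = normality * TMAH_MOLAR_MASS     # ~23.7 g/L of TMAH
wt_percent = grams_per_liter / 1000.0 * 100.0     # ~2.37 wt% at ~1 g/mL
```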
The method of applying developer to the photoresist is important in controlling the
development uniformity and process latitude. In the past, batch development was the
predominant development technique. A boat of some 10-20 wafers or more is
developed simultaneously in a large beaker, usually with some form of agitation. With
the push towards in-line processing, however, other methods have become prevalent.
During spin development wafers are spun, using equipment similar to that used for spin
coating, and developer is poured onto the rotating wafer. The wafer is also rinsed and
dried while still spinning. Spray development has been shown to have good results
using developers specifically formulated for this dispense method. Using a process
identical to spin development, the developer is sprayed, rather than poured, on the
wafer by using a nozzle that produces a fine mist of developer over the wafer (Figure 1-
8). This technique reduces developer usage and gives more uniform developer
coverage. Another in-line development strategy is called puddle development. Again
using developers specifically formulated for this process, the developer is poured onto
a stationary wafer that is then allowed to sit motionless for the duration of the
development time. The wafer is then spin rinsed and dried. Note that all three in-line
processes can be performed in the same piece of equipment with only minor
modifications, and combinations of these techniques are frequently used.

ETCHING

Etching of the sample is done in a 10 percent HF solution.

Strip

After the imaged wafer has been processed (e.g., etched, ion implanted, etc.) the
remaining photoresist must be removed. There are two classes of resist stripping
techniques: wet stripping using organic or inorganic solutions, and dry (plasma)
stripping. A simple example of an organic stripper is acetone. Although commonly used
in laboratory environments, acetone tends to leave residues on the wafer (scumming)
and is thus unacceptable for semiconductor processing. Most commercial organic
strippers are phenol-based and are somewhat better at avoiding scum formation.
However, the most common wet strippers for positive photoresists are inorganic acid-
based systems used at elevated temperatures.

Wet stripping has several inherent problems. Although the proper choice of strippers
for various applications can usually eliminate gross scumming, it is almost impossible
to remove the final monolayer of photoresist from the wafer by wet chemical means. It
is often necessary to follow a wet strip by a plasma descum to completely clean the
wafer of resist residues. Also, photoresist which has undergone extensive hardening
(e.g., deep-UV hardening) and been subjected to harsh processing conditions (e.g.,
high energy ion implantation) can be almost impossible to strip chemically. For these
reasons, plasma stripping has become the standard in semiconductor processing. An
oxygen plasma is highly reactive towards organic polymers but leaves most inorganic
materials (such as are found under the photoresist) untouched.
DIFFUSION

HOT PROBE METHOD

A hot point probe is a quick method of determining whether a semiconductor sample is n (negative) type or p (positive) type. A voltmeter or ammeter is attached to the sample, and a heat source, such as a soldering iron, is placed on one of the leads. The heat from the probe creates an increased number of higher-energy majority carriers (electrons in an n-type sample, electron holes in a p-type sample), which then diffuse away from the contact point, producing a measurable current or voltage difference. For example, if the heat source is placed on the positive lead of a voltmeter attached to an n-type semiconductor, a positive voltage reading results because the area around the heated lead becomes positively charged as electrons diffuse away.[1] In short, the thermally excited majority carriers move within the semiconductor from the hot probe to the cold probe; since the material is uniformly doped, the mechanism for this motion is diffusion.
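The sign convention described above can be captured in a short sketch. This is an illustrative helper, not part of any standard instrument software; the function name and threshold behaviour are assumptions for the example.

```python
# Hypothetical sketch of hot-probe interpretation. Convention from the
# text: with the heated probe on the positive lead of the voltmeter,
# an n-type sample gives a positive reading (majority electrons diffuse
# away from the hot probe, leaving that region positively charged).
def carrier_type(voltage_reading):
    """Classify a sample from the hot-probe voltage sign.

    voltage_reading: volts measured with the hot probe on the + lead.
    """
    if voltage_reading > 0:
        return "n-type"          # electrons are the majority carriers
    elif voltage_reading < 0:
        return "p-type"          # holes are the majority carriers
    return "indeterminate"       # no thermal gradient, or intrinsic sample

print(carrier_type(0.4))   # positive reading -> n-type
print(carrier_type(-0.3))  # negative reading -> p-type
```

In practice the reading is small and noisy, so a real instrument would compare against a noise threshold rather than exactly zero.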

REQUIREMENTS FOR DIFFUSION

1. Temperature: 1000 °C

2. Gas: (a) N2: 1 L/minute
        (b) O2: 1 L/minute

3. Predip time: 15 minutes

4. Drive time: 3 hours

5. 10 percent HF solution

ACTIVATION

Boron nitride absorbs moisture, so it should be heated for 20 to 25 hours at 1000 °C; during this bake a layer of boron hydroxide forms on its surface. The disc is then kept in an oven until it is used. This procedure is called activation.

Solid sources are available in two forms: tablet/granular and disc/wafer. BN discs are the most commonly used; they are oxidized at 750 - 1100 °C to serve as the boron diffusion source. Dopants are introduced into the silicon substrate using a two-step, high-temperature process. The first diffusion (predeposition) introduces dopants into the wafer; the second diffusion (drive) redistributes the dopants and allows them to diffuse more deeply into the wafer (up to ~3 micrometers). The goal of the dopant predeposition diffusion is to move dopant atoms from a source to the wafer, and later allow the dopants to diffuse into the wafer. In order for the dopants to move into the silicon, they must be given energy, usually in the form of heat. For the diffusion to occur in a reasonable time, the temperature must be very high (900 °C < T < 1200 °C). At this temperature the dopant (in the form of an oxide) reacts with the exposed silicon surface to form a highly doped glass, from which the dopants can then diffuse into the wafer.
Diffusion Of p-Type Impurity
Boron is an almost exclusive choice as an acceptor impurity in silicon. It has a
moderate diffusion coefficient, typically of order 10^-16 m^2/sec at 1150 °C, which is
convenient for precisely controlled diffusion. It has a solid solubility limit of around
5 x 10^26 atoms/m^3, so the surface concentration can be widely varied, but the most
reproducible results are obtained when the concentration is approximately 10^24/m^3,
which is typical for transistor base diffusions.
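The strong temperature sensitivity of the diffusion coefficient can be illustrated with the Arrhenius relation D(T) = D0 * exp(-Ea / kT). The sketch below is a rough illustration only: the pre-exponential factor and activation energy for boron in silicon are representative literature values assumed for this example, not figures from this text.

```python
# Hedged sketch: Arrhenius temperature dependence of the diffusion
# coefficient, D(T) = D0 * exp(-Ea / (k * T)).
# D0 and Ea below are ASSUMED representative values for boron in
# silicon, used for illustration only.
import math

K_B = 8.617e-5       # Boltzmann constant, eV/K
D0_BORON = 0.76e-4   # m^2/s (assumed pre-exponential factor)
EA_BORON = 3.46      # eV (assumed activation energy)

def diffusion_coefficient(temp_celsius, d0=D0_BORON, ea=EA_BORON):
    """Return D in m^2/s at the given temperature in degrees Celsius."""
    temp_kelvin = temp_celsius + 273.15
    return d0 * math.exp(-ea / (K_B * temp_kelvin))

# At ~1150 C this lands within an order of magnitude of the
# ~10^-16 m^2/sec figure quoted in the text.
print(f"D(1150 C) ~ {diffusion_coefficient(1150):.2e} m^2/s")
print(f"D(1000 C) ~ {diffusion_coefficient(1000):.2e} m^2/s")
```

The exponential dependence is why furnace temperature must be controlled tightly: a few tens of degrees changes D by a large factor.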

• Boron Diffusion using B2H6 (Diborane) Source

This is a gaseous source for boron that can be introduced directly into the diffusion
furnace, along with a number of other metered gases. The principal gas flow in the
furnace is nitrogen (N2), which acts as a relatively inert carrier gas and as a diluent
for the other, more reactive gases. The N2 carrier gas generally makes up some 90 to
99 percent of the total gas flow; a small amount of oxygen and a very small amount of
the boron source make up the rest. This is shown in the figure below. The following
reactions occur simultaneously at the surface of the silicon wafers:

Si + O2 = SiO2 (silica glass)

B2H6 + 3O2 = B2O3 (boron glass) + 3H2O

This process is the chemical vapour deposition (CVD) of a glassy layer on the silicon
surface; the layer, a mixture of silica glass (SiO2) and boron glass (B2O3), is called
borosilicate glass (BSG). The BSG layer, shown in the figure below, is a viscous
liquid at the diffusion temperatures, and the boron atoms can move around in it
relatively easily.
Diffusion Of Dopants

Furthermore, the boron concentration in the BSG is such that the silicon surface
remains saturated with boron at the solid solubility limit throughout the diffusion
process, as long as BSG remains present. This is constant-source (erfc) diffusion,
often called deposition diffusion. It is referred to as the pre-deposition step, in
which the dopant atoms deposit into the surface region (roughly the top 0.3
micrometers) of the silicon wafers. The BSG is preferable because it protects the
silicon surface from pitting or evaporation and acts as a "getter" for undesirable
impurities in the silicon. It is etched off before the next diffusion, as discussed below.

The pre-deposition step is followed by a second diffusion process in which the
external dopant source (BSG) is removed so that no additional dopants enter the
silicon. During this diffusion the dopants that are already in the silicon move
further in and are thus redistributed: the junction depth increases, and at the same
time the surface concentration decreases. This type of diffusion is called drive-in,
redistribution, or limited-source (Gaussian) diffusion.
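The two profiles described above have standard closed forms: the constant-source step follows C(x,t) = Cs·erfc(x / 2√(Dt)), and the drive-in step follows a Gaussian, C(x,t) = (Q/√(πDt))·exp(-x²/4Dt), where Q is the dose deposited during pre-deposition. The sketch below evaluates both, using the order-of-magnitude D and solid solubility Cs quoted earlier in the text and the 15-minute predip / 3-hour drive times from the requirements list; treat the numbers as illustrative, not as calibrated process data.

```python
# Illustrative sketch of pre-deposition (erfc) and drive-in (Gaussian)
# diffusion profiles. D and Cs are order-of-magnitude values taken from
# the text, not calibrated process parameters.
import math

D = 1e-16    # m^2/s, boron diffusivity at ~1150 C (order of magnitude)
Cs = 5e26    # atoms/m^3, boron solid solubility limit

def predeposition_profile(x, t):
    """Constant-source profile: C(x,t) = Cs * erfc(x / (2*sqrt(D*t)))."""
    return Cs * math.erfc(x / (2.0 * math.sqrt(D * t)))

def drive_in_profile(x, t, Q):
    """Limited-source (Gaussian) profile for a deposited dose Q (atoms/m^2):
    C(x,t) = Q / sqrt(pi*D*t) * exp(-x^2 / (4*D*t))."""
    return Q / math.sqrt(math.pi * D * t) * math.exp(-x**2 / (4.0 * D * t))

# 15-minute pre-deposition, then a 3-hour drive-in (times from the text).
t_pre, t_drive = 15 * 60, 3 * 3600

# Total dose introduced by the erfc step: Q = 2*Cs*sqrt(D*t/pi).
Q = 2.0 * Cs * math.sqrt(D * t_pre / math.pi)

print(f"deposited dose Q         = {Q:.3e} atoms/m^2")
print(f"surface conc. after drive = {drive_in_profile(0, t_drive, Q):.3e} /m^3")
```

Note how the drive-in surface concentration comes out below Cs: redistributing a fixed dose over a deeper profile necessarily lowers the concentration at the surface, exactly as the text describes.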

• Boron Diffusion using BBr3 (Boron Tribromide) Source

This is a liquid source of boron. In this case a controlled flow of carrier gas (N2) is
bubbled through boron tribromide, as shown in the figure below; with oxygen this
again produces boron trioxide (and hence BSG) at the surface of the wafers, per the
following reaction:

4BBr3 + 3O2 = 2B2O3 + 6Br2


Diffusion of n-Type Impurity
For phosphorus diffusion, such compounds as PH3 (phosphine) and POCl3
(phosphorus oxychloride) can be used. In the case of a diffusion using POCl3, the
reactions occurring at the silicon wafer surfaces will be:

Si + O2 = SiO2 (silica glass)

4POCl3 + 3O2 = 2P2O5 + 6Cl2

This results in the production of a glassy layer on the silicon wafers that is a
mixture of phosphorus glass and silica glass, called phosphosilicate glass (PSG),
which is a viscous liquid at the diffusion temperatures. The mobility of the
phosphorus atoms in this glassy layer and the phosphorus concentration are such that
the phosphorus concentration at the silicon surface is maintained at the solid
solubility limit throughout the diffusion process. (Similar processes occur with other
dopants; in the case of arsenic, for example, arsenosilica glass is formed on the
silicon surface.)

The rest of the process for phosphorus diffusion is similar to boron diffusion, that is,
after deposition step, drive-in diffusion is carried out.

P2O5 is a solid source of phosphorus impurity and can be used in place of POCl3.
However, POCl3 offers certain advantages over P2O5, such as easier source handling,
simpler furnace requirements, similar glassware for low and high surface
concentrations, and better control of impurity density from wafer to wafer and from
run to run.
