Mechatronics Engineering Mce 505 Term Paper
INDUSTRY
WRITTEN BY
20151012336T
TECHNOLOGY
SUBMITTED TO
DR OKAFOR C. KENNEDY
NOVEMBER, 2021
1.0 THE SCALES OF INTEGRATION IN SEMICONDUCTOR
DIGITAL IC
An integrated circuit or monolithic integrated circuit (also referred to as an IC, a chip, or
a microchip) is a set of electronic circuits on one small flat piece (or "chip") of
semiconductor material, usually silicon. Large numbers of miniaturized MOSFETs (metal–
oxide–semiconductor field-effect transistors) integrate into a small chip. This results in circuits that
are orders of magnitude smaller, faster, and less expensive than those constructed of
discrete electronic components. The IC's mass production capability, reliability, and building-
block approach to integrated circuit design has ensured the rapid adoption of standardized ICs
in place of designs using discrete transistors. ICs are now used in virtually all electronic
equipment and have revolutionized the world of electronics. Computers, mobile phones, and
other digital home appliances are now inextricable parts of the structure of modern societies,
made possible by the small size and low cost of ICs such as modern computer processors and
microcontrollers. Their production has been enabled by advances in metal–oxide–
silicon (MOS) semiconductor device fabrication. Since their origins in the 1960s, the size,
speed, and capacity of chips have progressed enormously, driven by technical advances that
fit more and more MOS transistors on chips of the same size – a modern chip may have many
billions of MOS transistors in an area the size of a human fingernail. These advances, roughly
following Moore's law, mean that computer chips of today possess millions of times the capacity
and thousands of times the speed of the computer chips of the early 1970s.
ICs have two main advantages over discrete circuits: cost and performance. Cost is low
because the chips, with all their components, are printed as a unit by photolithography rather
than being constructed one transistor at a time. Furthermore, packaged ICs use much less
material than discrete circuits. Performance is high because the IC's components switch
quickly and consume comparatively little power because of their small size and proximity.
The main disadvantage of ICs is the high cost to design them and fabricate the
required photomasks. This high initial cost means ICs are only commercially viable when
high production volumes are anticipated. Small-scale devices – typically supplied in 0.3 in.
width packages with up to 16 (later, 18, 22 or more) pins – have long ago migrated to the SO
package and even smaller packs. LSI devices with up to 64 or 68 pins came in 0.6 in. wide
packs, but then migrated to a variety of package types, including leaded and leadless chip
carriers, J-lead packs, pin grid arrays, etc., with the latest development being ball grid arrays.
But processors, DSP chips and the like tend to require so many lead-outs that they hardly
come under the heading of tiny devices, even though they are truly small considering the number of
pins. In addition to processors, DSP chips, etc., package types with a large number of pins are
also used for custom- and semi-custom logic devices, and programmable arrays of various
types. These enable all the logic functions associated with a product to be swept up into a
single device, reducing the size and cost of products which are produced in huge quantities.
But this approach is not without its drawbacks, often leading to practical difficulties at the
layout stage. For example, on a ‘busy’ densely packed board, the odd logic function such as
an inverter, AND gate or whatever, may be required at the opposite end of the board from the
large device into which it has been absorbed.
1.2 TYPES
Integrated circuits can be classified into analog, digital and mixed-signal, the last consisting
of analog and digital circuits on the same chip.
Digital integrated circuits can contain billions of logic gates, flip-flops, multiplexers, and
other circuits in a few square millimetres. The small size of these circuits allows high speed,
low power dissipation, and reduced manufacturing cost compared with board-level
integration.
[Figure: the die from an Intel 8742, an 8-bit NMOS microcontroller that includes a CPU
running at 12 MHz, 128 bytes of RAM, 2048 bytes of EPROM, and I/O in the same chip.]
Among the most advanced integrated circuits are the microprocessors or "cores", used in
personal computers, cell phones and appliances. Digital memory chips and application-specific
integrated circuits (ASICs) are examples of other families of
integrated circuits.
In the 1980s, programmable logic devices were developed. These devices contain circuits
whose logical function and connectivity can be programmed by the user, rather than being
fixed by the manufacturer. These devices provide various LSI-type functions such as logic
gates, adders and registers. Programmability comes
in various forms – devices that can be programmed only once, devices that can be erased and
then re-programmed using UV light, devices that can be (re)programmed using flash
memory, and field-programmable gate arrays (FPGAs) which can be programmed at any
time, including during operation. Current FPGAs can (as of 2016) implement the equivalent
of millions of gates.
Analog ICs, such as sensors, power management circuits, and operational amplifiers (op-
amps), process continuous signals, and perform analog functions such as amplification, active
filtering, demodulation, and mixing.
ICs can combine analog and digital circuits on a chip to create functions such as analog-to-
digital converters and digital-to-analog converters. Such mixed-signal circuits offer smaller
size and lower cost but must account for signal interference. Prior to the late
1990s, radios could not be fabricated in the same low-cost CMOS processes as
microprocessors. But since 1998, radio chips have been developed using RF
CMOS processes. Examples include Intel's DECT cordless phone and 802.11 (Wi-Fi) chips.
• Analog ICs are categorized as linear integrated circuits and RF circuits (radio
frequency circuits).
• Mixed-signal integrated circuits are categorized as data acquisition ICs (including A/D
and D/A converters), clock/timing ICs, and RF circuits.
1.3 GENERATION
In the early days of simple integrated circuits, the technology's large scale limited each chip
to only a few transistors, and the low degree of integration meant the design process was
relatively simple. Manufacturing yields were also quite low by today's standards. As metal–
oxide–semiconductor (MOS) technology progressed, millions and then billions of
transistors could be placed on one chip, and good designs required thorough planning, giving
rise to the field of electronic design automation, or EDA. Some SSI and MSI chips,
like discrete transistors, are still mass-produced, both to maintain old equipment and build
new devices that require only a few gates. The 7400 series of TTL chips, for example, has
remained in production for decades. Of the named scales of integration, MSI (medium-scale
integration), dating from 1968, denotes chips with 10 to 500 transistors (13 to 99 logic gates).
The first integrated circuits contained only a few transistors; early digital circuits containing
tens of transistors provided a few logic gates, and early linear ICs such as the Plessey SL201
or the Philips TAA320 had as few as two transistors. The number of transistors in an
integrated circuit has increased dramatically since then. The term "large scale integration"
(LSI) was first used by IBM scientist Rolf Landauer when describing the theoretical
concept; that term gave rise to the terms "small-scale integration" (SSI), "medium-scale
integration" (MSI), "very-large-scale integration" (VLSI), and "ultra-large-scale integration"
(ULSI).
SSI circuits were crucial to early aerospace projects, and aerospace projects helped inspire
development of the technology. Both the Minuteman missile and Apollo program needed
lightweight digital computers for their inertial guidance systems. Although the Apollo
Guidance Computer led and motivated integrated-circuit technology, it was the Minuteman
missile that forced it into mass-production. The Minuteman missile program and various
other United States Navy programs accounted for the total $4 million integrated circuit
market in 1962, and by 1968, U.S. Government spending on space and defense still accounted
for a substantial share of total IC production.
The demand by the U.S. Government supported the nascent integrated circuit market until
costs fell enough to allow IC firms to penetrate the industrial market and eventually
the consumer market. The average price per integrated circuit dropped from $50.00 in 1962
to $2.33 in 1968. Integrated circuits began to appear in consumer products by the turn of the
1970s; a typical early application was FM inter-carrier sound processing in television
receivers. Following Mohamed M. Atalla's proposal of the MOS integrated circuit chip in 1960,
the earliest experimental MOS chip to be fabricated was a 16-transistor chip built by Fred
Heiman and Steven Hofstein at RCA in 1962. The first practical application of MOS SSI
chips was in NASA satellites.
MOSFET scaling technology made it possible to build high-density chips. By 1964, MOS
chips had reached higher transistor density and lower manufacturing costs than bipolar chips.
In 1964, Frank Wanlass demonstrated a single-chip 16-bit shift register he designed, with a
then-incredible 120 MOS transistors on a single chip. The same year, General
Microelectronics introduced the first commercial MOS integrated circuit chip, consisting of
120 p-channel MOS transistors. It was a 20-bit shift register, developed by Robert Norman
and Frank Wanlass. MOS chips further increased in complexity at a rate predicted by Moore's
law, leading to chips with hundreds of MOSFETs on a chip by the late 1960s.
Further MOS scaling led to "large-scale integration" (LSI) by the mid-1970s, with tens of thousands of transistors
per chip.
The masks used to process and manufacture SSI, MSI and early LSI and VLSI devices (such
as the microprocessors of the early 1970s) were mostly created by hand, often
using Rubylith-tape or similar. For large or complex ICs (such as memories or processors),
this was often done by specially hired professionals in charge of circuit layout, placed under
the supervision of a team of engineers, who would also, along with the circuit designers,
check and verify the completed masks.
Integrated circuits such as 1K-bit RAMs, calculator chips, and the first microprocessors, that
began to be manufactured in moderate quantities in the early 1970s, had under 4,000
transistors. True LSI circuits, approaching 10,000 transistors, began to be produced around
1974, for computer main memories and second-generation microprocessors.
1.34 Very-large-scale integration (VLSI)
1.35 ULSI, WSI, SoC and 3D-IC
To reflect further growth of the complexity, the term ULSI, which stands for "ultra-large-scale
integration", was proposed for chips of more than 1 million transistors.
Wafer-scale integration (WSI) is a means of building very large integrated circuits that uses
an entire silicon wafer to produce a single "super-chip". Through a combination of large size
and reduced packaging, WSI could lead to dramatically reduced costs for some systems,
notably massively parallel supercomputers. The name is taken from the term Very-Large-
Scale Integration, the current state of the art when WSI was being developed. [94]
A system-on-a-chip (SoC or SOC) is an integrated circuit in which all the components needed
for a computer or other system are included on a single chip. The design of such a device can
be complex and costly, and whilst performance benefits can be had from integrating all
needed components on one die, the cost of licensing and developing a one-die machine can still
outweigh that of having separate devices. With appropriate licensing, these drawbacks are offset by
lower manufacturing and assembly costs and by a greatly reduced power budget: because
signals among the components are kept on-die, much less power is required (see Packaging).
Further, signal sources and destinations are physically closer on die, reducing the length of
wiring and therefore latency, transmission power costs and waste heat from communication
between modules on the same chip. This has led to an exploration of so-called Network-on-
Chip (NoC) architectures.
A three-dimensional integrated circuit (3D-IC) has two or more layers of active electronic
components that are integrated both vertically and horizontally into a single circuit.
Communication between layers uses on-die signalling, so power consumption is much lower
than in equivalent separate circuits. Judicious use of short vertical wires can substantially
reduce overall wire length for faster operation. Notable example chips include:
• 4000-series integrated circuits, the CMOS counterpart to the 7400 series
• Intel 4004, generally regarded as the first commercially available microprocessor, which
led to the famous 8080 CPU and then the IBM PC's 8088, 80286, 486 etc.
• The MOS Technology 6502 and Zilog Z80 microprocessors, used in many home
computers of the 1980s
• The Motorola 68000 and 88000 series, used in some Apple computers and in 1980s
workstations
2.0 MOORE’S LAW AND ITS APPLICATION IN MODERN COMPUTING
Moore’s law was an innovative observation that was first conceived by Gordon Moore at
Intel almost four decades ago. It has since proven to be surprisingly accurate over time. In its
original form, Moore's law predicted that the number of transistors in a chip would double
every eighteen months. But the concepts behind Moore's law can be
applied more broadly to innovation and technology in general. We have seen Moore’s
influence in a number of verticals already. In its generalized form, the rate described in
Moore’s law depends on the specific vertical of interest. In optical networking for example,
the rate of innovation is closer to ten times that of transistors on a chip, and in certain speech
recognition technologies, the improvements are at least five times that of Moore’s law for
electronics.
Apple’s Siri and Amazon’s Alexa are prime examples of speech technology innovations.
Both Google and Facebook are also actively working on their own speech recognition
solutions. Apple is rumored to be designing new hardware to cram into their new smartphones,
with applications not only in speech recognition but also artificial intelligence (AI) and machine
learning.
Moore's law has been around for over 37 years and is still the de facto standard by which we
measure technological advances over time. Microscopic building blocks continue to shrink in
an effort to meet the needs of new compute-intensive applications ranging from
artificial intelligence to autonomous vehicles and even to virtual and augmented reality. It
was not too long ago when integrated circuits measured transistor counts in the hundreds of
thousands. A state-of-the-art chip might have measured in millions – today they measure in
the billions and Moore’s law predicts this trend will only continue.
2.1 MOORE’S LAW PROGRESS
There is, of course, a theoretical limit to how small we can shrink a transistor – that limit is
thought to be 5 nm. At that point, we will be forced to move to quantum computing and
Moore’s law will cease to hold true. But that time has not yet arrived. Many researchers are
working on the quantum computing model, not least of which is Peter Shor at MIT. But until
quantum computing evolves beyond its current state of infancy and becomes a viable solution,
Moore’s law will continue to hold as we seek to squeeze as much juice out of our current
process as possible – Intel continues to be the leader in this space. It is only through this
relentless pursuit that we can power the computational needs of 21st-century applications.
Just as we can apply the generalized form of Moore’s law to areas such as speech recognition
and optical networking, we can also model the rate of growth in memory, hard disk space,
and network bandwidth. Consider the prototypical personal computer of the 1980s. We all
remember with excitement the blazing speeds of that 1.2 Kbps modem, the 256 KB RAM and
the near-endless supply of disk space on our 10 MB hard drive. Let’s assume this is our
baseline measurement for a personal computer. Now, let's apply Moore's law: projected
specification = P × 2^(x/18), where x is the number of months elapsed and P is the present
device specification. Specifically, let's model out the advances in RAM over a 20-year period
(x = 240 months): we have 256 KB × 2^(240/18) ≈ 2.5 GB.
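The generalized model above can be sketched in a few lines of Python. The 18-month doubling period and the 256 KB baseline are this paper's own assumptions, and the helper name is introduced here for illustration:

```python
def moores_law(present_value: float, months: float, doubling_months: float = 18) -> float:
    """Project a device specification forward, doubling every `doubling_months` months."""
    return present_value * 2 ** (months / doubling_months)

# 1980s baseline of 256 KB of RAM, projected over a 20-year (240-month) period:
ram_kb = moores_law(256, 240)
print(f"Projected RAM after 20 years: {ram_kb / 1024 ** 2:.2f} GB")
```

Under these assumptions, the 20-year span contains about 13.3 doublings, a factor of roughly 10,000, which carries the 256 KB baseline into the low-gigabyte range.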
Laptop and PC manufacturers are offering products in the range quite close to what our
Moore’s law model predicts. My own laptop was configured with 1TB (SSD) in disk space
and 32 GB of memory (RAM), and it simply goes to show that we are steadily following the
same trajectory that Moore predicted over 37 years ago. I am convinced Moore’s law can be
applied to smartphone technology today and can be used to predict smartphone performance
into the future. The pace at which Apple and Samsung continue to improve their respective
flagship devices suggests as much.
Today, cable companies are offering download speeds of 300 Mbps, and they are not too far
off from providing 1.2 Gbps on cable infrastructure. Verizon FiOS promises even faster
speeds than that of traditional cable offerings. In many ways, I believe that bandwidth is the
next frontier.
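As a quick sanity check on these bandwidth figures, the doubling model can be inverted to ask how long the jump from a 1.2 Kbps modem to 1.2 Gbps cable should take. The 18-month constant is an assumption carried over from the transistor form of the law:

```python
import math

def months_to_reach(start: float, target: float, doubling_months: float = 18) -> float:
    """Months needed for `start` to grow to `target` at a fixed doubling rate."""
    return doubling_months * math.log2(target / start)

# A millionfold increase, from 1.2 Kbps to 1.2 Gbps:
months = months_to_reach(1.2e3, 1.2e9)
print(f"{months:.0f} months, roughly {months / 12:.0f} years")
```

A millionfold increase is about 20 doublings, so the model puts the transition at roughly three decades, broadly consistent with the 1980s-to-2010s timeline described here.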
2.4 CONCLUSION
Moore’s law has proved to be a remarkably accurate tool to predict technological advances
throughout the years. The law predicts that by the year 2020, bandwidth will be available at
rates of close to 1.2 Gbps. Bandwidth will be of the utmost importance as Japan hosts the 2020
Olympic Games; meanwhile, US-based content providers struggle to provide 4K content. In fact, even 1080i/p
content is only just now becoming attainable by many in the country. This goes to show how
far behind the US is with respect to its bandwidth delivery capabilities. Indeed, although it is
certainly possible in certain areas of the country to achieve the speeds that Moore’s law
predicted we would by now, the vast majority of the country has no such option. The US is
not alone in its sluggish delivery capabilities, however – Australia ranks among the slowest
providers in the developed world. Such countries should look to Asian countries like South
Korea and Japan, which boast some of the fastest speeds in the world. With modern advances
such as CMTS 4.0, it is very feasible that cable companies could provide bandwidth far
greater than 1.2 Gbps, supporting 4K and even 8K video at extremely high QoS for their
customers. In my opinion, bandwidth will be the battleground of the 21st century. We are
beginning to see this play out with the expansion of fibre deployments and M&A deals
between telecommunications and cable companies. As we look towards 2020, we can at least
be comforted by the knowledge that Moore's law has held true for close to four decades.
3.0 CRYOGENIC TECHNOLOGY AND ITS APPLICATIONS
Cryogenic means low temperature. The word itself refers to the technology of sub-zero
temperatures. Cryogenic engines use liquid oxygen as the oxidizer and liquid hydrogen as the
fuel. As is known, oxygen can be kept in the liquid state below −183 degrees Celsius,
while hydrogen requires a temperature below −253 degrees Celsius to remain liquid. Since
liquid oxygen is an extremely reactive oxidizer, the hydrogen–oxygen combination releases
enough energy per unit mass to carry heavy loads.
Cryogenics is the study of the production and behaviour of materials at extremely low
temperatures, that is, below −150 degrees Celsius. It is useful for lifting payloads into space,
storing medicines and drugs at low temperatures, etc. It is used in the last stage of space
launch vehicles (SLVs); the cryogenic stage is a liquid-propellant stage operated at these
extremely low temperatures.
Cryogenic technology has several advantages:
1. Crucial for the advancement of the Space programme – the cryogenic engine is used by ISRO
in its heavy-lift launch vehicles such as the GSLV
2. Lighter weight - High energy per unit mass is released which makes it economical
3. Missile Programme for the Defence – cryogenic technology is useful for the development of
long-range missile systems
4. Clean technology – cryogenic engines use hydrogen and oxygen as propellants and release
water as a by-product. This is one of their greatest advantages, as no pollution is caused by their
use
5. India rises as a Space power – earlier, other countries refused to share this technology with
India. Only the US, Japan, France, Russia & China had this technology. Now India stands
among the handful of nations that possess it.
3.4 CONCLUSION:
Cryogenic Technology and Applications describes the need for smaller cryo-coolers as a
result of the advances in the miniaturization of electrical and optical devices and the need for
cooling and conducting efficiency. Cryogenic technology deals with materials at low
temperatures and the physics of their behaviour at these temperatures. The book demonstrates the
ongoing new applications being discovered for cryo-cooled electrical and optical sensors and
scientific fields as well as in the aerospace and military industries. The book summarizes the
latest cryocooler developments for space and military systems. Cryogenic cooling plays an
important role in unmanned aerial
vehicle systems, infrared search and track sensors, missile warning receivers, satellite
tracking systems, and a host of other commercial and military systems. Cryogenic liquids
themselves fall into two broad groups. Inert gases: examples of this group are nitrogen,
helium, neon, argon and krypton. Flammable gases: some cryogenic liquids produce a gas
that can burn in air; the most common examples are hydrogen, methane and liquefied
natural gas.
4.0 THE COMPONENTS OF CLOUD COMPUTING SERVICE MODELS AND
CLOUD DEPLOYMENT MODELS
Companies are experiencing an unprecedented burden on their IT infrastructure as they
struggle to meet growing customer expectations for fast, reliable, and secure services. As they
try to increase the processing power and storage capabilities of their IT systems, often these
companies find that the development and maintenance of a robust, scalable, and secure IT
infrastructure is prohibitively expensive.
Luckily, there is another option; instead of acquiring extra hardware, your company can
purchase cloud-based services. Cloud-based providers often offer services such as software, storage, and processing
at affordable prices. In fact, your company can save up to 30% by implementing a cloud-
based solution.
Cloud computing is offered in three different service models which each satisfy a unique set
of business requirements. These three models are known as Software as a Service (SaaS),
Platform as a Service (PaaS), and Infrastructure as a Service (IaaS).
4.11 SAAS
In the SaaS model, applications are hosted and maintained not by your company, but by the
software provider. This relieves your organization from the burden of upgrades,
availability, and all the other operational issues involved with keeping applications up and
running. SaaS billing is typically based on factors such as number of users, usage time,
amount of data stored, and number of transactions processed. This service model has the
largest market share in cloud computing; according to Gartner, its sales will reach 117 billion
USD by the year 2021. Current applications for SaaS include Field Service solutions, system
monitoring solutions, schedulers and more. This cloud computing solution involves the
deployment of software over the internet to various businesses who pay via subscription or a
pay-per-use model. It is a valuable tool for CRM and for applications that need a lot of web
or mobile access – such as mobile sales management software. SaaS is managed from a
central location, so businesses don’t have to worry about maintaining it themselves, and it
can be rolled out quickly.
4.12 PAAS
Platform as a Service is halfway between Infrastructure as a Service (IaaS) and Software as a
Service (SaaS). It offers access to a cloud-based environment in which users can build and
deliver applications without the need of installing and working with IDEs (Integrated
Development Environments), which are often very expensive. Additionally, users can often
customize the features they want included with their subscription. According to Gartner, PaaS
has the smallest market share of the three service models, with a projected revenue of 27
billion USD by the year 2021[2]. In today’s market, PaaS providers offer applications such as
Microsoft Azure (also IaaS), Google App Engine, and Apache Stratos. This is where cloud
computing providers deploy the infrastructure and software framework, but businesses can
develop and run their own applications. Web applications can be created quickly and easily
via PaaS, and the service is flexible and robust enough to support them. PaaS solutions are
scalable and ideal for business environments where multiple developers are working on a
single project. It is also handy for situations where an existing data source (such as CRM
data) needs to be incorporated.
4.13 IAAS
Infrastructure as a service offers a standardized way of acquiring computing capabilities on
demand and over the web. Such resources include storage facilities, networks, processing
power, and virtual private servers. These are charged under a “pay as you go” model where
you are billed by factors such as how much storage you use or the amount of processing
power you consume over a certain timespan. In this service model, customers do not need to
manage the underlying hardware; the provider is responsible for its capacity and availability.
According to Gartner, this service model is forecasted to grow by 35.9% in
2018[2]. IaaS services offered today include Google Cloud Platform and Amazon EC2. This
is the most common service model of cloud computing as it offers the fundamental
infrastructure of virtual servers, network, operating systems and data storage drives. It allows
for the flexibility, reliability and scalability that many businesses seek with the cloud and
removes the need for hardware in the office. This makes it ideal for small and medium sized
businesses that do not want to invest in their own physical infrastructure.
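The metered "pay as you go" billing described above can be illustrated with a toy calculation. The per-unit rates below are invented purely for illustration and do not correspond to any real provider's pricing:

```python
def iaas_bill(storage_gb: float, compute_hours: float,
              storage_rate: float = 0.02,       # hypothetical $ per GB-month
              compute_rate: float = 0.05) -> float:  # hypothetical $ per hour
    """Return a monthly charge from metered storage and compute usage."""
    return storage_gb * storage_rate + compute_hours * compute_rate

# Example: 500 GB stored plus one virtual server running all month (720 hours):
print(f"Monthly bill: ${iaas_bill(500, 720):.2f}")
```

The point of the sketch is the billing shape, not the numbers: charges scale linearly with each metered resource, so customers pay only for what they consume rather than for idle capacity.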
Cloud computing has been around for quite some time now; however, it will continue to
evolve as faster and more reliable networks offer increased benefits to service providers and
consumers alike. With these advancements, there are growing opportunities to develop new
cloud-based services.
Open Smartflex is the only holistic CIS solution that spans across the whole business
lifecycle of Smart Utilities and runs on any service model and any cloud provider. It has a
Customer Information System (CIS) at its core and has been extended with superior
capabilities in four dimensions: on the metering side, with Meter Data Management (MDM)
features; on the customer side with Customer relationship management (CRM) with digital
customer engagement features such as self-service portal; on the field dimension with Mobile
Workforce Management features; and, finally, with the Analytics dimension, all of them
streamlined for mobility. Cloud computing is a broad term which refers to a collection of
services that offer businesses a cost-effective solution to increase their IT capacity and
functionality.
Depending on their specific requirements, businesses can choose where, when and how they
use cloud computing services.
Below we explore the different types of cloud computing, including the three main
deployment models.
Businesses can choose to run applications on public, private or hybrid clouds, depending on
their needs.
4.21 Public Cloud
A public cloud environment is owned by an outsourced cloud provider and is accessible to
many businesses through the internet on a pay-per-use model. This deployment model
provides services and infrastructure to businesses who want to save money on IT operational
costs, but it’s the cloud provider who is responsible for the creation and maintenance of the
resources.
Public clouds are ideal for small and medium sized businesses with a tight budget requiring a
quick and easy platform on which to deploy IT resources. Pros of a public cloud:
• Easy scalability
• No geographical restrictions
• Cost effective
• Highly reliable
• Easy to manage
4.22 Private Cloud
This cloud deployment model is a bespoke infrastructure owned by a single business. It offers
a more controlled environment in which access to IT resources is more centralised within the
business. This model can be externally hosted or can be managed in-house. Although private
cloud hosting can be expensive, for larger businesses it can offer a higher level of security
and more autonomy to customise the storage, networking and compute components to suit
its specific needs.
4.23 Hybrid Cloud
For businesses seeking the benefits of both private and public cloud deployment models, a
hybrid cloud environment is a good option. By combining the two models, a hybrid cloud
model provides a more tailored IT solution that meets specific business requirements. Pros of
a hybrid cloud:
• Cost effective
• Enhanced security
Cons of a hybrid cloud
• Network-level communication can be more complex, since traffic moves between both
private and public clouds.