
FEDERAL UNIVERSITY OF TECHNOLOGY, OWERRI

P.M.B 1526 OWERRI, IMO STATE.

A TERM PAPER ON:

THE SCALES OF INTEGRATION IN SEMICONDUCTOR DIGITAL IC

MOORE’S LAW AND ITS MODERN APPLICATION IN COMPUTER

INDUSTRY

CRYOGENIC TECHNOLOGY (COOLING TECH)

THE COMPONENTS OF CLOUD COMPUTING SERVICE MODELS AND

CLOUD DELIVERY MODELS.

WRITTEN BY

MBEBIE ELVIS CHIEMELIE

20151012336T

MECHATRONICS ENGINEERING DEPARTMENT

SCHOOL OF ELECTRICAL SYSTEMS AND ENGINEERING

TECHNOLOGY

SUBMITTED TO

DR OKAFOR C. KENNEDY

NOVEMBER, 2021

1.0 THE SCALES OF INTEGRATION IN SEMICONDUCTOR DIGITAL IC
An integrated circuit or monolithic integrated circuit (also referred to as an IC, a chip, or

a microchip) is a set of electronic circuits on one small flat piece (or "chip")

of semiconductor material, usually silicon. Large numbers of tiny MOSFETs (metal–oxide–

semiconductor field-effect transistors) integrate into a small chip. This results in circuits that

are orders of magnitude smaller, faster, and less expensive than those constructed of

discrete electronic components. The IC's mass production capability, reliability, and building-

block approach to integrated circuit design have ensured the rapid adoption of standardized ICs

in place of designs using discrete transistors. ICs are now used in virtually all electronic

equipment and have revolutionized the world of electronics. Computers, mobile phones, and

other digital home appliances are now inextricable parts of the structure of modern societies,

made possible by the small size and low cost of ICs such as modern computer

processors and microcontrollers.

Integrated circuits were made practical by technological advancements in metal–oxide–

silicon (MOS) semiconductor device fabrication. Since their origins in the 1960s, the size,

speed, and capacity of chips have progressed enormously, driven by technical advances that

fit more and more MOS transistors on chips of the same size – a modern chip may have many

billions of MOS transistors in an area the size of a human fingernail. These advances, roughly

following Moore's law, mean that today's computer chips possess millions of times the capacity

and thousands of times the speed of the computer chips of the early 1970s.

ICs have two main advantages over discrete circuits: cost and performance. Cost is low

because the chips, with all their components, are printed as a unit by photolithography rather

than being constructed one transistor at a time. Furthermore, packaged ICs use much less

material than discrete circuits. Performance is high because the IC's components switch

quickly and consume comparatively little power because of their small size and proximity.

The main disadvantage of ICs is the high cost to design them and fabricate the

required photomasks. This high initial cost means ICs are only commercially viable

when high production volumes are anticipated.

1.1 Digital circuits


Traditional small-scale integration (SSI) and medium-scale integration (MSI) logic circuits – originally supplied in 0.3 in.

width packages with up to 16 (later, 18, 22 or more) pins – have long ago migrated to the SO

package and even smaller packs. LSI devices with up to 64 or 68 pins came in 0.6 in. wide

packs, but then migrated to a variety of package types, including leaded and leadless chip

carriers, J lead packs, pin grid arrays, etc., with the latest development being ball grid arrays.

But processors, DSP chips and the like tend to require so many lead outs that they hardly

come under the heading of tiny devices, even though they are truly small considering the number of

pins. In addition to processors, DSP chips, etc., package types with a large number of pins are

also used for custom- and semi-custom logic devices, and programmable arrays of various

types. These enable all the logic functions associated with a product to be swept up into a

single device, reducing the size and cost of products which are produced in huge quantities.

But this approach is not without its drawbacks, often leading to practical difficulties at the

layout stage. For example, on a ‘busy’ densely packed board, the odd logic function such as

an inverter, AND gate or whatever, may be required at the opposite end of the board from

that at which the huge do-it-all logic package is situated.

1.2 TYPES
Integrated circuits can be classified into analog, digital and mixed-signal, the last consisting of analog

and digital signalling on the same IC.

Digital integrated circuits can contain billions of logic gates, flip-flops, multiplexers, and

other circuits in a few square millimetres. The small size of these circuits allows high speed,

low power dissipation, and reduced manufacturing cost compared with board-level

integration. These digital ICs, typically microprocessors, DSPs, and microcontrollers,

use Boolean algebra to process "one" and "zero" signals.
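
To make the idea of processing "one" and "zero" signals with Boolean algebra concrete, here is a minimal illustrative sketch (our own Python example, not the design of any particular IC) of a half-adder, one of the simplest digital building blocks, composed from two primitive gates:

    # Minimal sketch of Boolean "one"/"zero" processing: a half-adder
    # built from an XOR gate (sum bit) and an AND gate (carry bit).
    def and_gate(a: int, b: int) -> int:
        return a & b  # 1 only when both inputs are 1

    def xor_gate(a: int, b: int) -> int:
        return a ^ b  # 1 when exactly one input is 1

    def half_adder(a: int, b: int) -> tuple[int, int]:
        """Add two 1-bit values, returning (sum, carry)."""
        return xor_gate(a, b), and_gate(a, b)

    for a in (0, 1):
        for b in (0, 1):
            s, c = half_adder(a, b)
            print(f"{a} + {b} -> sum={s}, carry={c}")

A digital IC realizes the same truth table in hardware, with billions of such gates operating in parallel.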

[Figure: the die from an Intel 8742, an 8-bit NMOS microcontroller that includes a CPU running at 12 MHz, 128 bytes of RAM, 2048 bytes of EPROM, and I/O on the same chip]
Among the most advanced integrated circuits are the microprocessors or "cores", used in

personal computers, cell-phones, microwave ovens, etc. Digital memory

chips and application-specific integrated circuits (ASICs) are examples of other families of

integrated circuits.

In the 1980s, programmable logic devices were developed. These devices contain circuits

whose logical function and connectivity can be programmed by the user, rather than being

fixed by the integrated circuit manufacturer. This allows a chip to be programmed to do

various LSI-type functions such as logic gates, adders and registers. Programmability comes

in various forms – devices that can be programmed only once, devices that can be erased and

then re-programmed using UV light, devices that can be (re)programmed using flash

memory, and field-programmable gate arrays (FPGAs) which can be programmed at any

time, including during operation. Current FPGAs can (as of 2016) implement the equivalent

of millions of gates and operate at frequencies up to 1 GHz.

Analog ICs, such as sensors, power management circuits, and operational amplifiers (op-

amps), process continuous signals, and perform analog functions such as amplification, active

filtering, demodulation, and mixing.

ICs can combine analog and digital circuits on a chip to create functions such as analog-to-

digital converters and digital-to-analog converters. Such mixed-signal circuits offer smaller

size and lower cost but must account for signal interference. Prior to the late

1990s, radios could not be fabricated in the same low-cost CMOS processes as

microprocessors. But since 1998, radio chips have been developed using RF

CMOS processes. Examples include Intel's DECT cordless phone, or 802.11 (Wi-Fi) chips

created by Atheros and other companies.
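
As a rough illustration of what the analog-to-digital converters mentioned above do, the following sketch (our own simplified model, not any vendor's design) samples a continuous signal at discrete times and quantizes each sample to an n-bit code:

    # Simplified model of an ADC: sample a continuous signal and quantize
    # each sample to one of 2**n_bits levels within the reference range.
    import math

    def adc_sample(signal, n_bits: int, v_ref: float, times) -> list[int]:
        levels = 2 ** n_bits
        codes = []
        for t in times:
            v = min(max(signal(t), 0.0), v_ref)  # clip to the input range
            codes.append(min(int(v / v_ref * levels), levels - 1))
        return codes

    # Example: an 8-bit ADC sampling a 1 kHz sine (0 V to 3.3 V) at 8 kHz
    sine = lambda t: 1.65 + 1.65 * math.sin(2 * math.pi * 1000 * t)
    print(adc_sample(sine, n_bits=8, v_ref=3.3, times=[i / 8000 for i in range(8)]))

A digital-to-analog converter performs the inverse mapping, turning each code back into a voltage level.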

Modern electronic component distributors often further sub-categorize integrated circuits:

• Digital ICs are categorized as logic ICs (such

as microprocessors and microcontrollers), memory chips (such as MOS

memory and floating-gate memory), interface ICs (level shifters, serializer/deserializer,

etc.), power management ICs, and programmable devices.

• Analog ICs are categorized as linear integrated circuits and RF circuits (radio

frequency circuits).

• Mixed-signal integrated circuits are categorized as data acquisition ICs (including A/D

converters, D/A converters, digital potentiometers), clock/timing ICs, switched

capacitor (SC) circuits, and RF CMOS circuits.

• Three-dimensional integrated circuits (3D ICs) are categorized into through-silicon

via (TSV) ICs and Cu-Cu connection ICs.

1.3 GENERATION
In the early days of simple integrated circuits, the technology's large scale limited each chip

to only a few transistors, and the low degree of integration meant the design process was

relatively simple. Manufacturing yields were also quite low by today's standards. As metal–

oxide–semiconductor (MOS) technology progressed, millions and then billions of MOS

transistors could be placed on one chip, and good designs required thorough planning, giving

rise to the field of electronic design automation, or EDA. Some SSI and MSI chips,

like discrete transistors, are still mass-produced, both to maintain old equipment and build

new devices that require only a few gates. The 7400 series of TTL chips, for example, has

become a de facto standard and remains in production.

Acronym   Name                            Year   Transistor count      Number of logic gates

SSI       small-scale integration         1964   1 to 10               1 to 12

MSI       medium-scale integration        1968   10 to 500             13 to 99

LSI       large-scale integration         1971   500 to 20 000         100 to 9 999

VLSI      very large-scale integration    1980   20 000 to 1 000 000   10 000 to 99 999

ULSI      ultra-large-scale integration   1984   1 000 000 and more    100 000 and more
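
As a quick worked illustration of these ranges (our own sketch, not part of the original classification), the following Python snippet maps a transistor count onto the scale names in the table above:

    # Illustrative sketch: classify a chip by transistor count using the
    # ranges from the table above (each scale's upper bound, inclusive).
    SCALES = [
        ("SSI", 10),
        ("MSI", 500),
        ("LSI", 20_000),
        ("VLSI", 1_000_000),
    ]

    def integration_scale(transistors: int) -> str:
        for name, upper_bound in SCALES:
            if transistors <= upper_bound:
                return name
        return "ULSI"  # 1 000 000 transistors and more

    print(integration_scale(120))    # MSI (e.g. the 120-transistor shift registers of 1964)
    print(integration_scale(4_000))  # LSI (e.g. the first microprocessors of the early 1970s)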

1.31 Small-scale integration (SSI)


The first integrated circuits contained only a few transistors. Early digital circuits containing

tens of transistors provided a few logic gates, and early linear ICs such as the Plessey SL201

or the Philips TAA320 had as few as two transistors. The number of transistors in an

integrated circuit has increased dramatically since then. The term "large scale integration"

(LSI) was first used by IBM scientist Rolf Landauer when describing the theoretical

concept; that term gave rise to the terms "small-scale integration" (SSI), "medium-scale

integration" (MSI), "very-large-scale integration" (VLSI), and "ultra-large-scale integration"

(ULSI). The early integrated circuits were SSI.

SSI circuits were crucial to early aerospace projects, and aerospace projects helped inspire

development of the technology. Both the Minuteman missile and Apollo program needed

lightweight digital computers for their inertial guidance systems. Although the Apollo

Guidance Computer led and motivated integrated-circuit technology, it was the Minuteman

missile that forced it into mass-production. The Minuteman missile program and various

other United States Navy programs accounted for the total $4 million integrated circuit

market in 1962, and by 1968, U.S. Government spending on space and defense still

accounted for 37% of the $312 million total production.

The demand by the U.S. Government supported the nascent integrated circuit market until

costs fell enough to allow IC firms to penetrate the industrial market and eventually

the consumer market. The average price per integrated circuit dropped from $50.00 in 1962

to $2.33 in 1968. Integrated circuits began to appear in consumer products by the turn of the

1970s. A typical application was FM inter-carrier sound processing in television

receivers.

The first MOS chips were small-scale integration (SSI)

chips. Following Mohamed M. Atalla's proposal of the MOS integrated circuit chip in 1960,

the earliest experimental MOS chip to be fabricated was a 16-transistor chip built by Fred

Heiman and Steven Hofstein at RCA in 1962. The first practical application of MOS SSI

chips was for NASA satellites.

1.32 Medium-scale integration (MSI)


The next step in the development of integrated circuits introduced devices which contained

hundreds of transistors on each chip, called "medium-scale integration" (MSI).

MOSFET scaling technology made it possible to build high-density chips. By 1964, MOS

chips had reached higher transistor density and lower manufacturing costs than bipolar chips.

In 1964, Frank Wanlass demonstrated a single-chip 16-bit shift register he designed, with a

then-incredible 120 MOS transistors on a single chip. The same year, General

Microelectronics introduced the first commercial MOS integrated circuit chip, consisting of

120 p-channel MOS transistors. It was a 20-bit shift register, developed by Robert Norman

and Frank Wanlass. MOS chips further increased in complexity at a rate predicted by Moore's

law, leading to chips with hundreds of MOSFETs on a chip by the late 1960s.

1.33 Large-scale integration (LSI)


Further development, driven by the same MOSFET scaling technology and economic factors,

led to "large-scale integration" (LSI) by the mid-1970s, with tens of thousands of transistors

per chip.

The masks used to process and manufacture SSI, MSI and early LSI and VLSI devices (such

as the microprocessors of the early 1970s) were mostly created by hand, often

using Rubylith-tape or similar. For large or complex ICs (such as memories or processors),

this was often done by specially hired professionals in charge of circuit layout, placed under

the supervision of a team of engineers, who would also, along with the circuit designers,

inspect and verify the correctness and completeness of each mask.

Integrated circuits such as 1K-bit RAMs, calculator chips, and the first microprocessors, which

began to be manufactured in moderate quantities in the early 1970s, had under 4,000

transistors. True LSI circuits, approaching 10,000 transistors, began to be produced around

1974, for computer main memories and second-generation microprocessors.

1.34 Very-large-scale integration (VLSI)

Development continued into "very-large-scale integration" (VLSI), which, per the table above,

describes chips of roughly 20 000 to 1 000 000 transistors, the level reached by the memory

chips and microprocessors of the 1980s.

1.35 ULSI, WSI, SoC and 3D-IC
To reflect the further growth in complexity, the term ULSI, which stands for "ultra-large-scale

integration", was proposed for chips of more than 1 million transistors.[93]

Wafer-scale integration (WSI) is a means of building very large integrated circuits that uses

an entire silicon wafer to produce a single "super-chip". Through a combination of large size

and reduced packaging, WSI could lead to dramatically reduced costs for some systems,

notably massively parallel supercomputers. The name is taken from the term Very-Large-

Scale Integration, the current state of the art when WSI was being developed.[94]

A system-on-a-chip (SoC or SOC) is an integrated circuit in which all the components needed

for a computer or other system are included on a single chip. The design of such a device can

be complex and costly, and whilst performance benefits can be had from integrating all

needed components on one die, the cost of licensing and developing a one-die machine still

outweighs that of having separate devices. With appropriate licensing, these drawbacks are offset by

lower manufacturing and assembly costs and by a greatly reduced power budget: because

signals among the components are kept on-die, much less power is required (see Packaging).

Further, signal sources and destinations are physically closer on die, reducing the length of

wiring and therefore latency, transmission power costs and waste heat from communication

between modules on the same chip. This has led to an exploration of so-called Network-on-

Chip (NoC) devices, which apply system-on-chip design methodologies to digital

communication networks as opposed to traditional bus architectures.

A three-dimensional integrated circuit (3D-IC) has two or more layers of active electronic

components that are integrated both vertically and horizontally into a single circuit.

Communication between layers uses on-die signalling, so power consumption is much lower

than in equivalent separate circuits. Judicious use of short vertical wires can substantially

reduce overall wire length for faster operation.

1.4 ICs and IC Families

• The 555 timer IC

• The Operational amplifier

• 7400-series integrated circuits

• 4000-series integrated circuits, the CMOS counterpart to the 7400 series (see

also: 74HC00 series)

• Intel 4004, generally regarded as the first commercially available microprocessor, which

led to the famous 8080 CPU and then the IBM PC's 8088, 80286, 486 etc.

• The MOS Technology 6502 and Zilog Z80 microprocessors, used in many home

computers of the early 1980s

• The Motorola 6800 series of computer-related chips, leading to

the 68000 and 88000 series (used in some Apple computers and in the 1980s

Commodore Amiga series)

• The LM-series of analog integrated circuits.

2.0 MOORE’S LAW AND ITS APPLICATION IN MODERN COMPUTING
Moore’s law was an innovative observation first made by Gordon Moore, co-founder of

Intel, in 1965, and it has proven to be surprisingly accurate over time. In its

original form, Moore's law predicted that the number of transistors in a chip would improve

by a factor of two every eighteen months. But the concepts behind Moore’s law can be

applied more broadly to innovation and technology in general. We have seen Moore’s

influence in a number of verticals already. In its generalized form, the rate described in

Moore’s law depends on the specific vertical of interest. In optical networking for example,

the rate of innovation is closer to ten times that of transistors on a chip, and in certain speech

recognition technologies, the improvements are at least five times that of Moore’s law for

electronics.

Apple’s Siri and Amazon’s Alexa are prime examples of speech technology innovations.

Both Google and Facebook are also actively working on their own speech recognition

solutions. Apple is rumored to be designing new hardware to cram into their new smartphone

with applications not only in speech recognition but also artificial intelligence (AI), machine

learning (ML), and the Internet of Everything (IoE).

Moore's law has been around for more than five decades and is still the de facto standard by which we

measure technological advances over time. Microscopic building blocks continue to shrink in

an effort to meet the demands of new, data-intensive computing applications ranging from

artificial intelligence to autonomous vehicles and even to virtual and augmented reality. It

was not too long ago when integrated circuits measured transistor counts in the hundreds of

thousands. A state-of-the-art chip might have measured in millions – today they measure in

the billions and Moore’s law predicts this trend will only continue.

2.1 MOORE’S LAW PROGRESS
There is, of course, a theoretical limit to how small we can shrink a transistor – that limit is

thought to be 5 nm. At that point, we will be forced to move to quantum computing and

Moore’s law will cease to hold true. But that time has not yet arrived. Many researchers are

working on the quantum computing model, not least of which is Peter Shor at MIT. But until

quantum computing evolves beyond its current state of infancy and becomes a viable solution,

Moore’s law will continue to hold as we seek to squeeze as much juice out of our current

process as possible – Intel continues to be the leader in this space. It is only through this

relentless pursuit that we can power the computational needs of 21st-century applications

such as VR/AR and robotics.

2.2 MOORE’S LAW APPLICATION

Just as we can apply the generalized form of Moore’s law to areas such as speech recognition

and optical networking, we can also model the rate of growth in memory, hard disk space,

and network bandwidth. Consider the prototypical personal computer of the 1980s. We all

remember with excitement the blazing speed of that 1.2 kbps modem, the 256 KB of RAM and

the near-endless supply of disk space on our 10 MB hard drive. Let’s assume this is our

baseline measurement for a personal computer. Now, let's apply Moore's law in its

generalized form, F = P × 2^(x/18), where x is the number of months elapsed and P is the

present device specification. Specifically, let's model out the advances in RAM over a

20-year period (x = 240 months). Starting from P = 256 KB, we have F = 256 KB ×

2^(240/18) ≈ 2.5 GB. Thinking back to the personal computers

of the 2000s, it's clear that this is a remarkably accurate prediction!
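
Expressed in code, this is a one-line model; the sketch below (a minimal illustration, assuming the 18-month doubling period quoted earlier) reproduces the RAM calculation:

    # Generalized Moore's-law projection: F = P * 2**(x / doubling_months),
    # assuming the 18-month doubling period used in the text.
    def moore_projection(present_value: float, months: float,
                         doubling_months: float = 18.0) -> float:
        """Project a device specification `months` into the future."""
        return present_value * 2 ** (months / doubling_months)

    # RAM example from the text: 256 KB baseline projected over 20 years.
    ram_kb = moore_projection(256, months=240)
    print(f"Projected RAM: {ram_kb / 1024 ** 2:.1f} GB")  # about 2.5 GB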

2.3 CURRENT MODEL

Laptop and PC manufacturers are offering products in the range quite close to what our

Moore's law model predicts. My own laptop was configured with a 1 TB SSD for disk space

and 32 GB of memory (RAM), and it simply goes to show that we are steadily following the

same trajectory that Moore predicted more than five decades ago. I am convinced Moore's law can be

applied to smartphone technology today and can be used to predict smartphone performance

into the future. The pace at which Apple and Samsung continue to improve their respective

smartphones falls comfortably in line with Moore’s predictions.

Today, cable companies are offering download speeds of 300 Mbps, and they are not too far

off from providing 1.2 Gbps on cable infrastructure. Verizon FiOS promises even faster

speeds than those of traditional cable offerings. In many ways, I believe that bandwidth is the

technological battleground of the 21st century.

2.4 CONCLUSION
Moore’s law has proved to be a remarkably accurate tool to predict technological advances

throughout the years. The law predicts that by the year 2020, bandwidth will be available at

rates of close to 1.2 Gbps. Bandwidth will be of the utmost importance as Japan hosts the 2020

summer Tokyo Olympics. Japan is expected to stream the games at a resolution of 8K,

while US-based content providers struggle to provide 4K content. In fact, even 1080i/p

content is only just now becoming attainable by many in the country. This goes to show how

far behind the US is with respect to its bandwidth delivery capabilities. Indeed, although it is

certainly possible in certain areas of the country to achieve the speeds that Moore’s law

predicted we would by now, the vast majority of the country has no such option. The US is

not alone in its sluggish delivery capabilities, however – Australia ranks among the slowest

providers in the developed world. Such countries should look to Asian countries like South

Korea and Japan, which boast some of the fastest speeds in the world. With modern advances

such as CMTS 4.0 it is very feasible that cable companies could provide bandwidth far

greater than 1.2 Gbps, supporting 4K and even 8K video at extremely high QoS for their

customers. In my opinion, bandwidth will be the battleground of the 21st century. We are

beginning to see this play out with the expansion of fibre deployments and M&A deals

between telecommunications and cable companies. As we look towards 2020 we can at least

be comforted by the knowledge that Moore's law has held true for more than five decades and

is likely to hold true for the foreseeable future.

3.0 CRYOGENIC TECHNOLOGY (COOLING TECH)

3.1 What does Cryogenic mean?

Cryogenic means low temperature. The word itself refers to the technology of sub-zero

temperatures. Cryogenic engines use liquid oxygen as the oxidizer and liquid hydrogen as the

fuel. As is known, oxygen can be kept in the liquid state below −183 degrees Celsius,

while hydrogen requires temperatures below −253 degrees Celsius to be in liquid form. Since

liquid oxygen is an extremely reactive oxidizer, this combination can be used as a propellant to carry

heavy loads.
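
As a small worked example (our own illustrative sketch, using the boiling points quoted above and the conventional −150 °C definition of the cryogenic range discussed below), the snippet checks whether a given temperature is cryogenic and whether oxygen or hydrogen would be liquid at it:

    # Illustrative check against the boiling points quoted above. A substance
    # is liquid below its boiling point (ignoring freezing, for simplicity),
    # and a temperature is conventionally "cryogenic" below -150 deg C.
    BOILING_POINT_C = {"oxygen": -183.0, "hydrogen": -253.0}
    CRYOGENIC_LIMIT_C = -150.0

    def is_liquid(substance: str, temp_c: float) -> bool:
        return temp_c < BOILING_POINT_C[substance]

    def is_cryogenic(temp_c: float) -> bool:
        return temp_c < CRYOGENIC_LIMIT_C

    for t in (-190.0, -260.0):
        print(f"{t} C: cryogenic={is_cryogenic(t)}, "
              f"O2 liquid={is_liquid('oxygen', t)}, "
              f"H2 liquid={is_liquid('hydrogen', t)}")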

3.2 Fundamentals of Cryogenic Technology

Cryogenics is the study of the production and behaviour of materials at extremely low

temperatures, that is, below −150 degrees Celsius. It is useful for launching payloads into

space, storing medicines and drugs at low temperatures, etc. It is used in the last stage of

space launch vehicles. The cryogenic stage of such a vehicle is a liquid-propellant stage

operating at extremely low temperatures.

3.3 IMPORTANCE OF CRYOGENIC TECHNOLOGY


Listed below are a few points that elaborate on the need for cryogenic technology and its

advantages.

1. Crucial for the advancement of the Space programme – Cryogenic Engine is used by ISRO

for its GSLV Programme

2. Lighter weight – high energy is released per unit mass of propellant, which makes it economical

3. Missile programme for defence – cryogenic technology is useful for the development of

futuristic rocket engines

4. Clean technology – cryogenic technology uses hydrogen and oxygen as propellants and releases

water as a by-product. This is one of its greatest achievements as no pollution is caused by its

use

5. India rises as a space power – earlier, other countries refused to share this technology with

India. Only the US, Japan, France, Russia and China had it. Now India stands

shoulder to shoulder with them

6. Other uses of cryogenic technology are in blood banks, food storage, etc.

3.4 CONCLUSION:
Cryogenic Technology and Applications describes the need for smaller cryo-coolers as a

result of the advances in the miniaturization of electrical and optical devices and the need for

cooling and conducting efficiency. Cryogenic technology deals with materials at low

temperatures and the physics of their behaviour at these temperatures. The book demonstrates the

ongoing new applications being discovered for cryo-cooled electrical and optical sensors and

devices, with particular emphasis on high-end commercial applications in medical and

scientific fields as well as in the aerospace and military industries. This book summarizes the

important aspects of cryogenic technology critical to the design and development of

refrigerators, cryo-coolers, and micro-coolers needed by various commercial, industrial,

space and military systems. Cryogenic cooling plays an important role in unmanned aerial

vehicle systems, infrared search and track sensors, missile warning receivers, satellite

tracking systems, and a host of other commercial and military systems. Cryogenic liquids fall

into two groups. Inert gases: examples of this group are nitrogen, helium, neon, argon and

krypton. Flammable gases: some cryogenic liquids produce a gas that can burn in air; the most

common examples are hydrogen, methane and liquefied natural gas.

4.0 THE COMPONENTS OF CLOUD COMPUTING SERVICE MODELS AND
CLOUD DELIVERY MODELS
Companies are experiencing an unprecedented burden on their IT infrastructure as they

struggle to meet growing customer expectations for fast, reliable, and secure services. As they

try to increase the processing power and storage capabilities of their IT systems, often these

companies find that the development and maintenance of a robust, scalable, and secure IT

infrastructure is prohibitively expensive.

Luckily, there is another option; instead of acquiring extra hardware, your company can

embrace cloud computing. Cloud computing is a rapidly-growing industry which allows

companies to move beyond on-premise IT infrastructure and, instead, rely on internet-based

services. Cloud-based providers often offer services such as software, storage, and processing

at affordable prices. In fact, your company can save up to 30% by implementing a cloud-

based solution.

Cloud computing is offered in three different service models which each satisfy a unique set

of business requirements. These three models are known as Software as a Service (SaaS),

Platform as a Service (PaaS), and Infrastructure as a Service (IaaS).

4.1 CLOUD SERVICES


4.11 SAAS
Software as a Service offers applications that are accessed over the web and are not managed

by your company, but by the software provider. This relieves your organization from the

constant pressure of software maintenance, infrastructure management, network security, data

availability, and all the other operational issues involved with keeping applications up and

running. SaaS billing is typically based on factors such as number of users, usage time,

amount of data stored, and number of transactions processed. This service model has the

largest market share in cloud computing; according to Gartner, its sales will reach 117 billion

USD by the year 2021. Current applications for SaaS include Field Service solutions, system

monitoring solutions, schedulers and more. This cloud computing solution involves the

deployment of software over the internet to various businesses who pay via subscription or a

pay-per-use model. It is a valuable tool for CRM and for applications that need a lot of web

or mobile access – such as mobile sales management software. SaaS is managed from a

central location so businesses don’t have to worry about maintaining it themselves and is

ideal for short-term projects.
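
To illustrate how such usage-based billing might be computed, here is a hypothetical sketch; the rates and factor names are invented for illustration, and real providers publish their own pricing:

    # Hypothetical SaaS billing sketch based on the factors listed above:
    # number of users, usage time, data stored, and transactions processed.
    # All rates below are invented purely for illustration.
    def saas_monthly_bill(users: int, hours: float, data_gb: float,
                          transactions: int) -> float:
        PER_USER = 15.00   # flat fee per named user
        PER_HOUR = 0.05    # metered usage time
        PER_GB = 0.10      # data stored
        PER_1K_TX = 0.25   # per thousand transactions processed
        return (users * PER_USER + hours * PER_HOUR
                + data_gb * PER_GB + (transactions / 1000) * PER_1K_TX)

    bill = saas_monthly_bill(users=20, hours=640, data_gb=500, transactions=80_000)
    print(f"Monthly bill: ${bill:.2f}")  # -> $402.00 under these invented rates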

4.12 PAAS
Platform as a Service is halfway between Infrastructure as a Service (IaaS) and Software as a

Service (SaaS). It offers access to a cloud-based environment in which users can build and

deliver applications without the need to install and work with IDEs (Integrated

Development Environments), which are often very expensive. Additionally, users can often

customize the features they want included with their subscription. According to Gartner, PaaS

has the smallest market share of the three service models, with a projected revenue of 27

billion USD by the year 2021[2]. In today’s market, PaaS providers offer applications such as

Microsoft Azure (also IaaS), Google App Engine, and Apache Stratos. This is where cloud

computing providers deploy the infrastructure and software framework, but businesses can

develop and run their own applications. Web applications can be created quickly and easily

via PaaS, and the service is flexible and robust enough to support them. PaaS solutions are

scalable and ideal for business environments where multiple developers are working on a

single project. It is also handy for situations where an existing data source (such as a CRM

tool) needs to be leveraged.

4.13 IAAS
Infrastructure as a service offers a standardized way of acquiring computing capabilities on

demand and over the web. Such resources include storage facilities, networks, processing

power, and virtual private servers. These are charged under a “pay as you go” model where

you are billed by factors such as how much storage you use or the amount of processing

power you consume over a certain timespan. In this service model, customers do not need to

manage the infrastructure; it is up to the provider to guarantee the contracted amount of resources

and availability. According to Gartner, this service model is forecasted to grow by 35.9% in

2018[2]. IaaS services offered today include Google Cloud Platform and Amazon EC2. This

is the most common service model of cloud computing as it offers the fundamental

infrastructure of virtual servers, network, operating systems and data storage drives. It allows

for the flexibility, reliability and scalability that many businesses seek with the cloud and

removes the need for hardware in the office. This makes it ideal for small and medium sized

organisations looking for a cost-effective IT solution to support business growth. IaaS is a

fully outsourced pay-for-use service and is available as a public, private or hybrid

infrastructure.
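
To give a feel for acquiring computing capabilities "on demand and over the web", here is a hypothetical sketch of provisioning a virtual server through a cloud provider's REST API. The endpoint, request fields, and token are invented for illustration; real providers such as Amazon EC2 and Google Cloud Platform each expose their own APIs and SDKs:

    # Hypothetical sketch of on-demand IaaS provisioning over HTTP.
    # The URL, JSON fields and token below are invented for illustration.
    import requests

    def provision_server(cpu_cores: int, ram_gb: int, disk_gb: int) -> str:
        resp = requests.post(
            "https://api.example-cloud.com/v1/servers",  # hypothetical endpoint
            headers={"Authorization": "Bearer <API-TOKEN>"},
            json={"cpu_cores": cpu_cores, "ram_gb": ram_gb, "disk_gb": disk_gb},
            timeout=30,
        )
        resp.raise_for_status()
        # Under pay-as-you-go, billing then accrues per hour and per GB used.
        return resp.json()["server_id"]

    # server_id = provision_server(cpu_cores=4, ram_gb=16, disk_gb=100)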

Cloud computing has been around for quite some time now; however, it will continue to

evolve as faster and more reliable networks offer increased benefits to service providers and

consumers alike. With these advancements, there are growing opportunities to develop

business models in an increasingly-connected economy.

Open Smartflex is the only holistic CIS solution that spans the whole business

lifecycle of Smart Utilities and runs on any service model and any cloud provider. It has a

Customer Information System (CIS) at its core and has been extended with superior

capabilities in four dimensions: on the metering side, with Meter Data Management (MDM)

features; on the customer side with Customer relationship management (CRM) with digital

customer engagement features such as a self-service portal; on the field dimension with Mobile

Workforce Management features; and, finally, with the Analytics dimension, all of them

streamlined for mobility. Cloud computing is a broad term which refers to a collection of

services that offer businesses a cost-effective solution to increase their IT capacity and

functionality.

Depending on their specific requirements, businesses can choose where, when and how they

use cloud computing to ensure an efficient and reliable IT solution.

Below we explore the different types of cloud computing, including the three main

deployment models.

4.2 CLOUD DEPLOYMENT MODELS


There are three main types of cloud environment, also known as cloud deployment models.

Businesses can choose to run applications on public, private or hybrid clouds – depending on

their specific requirements.

4.21 Public Cloud
A public cloud environment is owned by an outsourced cloud provider and is accessible to

many businesses through the internet on a pay-per-use model. This deployment model

provides services and infrastructure to businesses who want to save money on IT operational

costs, but it’s the cloud provider who is responsible for the creation and maintenance of the

resources.

Public clouds are ideal for small and medium sized businesses with a tight budget requiring a

quick and easy platform in which to deploy IT resources.

Pros of a public cloud

• Easy scalability

• No geographical restrictions

• Cost effective

• Highly reliable

• Easy to manage

Cons of a public cloud

• Not considered the safest option for sensitive data

4.22 Private Cloud
This cloud deployment model is a bespoke infrastructure owned by a single business. It offers

a more controlled environment in which access to IT resources is more centralised within the

business. This model can be externally hosted or can be managed in-house. Although private

cloud hosting can be expensive, for larger businesses it can offer a higher level of security

and more autonomy to customise the storage, networking and compute components to

suit their IT requirements.

Pros of a private cloud

• Improved level of security


• Greater control over the server
• Customisable

Cons of a private cloud

• Harder to access data from remote locations


• Requires IT expertise

4.23 Hybrid Cloud
For businesses seeking the benefits of both private and public cloud deployment models, a

hybrid cloud environment is a good option. By combining the two models, a hybrid cloud

model provides a more tailored IT solution that meets specific business requirements.

Pros of a hybrid cloud

• Highly flexible and scalable

• Cost effective

• Enhanced security

Cons of a hybrid cloud

• Network-level communication can be more complex, since traffic must span both

private and public clouds.

