A BRIEF COMPUTER HISTORY

The computer as we know it today had its beginning with a 19th-century English
mathematics professor named Charles Babbage.
He designed the Analytical Engine, and it is on this design that the basic
framework of today's computers is based.

Generally speaking, computers can be classified into three generations. Each
generation lasted for a certain period of time, and each gave us either a new and
improved computer or an improvement to the existing computer.

First generation: 1937 – 1946 - In 1937 the first electronic digital computer was
built by Dr. John V. Atanasoff and Clifford Berry. It was called the Atanasoff-Berry
Computer (ABC). In 1943 an electronic computer named the Colossus was built for
the military. Other developments continued until, in 1946, the first general-purpose
digital computer, the Electronic Numerical Integrator and Computer (ENIAC), was
built. It is said that this computer weighed 30 tons and had 18,000 vacuum tubes,
which were used for processing. When this computer was turned on for the first
time, lights dimmed in sections of Philadelphia. Computers of this generation could
only perform a single task, and they had no operating system.

Second generation: 1947 – 1962 - This generation of computers used transistors
instead of vacuum tubes, which made them more reliable. In 1951 the first computer
for commercial use was introduced to the public: the Universal Automatic Computer
(UNIVAC 1). In 1953 the International Business Machine (IBM) 650 and 700 series
computers made their mark in the computer world. During this generation over 100
computer programming languages were developed, and computers gained memory
and operating systems. Storage media such as tape and disk were in use, as were
printers for output.

Third generation: 1963 - present - The invention of the integrated circuit brought us
the third generation of computers. With this invention computers became smaller,
more powerful, and more reliable, and they are able to run many different programs
at the same time. In 1980 the Microsoft Disk Operating System (MS-DOS) was born,
and in 1981 IBM introduced the personal computer (PC) for home and office use.
Three years later Apple gave us the Macintosh computer with its icon-driven
interface, and the 1990s gave us the Windows operating system.

As a result of the various improvements to the development of the computer, we
have seen the computer being used in all areas of life. It is a very useful tool that
will continue to experience new development as time passes.

Classification of Computers | Type of Computer

Computers differ based on their data processing abilities. They are classified
according to purpose, data handling and functionality.

According to purpose, computers are either general purpose or specific
purpose. General purpose computers are designed to perform a range of
tasks. They have the ability to store numerous programs, but lack speed and
efficiency. Specific purpose computers are designed to handle a specific problem
or to perform a specific task. A set of instructions is built into the machine.

According to data handling, computers are analog, digital or hybrid. Analog
computers work on the principle of measuring, in which the measurements
obtained are translated into data. Modern analog computers usually employ
electrical parameters, such as voltages, resistances or currents, to represent the
quantities being manipulated. Such computers do not deal directly with
numbers; they measure continuous physical magnitudes. Digital computers are
those that operate with information, numerical or otherwise, represented in a
digital form. Such computers process data as digital values (0s and 1s). They
give results with more accuracy and at a faster rate. Hybrid
computers incorporate the measuring feature of an analog computer and the
counting feature of a digital computer. For computational purposes, these
computers use analog components, and for storage, digital memories are used.

According to functionality, computers are classified as:

Analog Computer

An analog computer (spelt analogue in British English) is a form of computer that
uses continuous physical phenomena such as electrical, mechanical, or hydraulic
quantities to model the problem being solved.

Digital Computer

A computer that performs calculations and logical operations with quantities
represented as digits, usually in the binary number system.
Hybrid Computer (Analog + Digital)

A computer capable of inputting and outputting both digital and analog
signals. A hybrid computer system setup offers a cost-effective method of
performing complex simulations.

On the basis of Size: Type of Computer

Super Computer

The fastest and most powerful type of computer. Supercomputers are very
expensive and are employed for specialized applications that require immense
amounts of mathematical calculation. For example, weather forecasting requires
a supercomputer. Other uses of supercomputers include animated graphics, fluid
dynamics calculations, nuclear energy research, and petroleum exploration.

The chief difference between a supercomputer and a mainframe is that a
supercomputer channels all its power into executing a few programs as fast as
possible, whereas a mainframe uses its power to execute many programs
concurrently.

Mainframe Computer

A very large and expensive computer capable of supporting hundreds, or even
thousands, of users simultaneously. In the hierarchy that starts with a
simple microprocessor (in watches, for example) at the bottom and moves to
supercomputers at the top, mainframes are just below supercomputers. In some
ways, mainframes are more powerful than supercomputers because they support
more simultaneous programs. But supercomputers can execute a single program
faster than a mainframe.

Mini Computer

A midsized computer. In size and power, minicomputers lie
between workstations and mainframes. In the past decade, the distinction
between large minicomputers and small mainframes has blurred, however, as has
the distinction between small minicomputers and workstations. But in general, a
minicomputer is a multiprocessing system capable of supporting from 4 to about
200 users simultaneously.
Micro Computer or Personal Computer

• Desktop Computer: a personal or micro-mini computer small enough to fit on a desk.

• Laptop Computer: a portable computer complete with an integrated screen and
keyboard. It is generally smaller in size than a desktop computer and larger than a
notebook computer.

• Palmtop Computer/Digital Diary/Notebook/PDAs: a hand-sized computer.
Palmtops have no keyboard; the screen serves as both an input and an output
device.

Workstations

A terminal or desktop computer in a network. In this context, workstation is just a
generic term for a user's machine (client machine) in contrast to a "server" or
"mainframe."

Computer Components

All the different pieces of electrical hardware that join together to make up the
complete computer system.

What is a computer system?

Components form the complete computer system. A computer system is made up
of 4 main types of components:

 Input Devices (keyboard, mouse etc)

 Output Devices (monitor, speakers etc)

 Secondary Storage Devices (hard disk drive, CD/DVD drive etc)

 Processor and Primary Storage Devices (CPU, RAM)

Features of Internal Hardware Computer Components


Internal computer components are designed to fit INSIDE the computer system and
they all carry out important roles. We will discuss the following:

 Motherboard

 Processor (central processing unit)

 Internal Memory (RAM and ROM)

 Video Card (aka graphics card)

 Sound Card

 Internal Hard Disk Drive

1. Motherboard

The motherboard is central to any computer system.

 All components plug into the motherboard either directly (straight into the
circuit board) or indirectly (via USB ports).

 Once connected to the motherboard, the components can work together to
form the computer system.

 Components communicate and send signals to each other via the bus
network.
2. Processor (CPU / Central Processing Unit)

The Central Processing Unit (CPU) is the brain of the computer.

 The CPU 'controls' what the computer does and is responsible for performing
calculations and data processing. It also handles the movement of data to
and from system memory.

 CPUs come in a variety of speeds, known as 'clock rates'. Clock rates are
measured in 'hertz'. Generally, the faster the clock rate, the faster the
performance of the computer.

 There are two main brands of CPU currently on the market: AMD and Intel.
3. Internal Memory (RAM and ROM)

There are two types of internal memory - RAM and ROM.

 RAM and ROM are used to store computer data that can be directly
accessed by the CPU.

 RAM and ROM are sometimes referred to as 'Primary Storage'.

RAM (Random Access Memory)

 RAM is used to temporarily store information that is currently in use by the
computer. This can include anything from word documents to videos.

 RAM can be read from and written to, so the information stored in RAM
can change all the time (it depends on what tasks you are using the computer
for).

 RAM is a fast memory. Data can be written to and read from RAM very
quickly. RAM is generally measured in GB (Gigabytes).

 RAM is Volatile Memory. This means that information stored in RAM is
deleted as soon as the computer is turned off.

 The more RAM you have installed in your computer, the faster it can
perform. You can open and use more programs at the same time without
slowing the computer down.

ROM (Read Only Memory)


 ROM is used to permanently store instructions that tell the computer how to
boot (start up). It also loads the operating system (e.g. Windows).

 These instructions are known as the BIOS (Basic input/output system) or the
boot program.

 Information stored in ROM is known as READ ONLY. This means that the
contents of ROM cannot be altered or added to by the user.

 ROM is fast memory. Data stored in ROM can be accessed and read very
quickly.

 ROM is Non-Volatile memory. This means that stored information is not lost
when the computer loses power.

Other examples of ROM include:

DVD/CD-ROMs bought in stores containing pre-recorded music and movie files.
These are played back at home but cannot be altered.

ROM in printers which is used to store different font types.


4. Video Card (Graphics card)

Graphics cards are hardware devices that plug into the motherboard and enable
the computer to display images on the monitor.

 Graphics cards usually require the installation of software alongside the
hardware. The software instructs the computer how to use the graphics card
and also allows you to alter settings to change image quality and size.

5. Sound Card
Sound cards are internal hardware devices that plug into the motherboard.

 A sound card's main function is to allow the computer system to produce
sound, but sound cards also allow users to connect microphones in order to
input sounds into the computer.

 Sound cards are also useful in the conversion of analogue data into digital
and vice versa.

6. Storage Devices (secondary backing storage)

Secondary storage devices are used to store data that is not instantly needed by
the computer.

 Secondary storage devices permanently store data and programs for as long
as we need. These devices are also used to back-up data in case original
copies are lost or damaged.

 Temporary storage like RAM does not hold data for long periods.

 It is used to store only those programs and files that we are currently working
on.

There are two categories of storage devices:

1. Internal Storage - Internal Hard Disk Drives

2. External Storage - External Hard Disk Drive, Memory Stick etc

External Hardware Computer Components

External computer components connect to a computer system from OUTSIDE. They
are not necessary for the system to function but make our experiences easier or
better. We will discuss the following:

1. Input Devices (used to get data into a computer)

2. Output Devices (used to get information out of a computer)

3. Peripherals
1. Input Devices

Input devices are pieces of hardware that get raw data into the computer ready for
processing.

Processing involves taking raw data and turning it into more useful information.

Input devices fall into two categories:

Manual Input Devices - Need to be operated by a human to input information

Automatic Input Devices - Can input information on their own.


2. Output Devices

When raw data has been processed, it becomes usable information.
Output devices are pieces of hardware that send this usable information out of the
computer.

Some output devices send information out temporarily and some send information
out permanently:

 Temporary Output Devices - e.g. monitors, which constantly refresh the
outputted image on the screen

 Permanent Output Devices - e.g. printers, which output information onto
paper as a hard copy.
3. Peripheral Devices

Almost all input and output devices are known as 'Peripheral devices'.

These are 'non-essential' hardware components that usually connect to the system
externally.

Peripherals are called non-essential because the system can operate without them.

Storage device

Alternatively referred to as digital storage, storage, storage media, or storage medium, a storage device
is any hardware capable of holding information either temporarily or permanently.

There are two types of storage devices used with computers: a primary storage device, such as RAM,
and a secondary storage device, such as a hard drive. Secondary storage can be removable, internal, or
external.

Examples of computer storage

Magnetic storage devices

Today, magnetic storage is one of the most common types of storage used with computers. This
technology is found mostly on extremely large HDDs or hybrid hard drives.
 Floppy diskette

 Hard drive

 Magnetic strip

 SuperDisk

 Tape cassette

 Zip diskette

Optical storage devices

Another common storage is optical storage, which uses lasers and lights as its method of reading and
writing data.

 Blu-ray disc

 CD-ROM disc

 CD-R and CD-RW disc.

 DVD-R, DVD+R, DVD-RW, and DVD+RW disc.


Flash memory devices

Flash memory has replaced most magnetic and optical media, as it has become cheaper and is the
more efficient and reliable solution.

 USB flash drive, jump drive, or thumb drive.

 CF (CompactFlash)

 M.2

 Memory card

 MMC

 NVMe

 SDHC Card

 SmartMedia Card

 Sony Memory Stick

 SD card

 SSD

 xD-Picture Card
Online and cloud

Storing data online and in cloud storage is becoming popular as people need to access their data from
more than one device.

 Cloud storage

 Network media

Paper storage

Early computers had no method of using any of the above technologies for storing information and had
to rely on paper. Today, these forms of storage are rarely used or found.

 OMR

 Punch card
Why is storage needed in a computer?

Without a storage device, a computer cannot save or remember any settings or information and would
be considered a dumb terminal.

Although a computer can run with no storage device, it would only be able to view information unless it
was connected to another computer that had storage capabilities. Even a task such as browsing the
Internet requires information to be stored on your computer.

Why so many different storage devices?

As computers advance, so do the technologies used to store data, along with higher requirements for
storage space. Because people need more and more space, and want it faster, cheaper, and portable,
new technologies have to be invented. As people upgrade to newly designed storage devices, the older
devices are no longer needed and stop being used.

For example, when punch cards were first used in early computers, the magnetic media used for floppy
disks was not available. After floppy diskettes were released, they were replaced by CD-ROM drives,
which were replaced by DVD drives, which have in turn been replaced by flash drives. The first hard disk
drive from IBM cost $50,000, stored only 5 MB, and was big and cumbersome. Today, we have
smartphones with hundreds of times the capacity at a much smaller price that we can carry in our pocket.

Each advancement of storage devices gives a computer the ability to store more data, as well as save
and access data faster.

What is a storage location?


When saving anything on a computer, it may ask you for a storage location, which is the area where you
would like to save the information. By default, most information is saved to your computer hard drive. If
you want to move the information to another computer, save it to a removable storage device such as a
flash drive.

Which storage devices are used today?

Most of the storage devices mentioned above are no longer used with today's computers. Most
computers today primarily use an SSD to store information and have the options for USB flash drives
and access to cloud storage. Some desktop computers with disc drives use a disc drive that is capable of
reading and writing CDs and DVDs.

What storage device has the largest capacity?

For most computers, the largest storage device is the hard drive or SSD. However, networked computers
may also have access to larger storage with large tape drives, cloud computing, or NAS devices. Below is
a list of storage devices from the smallest capacity to the largest capacity.

1. Punch card

2. Floppy diskette

3. Zip disk

4. CD

5. DVD

6. Blu-ray disc

7. Flash jump drive

8. Hard drive / SSD

9. Tape drive

10. NAS / Cloud Storage

Are storage devices input and output devices?

No. Although these devices do send and receive information, they are not considered an input device or
output device. It is more proper to refer to any device capable of storing and reading information as a
storage device, disk, disc, drive, or media.

How do you access storage devices?

Accessing a storage device on your computer depends on the operating system your computer is using
and how it's being used. For example, with Microsoft Windows, you can use a file manager to access the
files on any storage device. Microsoft Windows uses Explorer as its default file manager. With Apple
computers, Finder is considered the default file manager.

What is the latest storage device?

One of the most recent storage device technologies to be introduced is NVMe, with SSDs and cloud
storage also being recently developed storage technologies. Also, older technologies like hard disk drives
and tape drives are always developing new techniques that allow the devices to store more data.

Number System

Number systems are the techniques used to represent numbers in computer system architecture; every
value that you save to or read from computer memory has a defined number system.

Types of Number System

1. Binary Number System

2. Octal Number System

3. Decimal Number System


4. Hexadecimal Number System

1. Binary Number System - The binary number system has only two digits, 0 and 1. Every
number (value) is represented with 0 and 1 in this number system. The base of the binary
number system is 2, because it has only two digits.

2. Octal Number System - The octal number system has only eight (8) digits, from 0 to 7. Every
number (value) is represented with 0, 1, 2, 3, 4, 5, 6 and 7 in this number system. The base of
the octal number system is 8, because it has only 8 digits.

3. Decimal Number System - The decimal number system has only ten (10) digits, from 0 to 9.
Every number (value) is represented with 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9 in this number system.
The base of the decimal number system is 10, because it has only 10 digits.

4. Hexadecimal Number System - The hexadecimal number system has sixteen (16) alphanumeric
values, from 0 to 9 and A to F. Every number (value) is represented with 0, 1, 2, 3, 4, 5, 6, 7, 8,
9, A, B, C, D, E and F in this number system. The base of the hexadecimal number system is 16,
because it has 16 alphanumeric values. Here A is 10, B is 11, C is 12, D is 13, E is 14 and F is 15.

You probably already know what a number system is - ever hear of binary numbers or hexadecimal
numbers? Simply put, a number system is a way to represent numbers. We are used to using the
base-10 number system, which is also called decimal. Other common number systems include base-16
(hexadecimal), base-8 (octal), and base-2 (binary).

Looking at Base-10

0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11... You’ve counted in base-10 all of your life. Quick, what is 7+5? If you
answered 12, you are thinking in base-10. Let’s take a closer look at what you’ve been doing all these
years without ever thinking about it.

Let’s take a quick look at counting. First, you go through all the digits: 0, 1, 2... Once you hit 9, you have
no more digits to represent the next number. So, you change it back to 0, and add 1 to the tens digit,
giving you 10. The process repeats over and over, and eventually you get to 99, where you can’t make
any larger numbers with two digits, so you add another, giving you 100.

Although that’s all very basic, you shouldn’t overlook what is going on. The right-most digit represents
the number of ones, the next digit represents the number of tens, the next the number of hundreds,
etc.

Visualizing Base-10

Confused by these descriptions? No problem, let’s visualize it instead. Imagine a large number, like
2347. We can represent that with two groups of one thousand, three groups of one hundred, four
groups of ten, and seven individual blocks.

Base-10 Mathematically

You may have noticed a pattern by now. Let’s look at what is going on mathematically, using 2347 as an
example.
As you saw, there are 2 groups of a thousand. Not coincidentally, 1000 = 10*10*10 which can also be
written as 10^3.

There are 3 groups of a hundred. Again, not coincidentally, 100 = 10*10 or 10^2.

There are 4 groups of ten, and, 10 = 10^1.

Finally, there are 7 groups of one, and 1 = 10^0. (That may seem strange, but any number to the power
of 0 equals 1, by definition.)

This is essentially the definition of base-10. To get a value of a number in base-10, we simply follow that
pattern. Here are a few more examples:

892 = 8*10^2+9*10^1+2*10^0

1147 = 1*10^3+1*10^2+4*10^1+7*10^0

53 = 5*10^1+3*10^0

Admittedly, this all seems a little silly. We all know what value a base-10 number is because we always
use base-10, and it comes naturally to us. As we’ll see soon, though, if we understand the patterns in the
background of base-10, we can understand other bases better.

Binary Number System

A Binary Number is made up of only 0s and 1s. 110100 is an example of a Binary Number.

There is no 2, 3, 4, 5, 6, 7, 8 or 9 in Binary! A "bit" is a single binary digit. The number above has 6 bits.

How do we Count using Binary?

It is just like counting in decimal except we reach 10 much sooner.

Well how do we count in Decimal?

0 Start at 0

... Count 1,2,3,4,5,6,7,8, and then...


9 This is the last digit in Decimal

10 So we start back at 0 again, but add 1 on the left

The same thing is done in binary ...

What happens in Decimal?

99 When we run out of digits, we ...

100 ... start back at 0 again, but add 1 on the left

And that is what we do in binary: 0, 1, then back to 0 with a 1 on the left (10), then 11, 100, 101, 110, 111, 1000, and so on.


Decimal vs Binary

Here are some equivalent values:

0 = 0, 1 = 1, 2 = 10, 3 = 11, 4 = 100, 5 = 101

Here are some larger values:

21 = 10101

22 = 10110

23 = 10111

24 = 11000

31 = 11111

32 = 100000

In the Decimal System there are Ones, Tens, Hundreds, etc

In Binary there are Ones, Twos, Fours, etc, like this:


Example: 10.1

The "10" means 2 in decimal,

The ".1" means half,

So "10.1" in binary is 2.5 in decimal

The word binary comes from "Bi-" meaning two. We see "bi-" in words such as "bicycle" (two wheels) or
"binocular" (two eyes).

When you say a binary number, pronounce each digit (example, the binary number "101" is spoken as
"one zero one", or sometimes "one-oh-one"). This way people don't get confused with the decimal
number.

A single binary digit (like "0" or "1") is called a "bit".

For example 11010 is five bits long.

The word bit is made up from the words "binary digit"

How to Show that a Number is Binary

To show that a number is a binary number, follow it with a little 2 like this: 101₂

This way people won't think it is the decimal number "101" (one hundred and one).
Example: What is 1111 in Decimal?

The "1" on the left is in the "2×2×2" position, so that means 1×2×2×2 (=8)

The next "1" is in the "2×2" position, so that means 1×2×2 (=4)

The next "1" is in the "2" position, so that means 1×2 (=2)

The last "1" is in the ones position, so that means 1

Answer: 1111 = 8+4+2+1 = 15 in Decimal

Example: What is 1001 in Decimal?

The "1" on the left is in the "2×2×2" position, so that means 1×2×2×2 (=8)

The "0" is in the "2×2" position, so that means 0×2×2 (=0)

The next "0" is in the "2" position, so that means 0×2 (=0)

The last "1" is in the ones position, so that means 1

Answer: 1001 = 8+0+0+1 = 9 in Decimal

Example: What is 1.1 in Decimal?

The "1" on the left side is in the ones position, so that means 1.

The 1 on the right side is in the "halves" position, so that means 1×(1/2)

So, 1.1 is "1 and 1 half" = 1.5 in Decimal

What is 10.11 in Decimal?

The "1" is in the "2" position, so that means 1×2 (=2)

The "0" is in the ones position, so that means 0

The "1" on the right of the point is in the "halves" position, so that means 1×(1/2)

The last "1" on the right side is in the "quarters" position, so that means 1×(1/4)

So, 10.11 is 2+0+1/2+1/4 = 2.75 in Decimal
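The place-value reasoning in these examples can be sketched as a small Python function (an illustrative aside; the name `binary_to_decimal` is my own):

```python
# Convert a binary string (optionally with a fractional part) to decimal,
# using the place values described above: ..., 4, 2, 1, 1/2, 1/4, ...
# Illustrative sketch, not a library routine.
def binary_to_decimal(bits: str) -> float:
    whole, _, frac = bits.partition(".")
    value = 0.0
    for i, b in enumerate(reversed(whole)):
        value += int(b) * 2 ** i          # ones, twos, fours, ...
    for i, b in enumerate(frac, start=1):
        value += int(b) * 2 ** -i         # halves, quarters, ...
    return value

assert binary_to_decimal("1111") == 15
assert binary_to_decimal("1001") == 9
assert binary_to_decimal("1.1") == 1.5
assert binary_to_decimal("10.11") == 2.75
```

Each assertion matches one of the worked examples above.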

"There are 10 kinds of people in the world,

those who understand binary numbers, and those who don't."


Decimal to Binary (Performing Short Division by Two with Remainder)

1. Let's convert the decimal number 156 to binary. Write the decimal number as the dividend inside an
upside-down "long division" symbol. Write the base of the destination system (in our case, "2" for
binary) as the divisor outside the curve of the division symbol.

This method is much easier to understand when visualized on paper, and is much easier for beginners,
as it relies only on division by two.

To avoid confusion before and after conversion, write the number of the base system that you are
working with as a subscript of each number. In this case, the decimal number will have a subscript of 10
and the binary equivalent will have a subscript of 2.

2. Divide. Write the integer answer (quotient) under the long division symbol, and write the remainder
(0 or 1) to the right of the dividend.

Since we are dividing by 2, when the dividend is even the binary remainder will be 0, and when the
dividend is odd the binary remainder will be 1.
3. Continue to divide until you reach 0. Continue downwards, dividing each new quotient by two and
writing the remainders to the right of each dividend. Stop when the quotient is 0.
4. Write out the new, binary number. Starting with the bottom remainder, read the sequence of
remainders upwards to the top. For this example, you should have 10011100. This is the binary
equivalent of the decimal number 156. Or, written with base subscripts: 156₁₀ = 10011100₂

 This method can be modified to convert from decimal to any base. The divisor is 2 because the desired destination is base 2 (binary).
If the desired destination is a different base, replace the 2 in the method with the desired base. For example, if the desired
destination is base 9, replace the 2 with 9. The final result will then be in the desired base.
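The short-division method can be sketched in Python (an illustrative aside; the function name `to_base` is my own, and since it prints digits 0–9 this simple version suits bases up to 10, including the base-9 case mentioned in the note):

```python
# Decimal-to-binary by repeated division: divide by the base, record the
# remainders, then read them from bottom to top.  Illustrative sketch.
def to_base(n: int, base: int = 2) -> str:
    if n == 0:
        return "0"
    remainders = []
    while n > 0:
        n, r = divmod(n, base)    # quotient continues; remainder is a digit
        remainders.append(r)
    # The last remainder computed is the most significant digit.
    return "".join(str(r) for r in reversed(remainders))

assert to_base(156, 2) == "10011100"   # the worked example: 156 -> binary
assert to_base(156, 8) == "234"        # same method with divisor 8
```

Swapping the divisor converts to any other base, just as the note describes.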

Conversion using Descending Powers of Two and Subtraction

1. Start by making a chart. List the powers of two in a "base 2 table" from right to left. Start at 2^0,
evaluating it as "1". Increment the exponent by one for each power. Make the list up until you've
reached a number very near the decimal number you're starting with. For this example, let's
convert the decimal number 156₁₀ to binary.
2. Look for the greatest power of 2. Choose the biggest number that will fit into the number you are
converting. 128 is the greatest power of two that will fit into 156, so write a 1 beneath this box in your
chart for the leftmost binary digit. Then, subtract 128 from your initial number. You now have 28.
3. Move to the next lower power of two. Using your new number (28), move down the chart marking
how many times each power of 2 can fit into your dividend. 64 does not go into 28, so write a 0 beneath
that box for the next binary digit to the right. Continue until you reach a number that can go into 28.
4. Subtract each successive number that can fit, and mark it with a 1. 16 can fit into 28, so you will write
a 1 beneath its box and subtract 16 from 28. You now have 12. 8 does go into 12, so write a 1 beneath
8's box and subtract it from 12. You now have 4.
5. Continue until you reach the end of your chart. Remember to mark a 1 beneath each number that
does go into your new number, and a 0 beneath those that don't.
6. Write out the binary answer. The number will be exactly the same from left to right as the 1's and 0's
beneath your chart. You should have 10011100. This is the binary equivalent of the decimal number
156. Or, written with base subscripts: 156₁₀ = 10011100₂.

Repetition of this method will result in memorization of the powers of two.
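The descending-powers method can also be sketched in Python (an illustrative aside; the function name is my own):

```python
# Decimal-to-binary by descending powers of two: for each power from the
# largest downward to 2^0, write 1 if it fits (and subtract it), else 0.
# Illustrative sketch of the steps above.
def to_binary_by_subtraction(n: int) -> str:
    if n == 0:
        return "0"
    power = 1
    while power * 2 <= n:          # find the largest power of two <= n
        power *= 2
    bits = ""
    while power >= 1:
        if n >= power:
            bits += "1"            # this power fits: mark 1 and subtract
            n -= power
        else:
            bits += "0"            # this power does not fit: mark 0
        power //= 2
    return bits

assert to_binary_by_subtraction(156) == "10011100"
```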

How do you convert the fractional part of a decimal to a binary?

If the decimal number has a fractional part, the fractional part is converted into binary by repeatedly
multiplying it by 2. At each step, only the integer part of the result (0 or 1) is noted, and the
multiplication continues with the remaining fractional part until it becomes 0. E.g. 0.75 is the number
we want to convert, so we start multiplying it by 2. 0.75*2 = 1.50: the integer part is 1, and the
fractional part, 0.50, is not 0, so we repeat. 0.50*2 = 1.00: the integer part is 1, and the fractional
part is now 0, so we stop. Reading the integer parts in order and placing the point in front gives .11.
Therefore, .11 is the binary form of .75.
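A short Python sketch of this multiply-by-2 method (illustrative only; the function name and the `max_bits` cutoff for fractions that never reach 0 are my own additions):

```python
# Fractional decimal-to-binary: repeatedly multiply the fractional part
# by 2 and record the integer part (0 or 1) of each product.
# Illustrative sketch; max_bits stops non-terminating fractions.
def fraction_to_binary(frac: float, max_bits: int = 12) -> str:
    bits = ""
    while frac > 0 and len(bits) < max_bits:
        frac *= 2
        bit = int(frac)        # integer part of the product: 0 or 1
        bits += str(bit)
        frac -= bit            # keep only the fractional part and repeat
    return "." + bits

assert fraction_to_binary(0.75) == ".11"
assert fraction_to_binary(0.5) == ".1"
assert fraction_to_binary(0.625) == ".101"
```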

How do I convert 0.2663 into binary?

Start by converting 0.2663 into the fraction 2,663 / 10,000 . Use the steps above to convert 2,663 into
the binary number 1010 0110 0111 and 10,000 into 10 0111 0001 0000 . So, now we have a binary
fraction: 1010 0110 0111 / 10 0111 0001 0000 . Divide 1010 0110 0111 by 10 0111 0001 0000 . To do
this, follow the WikiHow article: "How to Divide Binary Numbers". When you divide 1010 0110 0111 by
10 0111 0001 0000 you should get 0.0100 0100 0010... (rounded to the nearest 4,096th).
Base-8

On to base-8, also called octal. Base-8 means just what it sounds like: the system is based on the
number eight (as opposed to ten). Remember how in base-10 we had ten digits? Now, in base-8, we are
limited to only eight digits: 0, 1, 2, 3, 4, 5, 6, and 7. There’s no such thing as 8 or 9.

You should notice a similar pattern to before; after we get to 7, we run out of different digits for any
higher number. We need a way to represent eight of something. So we add another digit, change the 7
back to 0, and end up with 10. Our answer of 10 in base-8 now represents what we would normally
think of as 8 in base-10.

Talking about numbers written in multiple bases can be confusing. For example, as we have just seen, 10
in base-8 is not the same as 10 in base-10. So, from this point on, I’ll use a standard notation where a
subscript denotes the base of numbers if needed. For example, our base-8 version of 10 now looks like
10₈.

(I find it a lot easier to understand this if I change the way I read these numbers in my head, too. For
example, for 10₈, I read “octal one-oh” or “one-oh in base-eight”. For 10₁₀ I read “decimal one-oh” or
“one-oh in base-ten”.)

Great, so we know 10₈ represents eight items. (Always feel free to plug a number into the first tool for a
visualization.) What’s the next number after 77₈? If you said 100₈, you’re correct. We know from what
we’ve learned so far that the first 7 in 77₈ represents groups of 8, and the second 7 represents
individual items. If we add these all up, we have 7*8 + 7*1 = 63. So we have a total of 63₁₀. So 77₈ = 63₁₀.
We all know 64₁₀ comes after 63₁₀.

Converting From Base-8 to Base-10

Let’s look at a wordier example now. John offers to give you 47₈ cookies, and Jane offers to give you 43₁₀
cookies. Whose offer do you take? If you want, go ahead and generate the graphic for 47₈ with
the first tool. Let’s figure out its base-10 value so we can make the best decision!

As we saw when counting, the four in 47₈ represents the number of groups of eight. This makes sense -
we are in base-8. So, in total, we have four groups of eight and seven groups of one. If we add these all
up, we get 4*8 + 7*1 = 39₁₀. So, 47₈ cookies is exactly the same as 39₁₀ cookies. Jane’s offer seems like the
best one now!

The pattern we saw before with base-10 holds true here also. We’ll look at 523₈. There are five groups
of 8², two groups of 8¹ and three groups of 8⁰ (remember, 8⁰ = 1). If we add these all up, 5*8² + 2*8¹ +
3*8⁰ = 5*64 + 2*8 + 3 = 339, we get 339₁₀, which is our final answer. The diagram below shows the same
thing visually:
Here are a couple more examples:

111₈ = 1*8² + 1*8¹ + 1*8⁰ = 64 + 8 + 1 = 73₁₀

43₈ = 4*8¹ + 3*8⁰ = 32 + 3 = 35₁₀

6123₈ = 6*8³ + 1*8² + 2*8¹ + 3*8⁰ = 3072 + 64 + 16 + 3 = 3155₁₀

Converting from Base-10 to Base-8

Converting from base-10 to base-8 is a little trickier, but still straightforward. We basically have to
reverse the process from above. Let’s start with an example: 150₁₀.

We first find the largest power of 8 that is smaller than our number. Here, this is 8², or 64 (8³ is 512). We
count how many groups of 64 we can take from 150. This is 2, so the first digit in our base-8 number is 2.
We have now accounted for 128 out of 150, so we have 22 left over.

The largest power of 8 that is smaller than 22 is 8¹ (that is, 8). How many groups of 8 can we take from
22? Two groups again, and thus our second digit is 2.

Finally, we are left with 6, and can obviously take 6 groups of one from this, our final digit. We end up
with 226₈.

In fact, we can make this process a touch clearer with math. Here are the steps:
150/8² = 2 remainder 22

22/8¹ = 2 remainder 6

6/8⁰ = 6

Our final answer is then all of our non-remainder digits, or 226₈. Notice that we still start by dividing by
the highest power of 8 that is less than our number.

Dealing With Any Base

It’s important to be able to apply the concepts we’ve learned about base-8 and base-10 to any base. Just
as base-8 had eight digits and base-10 had ten digits, any base has the same number of digits as its base.
So base-5 has five digits (0-4), base-7 has seven digits (0-6), etc.

Now let’s see how to find the base-10 value of any number in any base. Say we are working in base-b,
where b can be any positive integer. We have a number d₄d₃d₂d₁d₀ where each d is a digit in the
number. (The subscripts here don’t refer to the base of the number but simply differentiate each digit.)
Our base-10 value is simply d₄*b⁴ + d₃*b³ + d₂*b² + d₁*b¹ + d₀*b⁰.

Here’s an example: we have the number 32311 in base-4. Notice how our number only has digits from
zero to three, since base-4 only has four total digits. Our base-10 value is 3*4⁴ + 2*4³ + 3*4² + 1*4¹ +
1*4⁰ = 3*256 + 2*64 + 3*16 + 1*4 + 1*1 = 949. We could, of course, follow this pattern with any
number of digits in our number.
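The digit-by-digit formula above is easy to sketch in code. Here is a hypothetical Python helper (my own illustration, not part of the text) that evaluates the same sum:

```python
# Value of a list of digits (most significant first) in an arbitrary base.
# Accumulating value*base + digit left to right computes exactly the
# d*b^n sum from the text (this is Horner's method).
def to_decimal(digits, base):
    value = 0
    for d in digits:
        value = value * base + d
    return value

print(to_decimal([3, 2, 3, 1, 1], 4))  # 949
```

This reproduces the worked example: 32311 in base-4 is 949 in base-10.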

Base-16

Base-16 is also called hexadecimal. It’s commonly used in computer programming, so it’s very important
to understand. Let’s start with counting in hexadecimal to make sure we can apply what we’ve learned
about other bases so far.

Since we are working with base-16, we have 16 digits. So, we have 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, ... and yikes!
We’ve run out of digits, but we still need six more. Perhaps we could use something like a circled 10?

The truth is, we could, but this would be a pain to type. Instead, we simply use letters of the alphabet,
starting with A and continuing to F. Here’s a table with all the digits of base-16:

Base 16 Digit Value

0 0

1 1

2 2

3 3
4 4

5 5

6 6

7 7

8 8

9 9

A 10

B 11

C 12

D 13

E 14

F 15

Other than these extra digits, hexadecimal is just like any other base. For example, let's convert 3D₁₆ to
base-10. Following our previous rules, we have: 3D₁₆ = 3*16¹ + 13*16⁰ = 48 + 13 = 61. So 3D₁₆ is equal
to 61₁₀. Notice how we use D's value of 13 in our calculation.

We can convert from base-10 to base-16 similar to the way we did with base-8. Let's convert 696₁₀ to
base-16. First, we find the largest power of 16 that is less than 696₁₀. This is 16², or 256. Then:

696/16² = 2 remainder 184

184/16¹ = 11 remainder 8

8/16⁰ = 8 remainder 0

We have to replace 11 with its digit representation B, and we get 2B8₁₆.
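As a quick check, Python's built-in conversion functions (standard library behavior, not part of the method described above) agree with this hand conversion:

```python
# Built-in base conversions used only to verify the hand calculation.
print(hex(696))          # 0x2b8  (lowercase, with the 0x prefix)
print(format(696, "X"))  # 2B8    (uppercase, no prefix)
print(int("2B8", 16))    # 696    (back the other way)
```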

Binary (Base-2)
On to the famous base-2, also called binary. While everyone knows binary is made up of 0s and 1s, it is
important to understand that it is no different mathematically than any other base. There’s an old joke
that goes like this:

“There are only 10 types of people in the world: those who understand binary and those who don’t.”

Can you figure out what it means?

Let’s try a few conversions with base-2. First, we’ll convert 101100₂ to base-10. We have: 101100₂ =
1*2⁵ + 1*2³ + 1*2² = 32 + 8 + 4 = 44₁₀.

Now let’s convert 65 to binary. 2⁶ is the highest power of 2 less than 65, so:

65/2⁶ = 1 remainder 1

1/2⁵ = 0 remainder 1

1/2⁴ = 0 remainder 1

1/2³ = 0 remainder 1

1/2² = 0 remainder 1

1/2¹ = 0 remainder 1

1/2⁰ = 1 remainder 0

And thus we get our binary number, 1000001₂.

Understanding binary is super important. I’ve included a table below to point out digits’ values.

For example, the value of 10001 is 17, which is the sum of the values of the two 1 digits (16+1). This is
nothing different from what we have done before; it's just presented in an easy-to-read way.

Tricks and Tips

Normally, when converting between two bases that aren’t base-10, you would do something like this:
Convert number to base-10

Convert result to desired base

However, there’s a trick that will let you convert between binary and hexadecimal quickly. First, take any
binary number and divide its digits into groups of four. So, say we have the number 1011101₂. Divided
up we have 0101 1101. Notice how we can just add extra zeroes to the front of the first group to make
even groups of 4. We now find the value for each group as if it were its own separate number, which
gives us 5 and 13. Finally, we simply use the corresponding hexadecimal digits to write out the base-16
number, 5D₁₆.

We can go the other direction also, by converting each hexadecimal digit into four binary digits. Try
converting B7₁₆ to binary. You should get 10110111₂.

This trick works because 16 is a power of 2. This means we can use a similar trick for base-8,
which is also a power of 2, by grouping the binary digits in threes instead of fours.
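The grouping trick can be sketched in a few lines of Python (hypothetical helpers of my own, for illustration):

```python
# Binary <-> hexadecimal via groups of four bits.
def bin_to_hex(bits):
    # pad on the left so the length is a multiple of 4,
    # then map each group of four bits to one hex digit
    bits = bits.zfill((len(bits) + 3) // 4 * 4)
    return "".join("{:X}".format(int(bits[i:i + 4], 2))
                   for i in range(0, len(bits), 4))

def hex_to_bin(hexdigits):
    # each hex digit expands to exactly four bits
    return "".join("{:04b}".format(int(d, 16)) for d in hexdigits)

print(bin_to_hex("1011101"))  # 5D
print(hex_to_bin("B7"))       # 10110111
```

Changing the group size from four bits to three gives the corresponding binary-to-octal shortcut.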

Number Bases

Base 10

We use "Base 10" every day ... it is our Decimal Number System.

It has 10 digits:

0 1 2 3 4 5 6 7 8 9

We count like this:


But there are other bases!

Binary (Base 2) has only 2 digits: 0 and 1

We count like this:


Ternary (Base 3) has 3 digits: 0, 1 and 2

We count like this:


Quaternary (Base 4) has 4 digits: 0, 1, 2 and 3

We count like this:


Quinary (Base 5) has 5 digits: 0, 1, 2, 3 and 4

We count like this:

Senary (Base 6) has 6 digits: 0, 1, 2, 3, 4 and 5

We count like this:


Septenary (Base 7) has 7 digits: 0, 1, 2, 3, 4, 5 and 6

We count like this:


Octal (Base 8) has 8 digits: 0, 1, 2, 3, 4, 5, 6 and 7

We count like this:


Nonary (Base 9) has 9 digits: 0, 1, 2, 3, 4, 5, 6, 7 and 8

We count like this:


Decimal (Base 10) has 10 digits: 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9

Well ... we talked about this at the start but here it is again:
Undecimal (Base 11)

Undecimal (Base 11) needs one more digit than Decimal, so "A" is used, like this:

Duodecimal (Base 12)

Duodecimal (Base 12) needs two more digits than Decimal, so "A" and "B" are used:
Hexadecimal (Base 16)

Because there are more than 10 digits, hexadecimal is written using letters as well, like this:

Vigesimal (Base 20)

With vigesimal, the convention is that I is not used because it looks like 1, so J=18 and K=19, as in this
table:

the Number Base is also called the Radix

How to Show the Base

To show what base a number has, put the base in the lower right like this:

101₂

This shows that it is in Base 2 (Binary)

314₈

This shows that it is in Base 8 (Octal)

Base Conversion of Whole Numbers

Base conversion of whole numbers is fairly easy when we use remainders.

Let's start with an example:

Convert 1208 to base 26

(base 26 is fun because it is the Alphabet)

For simplicity I will use A=1, B=2, etc, (in the style of spreadsheet columns) and use Z for zero, but
another convention for base 26 is to use A=0, B=1, up to Z=25.

Watch this series of divisions (R means remainder, which is ignored in the next division):
1208 / 26 = 46 R 12

46 / 26 = 1 R 20

Now, think about the last answer (1 R 20), it means that 1208/26/26 = 1 (plus bits), in other words it tells
us that we should put a "1" in the "26²" column!!!

Next we should put a 20 in the "26¹" column, and lastly a 12 in the ones.

Why?

Because our first division work has really said that:

1208 = 46 × 26 + 12

So, 12 belongs in the ones column, and from here on we are dealing with the first power of 26:

46 = 1 × 26 + 20 (so 20 belongs in the ×26 column, and we put 1 in the ×26×26 column)

So the answer is:

And if we substitute letters for numbers we get: ATL

Now, let's see if it has worked:

1 × 26² = 676

+ 20 × 26¹ = 520

+ 12 × 26⁰ = 12

TOTAL: 1208

So, to do whole numbers we do repeated divisions and put the results in from right to left

Note: if we use the A=0 style, then the code ATL is really B__ you figure it out ;)
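The repeated-division recipe for whole numbers can be sketched in Python, using the same A=1, Z=0 convention as the worked example (the helper function is my own, not from the text):

```python
# Whole-number conversion to base 26, with the text's convention:
# A=1, B=2, ..., Y=25, and Z standing for zero.
def to_base26(n):
    letters = "ZABCDEFGHIJKLMNOPQRSTUVWXY"
    out = ""
    while n > 0:
        out = letters[n % 26] + out  # remainders build the digits right-to-left
        n //= 26
    return out

print(to_base26(1208))  # ATL
```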

What happens after the Decimal Point?


Now, if you have followed how to do whole numbers, we can look at "decimals" (hmmm... not an
accurate word because it means Base 10 but you know what I am talking about).

To do "decimals", we use repeated multiplies and build from left to right.

Let us try an example using PI (3.1416...), and convert it to base 26. The whole number part is easy, it
converts into base 26 as 3, so next we move on to the "decimal" part:

.1416 × 26 = 3.6816

.6816 × 26 = 17.7216

.7216 × 26 = 18.7616

etc...

Each time I drop the whole number part and just multiply the fractional part.

Now, the first answer says to put a 3 in the first "decimals" column, the second answer says to put a 17
in the second column etc ..

So the answer is:

And if we substitute letters for numbers we get: C.CQR

As a check I calculated 3 + 3/26 + 17/26² + 18/26³ = 3.141556..., and that looks pretty good!
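The repeated-multiply recipe for the fractional part can also be sketched in Python (a hypothetical helper of my own, using the same A=1, Z=0 letter convention):

```python
# Fractional-part conversion to base 26 by repeated multiplication:
# at each step the integer part of the product is the next digit.
def frac_to_base26(frac, places):
    letters = "ZABCDEFGHIJKLMNOPQRSTUVWXY"  # Z for zero, A=1, ..., Y=25
    out = ""
    for _ in range(places):
        frac *= 26
        digit = int(frac)   # whole-number part -> next digit
        out += letters[digit]
        frac -= digit       # keep only the fractional part
    return out

print(frac_to_base26(0.1416, 3))  # CQR
```

The three digits 3, 17, 18 come out as C, Q, R, matching the pi example above.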

Octal Number System

In the early days, the octal number system was mostly used in minicomputers. The prefix “oct” means eight. The
octal number system is a number system of base 8, which means that we require 8 different
symbols in order to represent any number in the octal system. The symbols are 0, 1, 2, 3, 4, 5, 6, and 7. The
smallest two-digit number in this system is (10)₈, which is equivalent to decimal 8.

For example, in this number system a number is written as (352)₈. The base should be written as 8;
otherwise the number is assumed to be in the decimal number system by default, so care is needed
when writing the number. A small error may change the base of the number. The
main advantage of using the octal number system is that it can be converted directly to binary in a very easy
manner. As we know, the computer understands only the binary number system, and since conversion
from binary to octal or from octal to binary is quite easy, this number system is used.

As its base is 8 = 2³, every symbol of this system can be represented by its three-bit binary equivalent.
As every digit of a number in the octal system is represented separately by its three-bit binary equivalent,
an octal number requires one-third of the length of the equivalent binary number. It is basically a
positionally weighted number system: each digit position has a weight that is a power of 8.

Number Conversion

Octal to Binary Conversion

The conversion is done by converting an individual octal digit to binary. Every digit must be converted to
a 3-bit binary number and the resultant will be the binary equivalent of an octal number.

Example

Converting (145.56)₈ to binary

This table should be used in order to convert any octal number to binary. From the table, writing binary
equivalent of each of the digit we get-
which is the binary equivalent of the octal number.

Binary to Octal Conversion

The same table can be used in order to convert a binary number to octal. First, group the binary number
into the group of three bits and write the octal equivalent of it.

Example

Octal equivalent of (11001111)₂ is

The groups we got here are-

011,001,111. A zero before the number is added in order to complete the grouping in the form of three
binary digits.

Now the octal equivalent of the numbers are-

3, 1, 7. So the octal number we got is (317)₈.

Octal to Decimal Conversion

The method of converting an octal number into its decimal equivalent is very simple. Just expand the
number in the base of eight with its positional weight and the resultant will be a decimal number.

Example

Converting (317)₈ to its decimal equivalent.

This can be done as follows-

Decimal to Octal Conversion

This can be done by repeatedly dividing the number by 8 (a repeated-division method sometimes called the
double-dabble method). The division is repeated and the remainder is taken each time. It can be done as follows-

Example

Find the octal equivalent of 158.


The equivalent number in the octal system is (236)₈.

When there is a fractional part, i.e. digits after the decimal point, it can be converted as follows.

Say we have to convert 0.40 to octal:

0.40 × 8 = 3.2 → digit 3
0.20 × 8 = 1.6 → digit 1
0.60 × 8 = 4.8 → digit 4
0.80 × 8 = 6.4 → digit 6
0.40 × 8 = 3.2 → digit 3 (the pattern repeats)

So we see that the digits repeat. This will go on and it will be a never-ending process, so we can
approximate the result as (.3146…)₈.

Advantages of Octal Number Systems

 It is one-third the length of the equivalent binary number.

 Easy conversion process from binary to octal and vice-versa.

 Easier to handle input and output in the octal form.

Disadvantages of Octal Number Systems

 Computers do not understand the octal number system, so additional circuitry known as an
octal-to-binary converter is required before octal data is applied to a digital system or
a computer.
Hexadecimal Number System

We already know about the decimal number system, binary number system and octal number system.
Like those, there is another number system called the hexadecimal number system. As the name suggests,
there are 16 symbols in this number system, starting from 0. Before explaining the number system we
should know why it came into existence. The natural human tendency is to use the decimal
number system, because we are familiar with it and its operations are user-friendly. Computer
systems, on the other hand, used binary from the start because there are only two states, on and off.

But as dependency on computers grew, and different mathematical programs and
software needed to be developed, there came the need for a number system with a base larger
than ten; 16 was chosen because bits and bytes come in multiples of it. Nowadays this number
system is used in HTML and CSS, where hexadecimal notations appear. This number system was first
used around 1956 in the Bendix G-15 computer.

Now coming to the representation of the hexadecimal number system: there are 16
basic digits by which all numbers can be represented. These are 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F.
The first 10 digits are the same as in the decimal number system, but the last 6 digits represent 10, 11, 12, 13, 14
and 15 respectively. Any number in the hexadecimal number system can be converted into numbers of
other number systems very easily; the procedures are given in the next article.

Here in the decimal system, we use the symbols 1 and 0 side by side, that is 10, to represent ten:
nine plus one. After that we have 11, then 12 and so on. That means after nine (9) we put the first
non-zero digit, 1, at the left and repeat all the symbols from 0 to 9 at its right to represent the next
ten higher numbers, from ten to nineteen (10 – 19). After 19 we put 2 at the left and again repeat
0 to 9 to represent the next ten higher numbers, from twenty to twenty-nine (20 – 29). The decimal
number system is a very basic number system: ten symbols or digits are used in different combinations
to represent all numbers, so this system is said to be of base ten (10). Now think about a number
system where you are told to use sixteen symbols instead of ten. What would the basic
construction of the new number system be? First we have to find 16 symbols to represent the
basic digits of that new number system.

We could create a new series of symbols for that, but if we did so it would be very difficult to remember.
That difficulty can be solved if we use commonly used symbols for the purpose. So we can simply use 0
to 9 of the decimal system to represent the first ten digits, 0 to 9, of this new number system. But for the other 6
higher digits there are no symbols available in the decimal system, so we have to look for them in some
other commonly used system. We can easily get them from our alphabet: we can use A,
B, C, D, E and F as the next 6 higher digits (from 10 to 15) in this new number system. The system where
16 basic digits are used in total is known as the hexadecimal number system.

In the hexadecimal system we use 16 symbols to represent all numbers. These symbols are 0, 1, 2, 3, 4, 5, 6,
7, 8, 9, A, B, C, D, E and F. After F we use 10 for the next higher number, 16. The next increment is 11, which
represents the next natural number, 17, and so on.

Hence in hexadecimal system just after F, the first digit becomes 1 and second digit will repeat from 0 to
F one by one to represent natural numbers 16 to 31.

That means, 10 ⇒ 16, 11 ⇒ 17, 12 ⇒ 18, 13 ⇒ 19, 14 ⇒ 20, 15 ⇒ 21, 16 ⇒ 22, 17 ⇒ 23, 18 ⇒ 24, 19 ⇒
25, 1A ⇒ 26, 1B ⇒ 27, 1C ⇒ 28, 1D ⇒ 29, 1E ⇒ 30, 1F ⇒ 31. After this the first digit will increase to 2
and again second digit will repeat from 0 to F one by one to represent natural numbers 32 to 47 and so
on.

Decimal to Hexadecimal Conversion

As we have already stated in the previous articles on number systems, all the number systems are
interrelated, and so are decimal and hexadecimal numbers. Any number in the decimal number system can
be converted into the hexadecimal number system. The procedure is given below.

If we try to understand the procedure with an example, step by step, it will be easier and better
for us.
Let us first take any decimal number. Suppose we have taken 75₁₀ and we want to convert it into a
hexadecimal number. First we have to divide it by 16.

75/16 = quotient 4, remainder 11

As the quotient is less than 16, we stop here, and the equivalent hexadecimal number is

4B₁₆ = 75₁₀

Now we will discuss the method for a slightly bigger number,

Suppose the number is 1693₁₀

Now we divide it by 16

1693/16 = quotient = 105, remainder = 13(D)

Now we have to divide the quotient again by 16 and see the result

105/16 = quotient = 6 remainder = 9

As the quotient is less than 16 the calculation part is completed and we can now directly write the result

1693₁₀ = 69D₁₆

So the decimal number has been converted into a hexadecimal number.

From the above explanation it can be understood that a hexadecimal number is the sum of the products
of its digits with their respective multipliers. The multipliers are 16⁰, 16¹, 16², …… from the right-hand
side, i.e. from the least significant digit (LSD). Take the example 4D2, which expands as
4×16² + 13×16¹ + 2×16⁰ = 1234.

If we divide decimal 1234 by 16, we get 77 as quotient and 2 as remainder. Then if we divide
decimal 77 by 16, we get 4 as quotient and 13, or D, as remainder. Now if we write these side by side from the
last quotient to the first remainder, we get 4D2, which is the hexadecimal (hex) equivalent of the number
1234.

Hexadecimal to Decimal Conversion

In a similar way any hexadecimal number can be converted into a decimal number. We will look into the
process with an example.

But before beginning, it should be made clear that during conversion of a hexadecimal number all the
letters of the number should be taken at their numerical values in the decimal number system, i.e. if a digit
in the hexadecimal number is A then we have to take it as 10. An example will make the whole
procedure clear.

Let us take any hexadecimal number, say 45B1₁₆. To convert it into a decimal number, starting from
the rightmost digit we multiply the digits by ascending powers of 16, starting from 16⁰.

So the taken number works out as 45B1₁₆ = 4×16³ + 5×16² + 11×16¹ + 1×16⁰ = 16384 + 1280 + 176 + 1 = 17841₁₀.

In this way any hexadecimal number can be converted into a decimal number.

The value of a hexadecimal number is determined by multiplying every digit of the hex number by its
respective multiplier. We start from the LSD, or rightmost digit, and multiply it by 16⁰; then we come to the
next digit to the left and multiply it by 16¹; and after that we come to the next digit further left and
multiply it by 16². We continue this up to the MSD, or leftmost digit. Then we add all these products, and finally we
get the decimal equivalent of the hexadecimal number. This is one of the easiest processes of hexadecimal-to-
decimal conversion.

Think about the hex number 4D2. Here the least significant digit of the number is 2, so we multiply
that by 16⁰, or 1. Then we come to the next left digit, D or 13, and multiply it by 16¹, or 16.
Lastly we multiply the leftmost digit, or MSD, i.e. 4, by 16². Now if we add these three terms, we finally
get the decimal equivalent of the said hexadecimal number. This is hexadecimal-to-
decimal conversion.

Hence, 4D2₁₆ = 4×16² + 13×16¹ + 2×16⁰ = 1024 + 208 + 2 = 1234₁₀.

Number System Conversion

Decimal to Other Base System

Steps

 Step 1 − Divide the decimal number to be converted by the value of the new base.

 Step 2 − Get the remainder from Step 1 as the rightmost digit (least significant digit) of new
base number.

 Step 3 − Divide the quotient of the previous divide by the new base.

 Step 4 − Record the remainder from Step 3 as the next digit (to the left) of the new base
number.
 Repeat Steps 3 and 4, getting remainders from right to left, until the quotient becomes zero in
Step 3.

 The last remainder thus obtained will be the Most Significant Digit (MSD) of the new base
number.

In essence, to convert to binary we divide the decimal number by 2. To convert to octal we divide by 8,
to convert to hexadecimal we divide by 16, and for any other base we divide the decimal
number by the base of the number system we are converting into:

x / n, where x is the decimal number and n is the base of the number system we are converting into.
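The repeated-division steps can be sketched as a small Python helper (hypothetical code, not from the text; it handles bases up to 16 by reusing the hexadecimal digit set):

```python
# Convert a decimal whole number to any base from 2 to 16 by repeated
# division; the remainders, read last-to-first, are the digits.
def convert(n, base):
    digits = "0123456789ABCDEF"
    out = ""
    while n > 0:
        out = digits[n % base] + out  # prepend, so remainders end up reversed
        n //= base
    return out or "0"

print(convert(29, 2))      # 11101
print(convert(1792, 8))    # 3400
print(convert(48879, 16))  # BEEF
```

These three calls reproduce examples 1, 6 and 7 below.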

Example −

1. Decimal Number: 29₁₀

Calculating Binary Equivalent −

Step Operation Result Remainder

Step 1 29 / 2 14 1

Step 2 14 / 2 7 0

Step 3 7/2 3 1

Step 4 3/2 1 1

Step 5 1/2 0 1

As mentioned in Steps 2 and 4, the remainders have to be arranged in the reverse order so that the first
remainder becomes the Least Significant Digit (LSD) and the last remainder becomes the Most
Significant Digit (MSD).

Decimal Number − 29₁₀ = Binary Number − 11101₂

2. Decimal Number = 14344₁₀

Remainder

14344 / 2 = 7172 0

7172 / 2 = 3586 0
3586 / 2 = 1793 0

1793 / 2 = 896 1

896 / 2 = 448 0

448 / 2 = 224 0

224 / 2 = 112 0

112/2 = 56 0

56/2 = 28 0

28/2 = 14 0

14/2 = 7 0

7/2 = 3 1

3/2 = 1 1

1/2 = 0 1

Decimal Number − 14344₁₀ = Binary Number − 11100000001000₂

3. Decimal Number = 12.125₁₀

First get the integral part of the decimal and convert it to binary

Remainder

12/2 = 6 0

6/2 = 3 0

3/2 = 1 1

If the decimal number has a fractional part, then the fractional parts are converted into binary by
multiplying it by 2. Only the integer part of the result is noted. Repeat the multiplication until the
fractional part becomes 0.

Integer Part Carry Over

0.125 x 2 = 0.250 0 0.250

0.250 x2 = 0.5 0 0.5


0.5 x2 = 1.0 1 stop

0.125₁₀ = 0.001₂

Decimal Number = 12.125₁₀ = Binary 1100.001₂

4. Decimal Number = 10.16₁₀

When converting some decimal fraction parts to binary, the carry-over number sometimes repeats, or 0
is still not reached after 5 or more tries. You can stop at the point where the carry-over number
repeats, or at the 5th iteration of the step.

Remainder

10/2 = 5 0

5/2 = 2 1

2/2 = 1 0

10₁₀ = 1010₂
Integer Part Carry Over

0.16 x2 = 0.32 0 0.32

0.32 x2 = 0.64 0 0.64

0.64 x2 = 1.28 1 0.28

0.28 x2 = 0.56 0 0.56

0.56 x2 = 1.12 1 0.12

0.16₁₀ ≈ 0.00101₂

Decimal Number = 10.16₁₀ = Binary 1010.00101...₂ (... means approximately)
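The multiply-by-2 procedure for fractional parts can be sketched as follows (a hypothetical Python helper of my own, capped at a maximum number of bits as the text suggests):

```python
# Convert a decimal fraction to binary by repeated multiplication by 2:
# the carried integer part at each step is the next bit. Stops early
# when the fraction reaches exactly 0.
def frac_to_binary(frac, max_bits):
    bits = ""
    for _ in range(max_bits):
        frac *= 2
        bit = int(frac)   # the carry-over (0 or 1) is the next bit
        bits += str(bit)
        frac -= bit       # keep only the fractional part
        if frac == 0:
            break
    return "0." + bits

print(frac_to_binary(0.125, 5))  # 0.001
print(frac_to_binary(0.16, 5))   # 0.00101
```

The two calls reproduce examples 3 and 4: 0.125 terminates after three bits, while 0.16 is cut off at the 5th iteration.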

5. Decimal Number = 24₁₀ to Octal Number = ?


Remainder

24 / 8 3 0

24₁₀ = 30₈

6. Decimal Number = 1792₁₀ to Octal Number = ?

Remainder

1792 / 8 224 0

224 / 8 28 0

28 / 8 3 4

3/8 0 3

1792₁₀ = 3400₈

7. Decimal Number = 48879₁₀ to Hex Number = ?

Remainder

48879 / 16 3054 15

3054 / 16 190 14

190 / 16 11 14

11/ 16 0 11

48879₁₀ = BEEF₁₆

Other Base System to Decimal System

 Step 1 − Determine the column (positional) value of each digit (this depends on the position of
the digit and the base of the number system).
 Step 2 − Multiply the obtained column values (in Step 1) by the digits in the corresponding
columns.

 Step 3 − Sum the products calculated in Step 2. The total is the equivalent value in decimal.

Example :

1. Binary = 11101₂ to Decimal

11101₂ = ((1 × 2⁴) + (1 × 2³) + (1 × 2²) + (0 × 2¹) + (1 × 2⁰))₁₀

= (16 + 8 + 4 + 0 + 1)₁₀

= 29₁₀

2. 101.110₂ = ?₁₀

101.110₂ = ((1 x 2²) + (0 x 2¹) + (1 x 2⁰)) . ((1 x 1/2) + (1 x 1/4) + (0 x 1/8))

= (4 + 0 + 1) . (0.5 + 0.25 + 0)

= 5.75₁₀

3. Octal = 30₈ to Decimal

30₈ = ((3 x 8¹) + (0 x 8⁰))

= (24 + 0)

= 24₁₀

4. 505.50₈ to Decimal

505.50₈ = ((5 x 8²) + (0 x 8¹) + (5 x 8⁰)) . ((5 x 1/8) + (0 x 1/64))

= (320 + 0 + 5) . (0.625 + 0)

= 325.625₁₀

5. Hexadecimal = BEEF₁₆ to Decimal

BEEF₁₆ = ((11 x 16³) + (14 x 16²) + (14 x 16¹) + (15 x 16⁰))

= (45,056 + 3,584 + 224 + 15)

= 48,879₁₀

Shortcut method - Binary to Octal

Steps
 Step 1 − Divide the binary digits into groups of three (starting from the right).

 Step 2 − Convert each group of three binary digits to one octal digit.

Example

Binary Number − 10101₂

10101₂ = 010 101

= 2₈ 5₈

= 25₈

Shortcut method - Octal to Binary

Steps

 Step 1 − Convert each octal digit to a 3 digit binary number (the octal digits may be treated as
decimal for this conversion).

 Step 2 − Combine all the resulting binary groups (of 3 digits each) into a single binary number.

Example

Octal Number − 25₈

25₈ = 2₁₀ 5₁₀

= 010₂ 101₂

= 010101₂

Shortcut method - Binary to Hexadecimal

Steps

 Step 1 − Divide the binary digits into groups of four (starting from the right).

 Step 2 − Convert each group of four binary digits to one hexadecimal symbol.

Example

Binary Number − 10101₂

10101₂ = 0001 0101

= 1₁₀ 5₁₀

= 15₁₆
Shortcut method - Hexadecimal to Binary

Steps

 Step 1 − Convert each hexadecimal digit to a 4 digit binary number (the hexadecimal digits
may be treated as decimal for this conversion).

 Step 2 − Combine all the resulting binary groups (of 4 digits each) into a single binary
number.

Example

Hexadecimal Number − 15₁₆

15₁₆ = 1₁₀ 5₁₀

= 0001₂ 0101₂

= 00010101₂

Boolean Logic

We're not talking about philosophical logic: modus ponens and the like. We're talking about boolean
logic aka digital logic.

Boolean logic gets its name from George Boole, who formulated the subject in his 1847 book The
Mathematical Analysis of Logic. Boole defined an algebra (not shockingly, called Boolean algebra) for
manipulating combinations of True and False values. True and False (we'll use T and F as shorthand)...
sounds similar to 1 and 0, or on and off. It should be no surprise that boolean algebra is a foundation of
digital circuit design.

Basic Operations

AND

Conjunction: the result is T if, and only if, all inputs are T; if any input is F, the result is F.

OR

Disjunction: the result is T if any (including all) inputs are T; the result is F if, and only if, all inputs are F.

NOT

Negation/Inversion: the result is T if the input is F, and F if the input is T.


Truth Tables

A very useful tool when working with boolean logic is the truth table. For simplicity and consistency,
we'll use A, B, C, and so on for inputs and Q for output. Truth tables list all possible combinations of
inputs. At this point we limit our discussions to 2 inputs, though logical operations can have any number.
For each combination the corresponding result is noted. As we explore more complex logic circuits we
will find that there can be multiple outputs as well.

Here are tables for AND, OR, and NOT.

Combinations

These basic operations can be combined in any number of ways to build, literally, everything else in a
computer.

One of the more common combinations is NAND, which is simply NOT AND. Likewise NOR is NOT OR.
Exclusive OR/NOR

One more pair of operations that is particularly useful is exclusive-or and its negation: exclusive-nor.

XOR is like OR, except that Q is F if all the inputs are T; Q is T only if some, but not all, inputs are T.
You could describe this as Q being T when the inputs are not all the same.

Of course, XNOR is the opposite: Q is T if, and only if, all inputs are the same.

The Magical NAND


NAND is particularly interesting. Above, we discussed AND, OR, and NOT as fundamental operations. In
fact NAND is THE fundamental operation, in that everything else can be made using just NAND: we say
that NAND is functionally complete. NOR is also functionally complete but when we consider
implementation circuits, NOR generally requires more transistors than NAND.

Look at the truth table for NAND. What happens if the inputs are the same? NAND acts like NOT. So you
can make NOT from a NAND by connecting all its inputs together.

NAND is simply NOT AND, so what is NOT NAND? It's equivalent to NOT NOT AND. The NOTs cancel out
and we are left with AND. So we can make an AND from 2 NANDs, using one to negate the output of the
other.

What about OR, our other basic operation? The truth table shows what A NAND B is. What happens if
we negate A and B (denoted by placing a bar over A and B)? We have (NOT A) NAND (NOT B). Let's
write out the truth table for that.

That's interesting; it looks just like the truth table for A OR B.

This is a known phenomenon in boolean algebra: De Morgan's law. It was originally formulated by
Augustus De Morgan, a 19th-century mathematician. It defines a relationship between AND and OR.
Specifically, if you negate an AND or OR expression, it is equivalent to using the other operation with the
inputs negated:

NOT (A AND B) = (NOT A) OR (NOT B)

and

NOT (A OR B) = (NOT A) AND (NOT B)

How can we use this? Well, NOT (A AND B) is the same as A NAND B so

A NAND B = (NOT A) OR (NOT B)

if we now replace A with (NOT A) and B with (NOT B) we get


(NOT A) NAND (NOT B) = (NOT (NOT A)) OR (NOT (NOT B))

Knowing from above that pairs of NOTs cancel out, we are left with:

(NOT A) NAND (NOT B) = A OR B

BOOM!

NOT can be made with a NAND, so with 3 NANDs we can make OR. Specifically:

(A NAND A) NAND (B NAND B) = A OR B
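These constructions can be checked exhaustively in a few lines of Python (a sketch of my own; the function names are illustrative, not standard):

```python
# Build NOT, AND, and OR out of NAND alone, then verify all input combinations.
def nand(a, b):
    return not (a and b)        # the one fundamental operation

def not_(a):
    return nand(a, a)           # NOT: tie both NAND inputs together

def and_(a, b):
    return not_(nand(a, b))     # AND: negate the output of a NAND

def or_(a, b):
    return nand(not_(a), not_(b))  # OR: (A NAND A) NAND (B NAND B)

# exhaustive check against Python's own boolean operators
for a in (False, True):
    assert not_(a) == (not a)
    for b in (False, True):
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
print("all NAND constructions verified")
```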

Data Representation

A computer uses a fixed number of bits to represent a piece of data, which could be a number, a
character, or something else. An n-bit storage location can represent up to 2^n distinct entities. For example, a 3-
bit memory location can hold one of these eight binary patterns: 000, 001, 010, 011, 100, 101, 110, or
111. Hence, it can represent at most 8 distinct entities. You could use them to represent the numbers 0 to 7,
the numbers 8881 to 8888, the characters 'A' to 'H', up to 8 kinds of fruits like apple, orange and banana, or up to
8 kinds of animals like lion, tiger, etc.
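This can be made concrete with a small sketch (Python for illustration; the fruit names beyond those in the text are made up):

```python
n = 3
# All 2^3 = 8 three-bit patterns: '000', '001', ..., '111'
patterns = [format(i, "03b") for i in range(2 ** n)]

# One possible interpretation: map each pattern to a fruit
fruits = ["apple", "orange", "banana", "mango", "pear", "grape", "kiwi", "plum"]
encoding = dict(zip(patterns, fruits))

assert len(patterns) == 2 ** n          # 8 distinct patterns
assert encoding["010"] == "banana"      # meaning comes from the chosen mapping
```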

Integers, for example, can be represented in 8-bit, 16-bit, 32-bit or 64-bit. You, as the programmer,
choose an appropriate bit-length for your integers. Your choice will impose a constraint on the range of
integers that can be represented. Besides the bit-length, an integer can be represented in various
representation schemes, e.g., unsigned vs. signed integers. An 8-bit unsigned integer has a range of 0 to
255, while an 8-bit signed integer has a range of -128 to 127 - both representing 256 distinct numbers.
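The ranges follow directly from the bit-length; a small sketch (assuming the usual two's-complement scheme for signed integers):

```python
def unsigned_range(bits):
    # n bits give 2^n patterns, covering 0 .. 2^n - 1
    return 0, 2 ** bits - 1

def signed_range(bits):
    # Two's complement: half the patterns are negative
    return -(2 ** (bits - 1)), 2 ** (bits - 1) - 1

assert unsigned_range(8) == (0, 255)
assert signed_range(8) == (-128, 127)
# Both schemes cover exactly 2^8 = 256 distinct values
assert 255 - 0 + 1 == 127 - (-128) + 1 == 256
```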

It is important to note that a computer memory location merely stores a binary pattern. It is entirely up
to you, as the programmer, to decide on how these patterns are to be interpreted. For example, the 8-
bit binary pattern "0100 0001B" can be interpreted as an unsigned integer 65, or an ASCII character 'A',
or some secret information known only to you. In other words, you have to first decide how to
represent a piece of data in a binary pattern before the binary patterns make sense. The interpretation
of a binary pattern is called data representation or encoding. Furthermore, it is important that the data
representation schemes are agreed upon by all parties, i.e., industrial standards need to be
formulated and strictly followed.

Once you have decided on a data representation scheme, certain constraints, in particular on precision
and range, are imposed. Hence, it is important to understand data representation in order to write correct
and high-performance programs.
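The "0100 0001B" example above can be reproduced directly (Python for illustration):

```python
pattern = "01000001"            # the 8-bit pattern from the text
as_unsigned = int(pattern, 2)   # interpreted as an unsigned integer
as_char = chr(as_unsigned)      # interpreted as an ASCII character

assert as_unsigned == 65        # same bits, read as a number
assert as_char == "A"           # same bits, read as a character
```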

Rosetta Stone and the Decipherment of Egyptian Hieroglyphs


Egyptian hieroglyphs were used by the ancient Egyptians from around 4000BC. Unfortunately, after
500AD no one could read the ancient Egyptian hieroglyphs any longer, until the rediscovery of the
Rosetta Stone in 1799 by Napoleon's troops (during Napoleon's Egyptian invasion) near the town of
Rashid (Rosetta) in the Nile Delta.

The Rosetta Stone is inscribed with a decree issued in 196BC on behalf of King Ptolemy V. The decree appears
in three scripts: the upper text is Ancient Egyptian hieroglyphs, the middle portion is Demotic script, and
the lowest is Ancient Greek. Because it presents essentially the same text in all three scripts, and Ancient
Greek could still be understood, it provided the key to the decipherment of the Egyptian hieroglyphs.

The moral of the story: unless you know the encoding scheme, there is no way you can decode the
data.

What is Computer Architecture?

Computer architecture is the set of rules and methods that defines how computer systems, platforms and programs operate. In other words, computer
architecture outlines the system's design, functionality and compatibility. Creating a computer's
architecture requires IT professionals to first determine the needs of users, technology limitations and
process requirements.

Architecture Overview

Almost all modern computers use the von Neumann architecture model, created by mathematician
John von Neumann in the 1940s. This model includes fundamental elements such as the computer's CPU, registers,
memory, storage, logic unit and input/output (I/O) interface. Most computer architectures can be
divided into three categories. First, the hardware system includes the CPU, direct memory and the data and
graphics processors. Second, the Instruction Set Architecture (ISA) directs the embedded programming
language in the CPU.

The ISA's programming defines the functions and capabilities of the CPU. Common definitions include
word size, processor types, memory modes, data formats and user instructions. Third, the micro-
architecture is the computer's internal organization, which defines data paths, storage, execution and
processing. A computer's architecture may be created and maintained by systems engineers, application
architects and software engineers.

Computer Architects

Computer architects oversee the implementation of architecture strategies and policies within
companies. They create computer models and standard solutions that save costs, increase capabilities
and align with business needs. Their architectural solutions must deliver stability, availability and
sustainability. Computer architects may deal with server storage, data backup, virtual recovery and
internal applications. In order to produce efficient systems, they must stay up to date on the latest
computer, programming and technology trends.

Computer architects formulate strategies that evolve computer architecture, leverage new features,
explore new capabilities and improve user friendliness. They may be expected to manage and maintain
enterprise-wide architecture patterns, offerings and policies. They may partner with peers and vendors
regarding the integration, alignment and convergence of architectural strategies and standards.
Computer architects may create communications and presentations that articulate the logic behind
programming and production changes.

The Challenge of Architecting

Creating a computer’s architecture, framework and infrastructure can be quite challenging. Computer
architects must be able to present and drive the alignment and adoption of system evolution to
programmers, engineers, designers and leaders. This means that they must be able to gain support and
elicit alignment for project funding, strategies and recommendations. Computer architects may perform
root cause analysis to understand and eliminate recurring incidents that impact the architectural
structure and performance.

Senior computer architects may update, maintain and create system architectures that support product
lines and business goals. They may review, modify and approve existing architectural designs through
careful comparative research. Senior computer architects may communicate architecture strategies in
order to convince executive management, technical teams and third-party vendors. Senior computer
architects must have significant experience in the design, development and deployment of enterprise
solutions. They should fully understand computer infrastructure, middleware and integration.

Computer architecture involves the broad infrastructure of modern PCs. All modern computers, mobile
devices and similar technology rely on this architectural knowledge. Anyone who wants to become a
computer architect should consider becoming an electrical, software or computer hardware engineer.

Computer Architecture VS Computer Organization


COMPUTER SYSTEM

A computer system is a collection of entities (hardware, software and liveware) that are designed to receive, process, manage
and present information in a meaningful format.

COMPONENTS OF COMPUTER SYSTEM

1. Computer hardware - The physical/tangible parts of a computer, e.g. input devices,
output devices, the central processing unit and storage devices.

2. Computer software - Also known as programs or applications. They are classified into two
classes, namely system software and application software.

3. Liveware - The computer user, also known as orgware or humanware. The user commands
the computer system to execute instructions.

Generations of Computers

Fourth generation

The period of the fourth generation runs from 1971 to the present. Fourth generation computers were
developed using the microprocessor. The Intel 4004 chip, developed in 1971, was the first microprocessor. A
microprocessor is a silicon chip containing millions of transistors, designed using LSI and VLSI
technology.
The fourth generation computers used LSI (Large Scale Integration) and VLSI (Very Large Scale Integration)
technology, in which thousands of transistors are integrated on a small silicon chip.
In fourth generation computers, magnetic core memory was replaced by semiconductor memory,
resulting in fast random access to memory.

Several operating systems, such as MS-DOS and MS Windows, were developed during this time. Instructions to
the computer were written in high level languages instead of machine language and assembly language.

Advantages

1. More reliable than previous generation computers.

2. Perform calculations in picoseconds.

3. Consumes less power than the previous generation computers.

4. No air conditioning is required.

5. Totally general purpose.

6. Cost is low compared to the previous generation computers.

7. All types of high level languages can be used with fourth generation computers.

8. Maintenance cost is low compared to the previous generation computers.

9. Fourth generation computers are portable.

10. Generates less heat than the previous generation computers.

11. Learning high level language is easier than assembly and machine language.

Disadvantages

1. Latest technology was required for manufacturing of microprocessors.

Fifth generation

Fifth generation computers are still in development. The main aim of fifth generation
computing is to develop computers that respond to their surroundings using different types of sensors
and are capable of learning. Fifth generation computers use super large scale integration (SLSI) chips
that contain millions of components on a single chip.

These computers use parallel processing, where instructions are executed in a parallel manner. Parallel
processing is much faster than serial processing: in serial processing each task is performed one after
another, whereas in parallel processing multiple tasks are performed simultaneously. Fifth generation
computers are based on artificial intelligence, and for that reason are also called artificial
intelligence computers.

Stored-program concept

The stored-program concept is the storage of instructions in computer memory to enable the computer to
perform a variety of tasks in sequence or intermittently. The idea was introduced in the late 1940s by
John von Neumann, who proposed that a program be electronically stored in binary-number format in a
memory device so that instructions could be modified by the computer as determined by intermediate
computational results. Other engineers, notably John W. Mauchly and J. Presper Eckert, contributed to
this idea, which enabled digital computers to become much more flexible and powerful. As it happened,
engineers in England built the first stored-program computer, the Manchester Mark I, shortly before the
Americans built EDVAC, both operational in 1949.

Von Neumann Architecture

A von Neumann architecture machine, designed by physicist and mathematician John von Neumann
(1903–1957), is a theoretical design for a stored-program computer that serves as the basis for almost all
modern computers. A von Neumann machine consists of a central processor with an arithmetic/logic unit
and a control unit, a memory, mass storage, and input and output.

The von Neumann machine was created by its namesake, John von Neumann, a physicist and
mathematician, in 1945, building on the work of Alan Turing. The design was published in a document
called "First Draft of a Report on the EDVAC."

The report described the first stored-program computer. Earlier computers, such as the ENIAC, were hard-
wired to do one task. If the computer had to perform a different task, it had to be rewired, which was a
tedious process. With a stored-program computer, a general purpose computer could be built to run
different programs.

The theoretical design consists of:

1. A central processor consisting of a control unit and an arithmetic/logic unit

2. A memory unit

3. Mass storage

4. Input and output

The von Neumann design thus forms the basis of modern computing. A similar model, the Harvard
architecture, used dedicated address and data buses for reading from and writing to memory. The von
Neumann architecture won out because it was simpler to implement in real hardware.

Moore's Law

Moore's Law is the observation made in 1965 by Gordon Moore, co-founder of Intel, that the number of
transistors per square inch on integrated circuits had doubled every year since the integrated circuit was
invented. Moore predicted that this trend would continue for the foreseeable future. In subsequent years,
the pace slowed down a bit, but data density has doubled approximately every 18 months, and this is the
current definition of Moore's Law, which Moore himself has blessed. Most experts, including Moore
himself, expect Moore's Law to hold true until 2020-2025.
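The 18-month doubling can be sketched numerically (the starting density of 1 is an arbitrary illustrative value, not real transistor data):

```python
def projected_density(start_density, years, doubling_period_months=18):
    # Density doubles once per doubling period
    doublings = years * 12 / doubling_period_months
    return start_density * 2 ** doublings

# After 3 years (two 18-month periods), density has quadrupled
assert projected_density(1, 3) == 4.0
```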

General Purpose Registers (AX,BX,CX,DX,SP,BP,SI,DI) and ALU in Intel 8086

General Purpose Registers

In computer architecture, a processor register is a quickly accessible location available to a computer's
central processing unit (CPU). Registers usually consist of a small amount of fast storage, although some
registers have specific hardware functions, and may be read-only or write-only. Registers are normally
measured by the number of bits they can hold, for example an "8-bit register", a "32-bit register" or a "64-
bit register" (or even more bits).

General Registers, or General Purpose Registers, are registers that can store both data and
addresses. All general registers of the Intel 8086 microprocessor can be used for arithmetic and logic
operations.

AX (Accumulator)

This is the accumulator register. It is used in arithmetic, logic and data transfer instructions. In multiplication
and division, one of the operands must be in AX or AL.

BX (Base Register)

This is the base register. BX is an address register. It usually contains a data pointer used for based,
based indexed or register indirect addressing.

CX (Count register)

This is the count register. It serves as a loop counter, facilitating program loop constructions. The count
register can also be used as a counter in string manipulation and shift/rotate instructions.

DX (Data Register)

This is the data register. It can hold a port number in I/O operations, and is also used in
multiplication and division.

SP (Stack Pointer)

This is the stack pointer register, pointing to the program stack. It is used in conjunction with SS for accessing the
stack segment.

BP (Base Pointer)

This is the base pointer register, pointing to data in the stack segment. Unlike SP, we can also use BP to access data in
other segments.

SI (Source Index)

This is the source index register, which is used to point to memory locations in the data segment addressed by
DS. By incrementing the contents of SI, we can easily access consecutive memory locations.

DI (Destination Index)

This is the destination index register, which performs the same function as SI. There is a class of instructions,
called string operations, that use DI to access memory locations addressed by ES.

ALU (Arithmetic & Logic Unit)

This unit performs various arithmetic and logical operations, as required by the instruction being
executed. It can perform arithmetic operations such as add, subtract, increment, decrement, convert
byte/word and compare, and logical operations such as AND, OR, exclusive OR, shift/rotate and test.

The Arithmetic and Logic Unit is like a calculator for the computer. The ALU performs all arithmetic operations along
with decision-making functions. In modern CPUs and microprocessors, there can be more than one
integrated ALU to speed up arithmetical and logical operations, for example an integer unit and a floating point
unit.

Organization of ALU

Various circuits, connected to the microprocessor's ALU, are required to process data and perform
arithmetic operations. The accumulator and data buffer store data temporarily. These data are processed as
per control instructions to solve problems such as addition, multiplication, etc.

Functions of ALU

The functions of the ALU, or Arithmetic & Logic Unit, can be categorized into the following 3 categories:

1. Arithmetic Operations

Addition, multiplication, etc. are examples of arithmetic operations. Determining whether one number is greater
than, smaller than, or equal to another by using subtraction is also a form of arithmetic operation.

2. Logical Operations

Operations like AND, OR, NOR, NOT, etc., implemented using logical circuitry, are examples of logical operations.

3. Data Manipulations

Operations such as flushing a register are examples of data manipulation. Shifting binary numbers is
also an example of data manipulation.
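The three categories can be illustrated with a toy 8-bit ALU sketch (purely illustrative; this is not the circuitry of any real processor, and the operation names are our own):

```python
MASK = 0xFF  # keep all results within 8 bits

def alu(op, a, b=0):
    ops = {
        "add": (a + b) & MASK,   # arithmetic operations
        "sub": (a - b) & MASK,
        "and": a & b,            # logical operations
        "or":  a | b,
        "not": ~a & MASK,
        "shl": (a << 1) & MASK,  # data manipulation: shift left
        "shr": a >> 1,           # data manipulation: shift right
    }
    return ops[op]

assert alu("add", 200, 100) == 44           # 300 wraps around in 8 bits
assert alu("and", 0b1100, 0b1010) == 0b1000
assert alu("shl", 0b0000_0001) == 0b0000_0010
```

Note how the fixed 8-bit mask makes arithmetic wrap around, just as it does in a fixed-width hardware register.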

What is the control unit? What are the functions of the control unit?

Control Unit

The control unit coordinates the transfer of data between the registers of the CPU or microprocessor and the ALU.
The control unit supplies the instructions for the ALU. In addition, the control unit, as its name implies, controls
every other part of the machine, their coordination, traffic, etc.

Thus the control unit controls the complete workflow of the CPU. The control unit does not itself take inputs, give
outputs, process data or store data; rather, it controls these operations as they are performed by the
respective devices.

The purpose of the control unit is to run the whole computer, and it is itself driven by the instructions stored
in RAM and ROM. The control unit receives instructions stored in RAM and ROM and controls the
operations of the other connected units and devices through those instructions.

Functions of Control Unit

The functions of the control unit can be categorized into the following 5 categories:

1. Fetching instructions one by one from primary memory and gathering the data and operands
required to perform those instructions.

2. Sending instructions to ALU to perform additions, multiplication etc.

3. Receiving and sending results of operations of ALU to primary memory

4. Fetching programs from input and secondary memory and bringing them to primary memory

5. Sending results from ALU stored in primary memory to output

System Bus

The system bus connects the CPU with the main memory and, in some systems, with the level 2 (L2) cache.
Other buses, such as the IO buses, branch off from the system bus to provide a communication channel
between the CPU and the other peripherals.
The system bus combines the functions of the three main buses, which are as follows:

1. The control bus carries the control, timing and coordination signals to manage the various
functions across the system.

2. The address bus is used to specify memory locations for the data being transferred.

3. The data bus, which is a bidirectional path, carries the actual data between the processor, the memory
and the peripherals.

The design of the system bus varies from system to system and can be specific to a particular computer
design or may be based on an industry standard. One advantage of using the industry standard is the ease
of upgrading the computer using standard components such as the memory and IO devices from
independent manufacturers.

System bus characteristics are dependent on the needs of the processor, the speed, and the word length
of the data and instructions. The size of a bus, also known as its width, determines how much data can be
transferred at a time and indicates the number of available wires. A 32-bit bus, for example, refers to 32
parallel wires or connectors that can simultaneously transmit 32 bits.

The design and dimensions of the system bus are based on the specific processor technology of the
motherboard. This, in effect, affects the speed of the motherboard, with faster system buses requiring
that the other components on the system be equally fast for the best performance.

Instructions are processed under the direction of the control unit in a step-by-step manner. When a
computer is given an instruction in machine language, the machine instruction cycle is the period during
which that one instruction is fetched from memory and executed.

There are four fundamental steps in the instruction cycle:

1. Fetch the instruction The next instruction is fetched from the memory address that is currently stored
in the Program Counter (PC), and stored in the Instruction register (IR). At the end of the fetch operation,
the PC points to the next instruction that will be read at the next cycle.

2. Decode the instruction The decoder interprets the instruction. During this cycle the instruction inside
the IR (instruction register) gets decoded.

3. Execute The Control Unit of CPU passes the decoded information as a sequence of control signals to
the relevant function units of the CPU to perform the actions required by the instruction such as reading
values from registers, passing them to the ALU to perform mathematical or logic functions on them, and
writing the result back to a register. If the ALU is involved, it sends a condition signal back to the CU.

4. Store result The result generated by the operation is stored in the main memory, or sent to an output
device. Based on the condition of any feedback from the ALU, Program Counter may be updated to a
different address from which the next instruction will be fetched.
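The four steps above can be sketched as a loop over a made-up two-instruction program (illustrative only; a real CPU decodes binary opcodes, not tuples):

```python
memory = [("LOAD", 7), ("ADD", 3), ("HALT", None)]  # a tiny program
pc, acc = 0, 0           # program counter and accumulator register

while True:
    instr = memory[pc]   # 1. fetch the instruction the PC points to
    pc += 1              #    the PC now points to the next instruction
    op, operand = instr  # 2. decode
    if op == "LOAD":     # 3. execute
        acc = operand
    elif op == "ADD":
        acc += operand   # 4. store the result (here, back in a register)
    elif op == "HALT":
        break

assert acc == 10  # 7 loaded, then 3 added
```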

Data Bus: A data bus is a connection between the different parts of a computer over which information is
sent.
Address Bus: The address bus is a data bus that is used to specify a physical address; the CPU uses it to
specify the memory location it wishes to access.

Memory Organization in Computer Architecture

A memory unit is the collection of storage units or devices together. The memory unit stores the binary
information in the form of bits. Generally, memory/storage is classified into 2 categories:

1. RAM / Volatile Memory: This loses its data, when power is switched off.

2. ROM / Non-Volatile Memory: This is a permanent storage and does not lose any data when power
is switched off.

The total memory capacity of a computer can be visualized as a hierarchy of components. The memory
hierarchy system consists of all the storage devices in a computer system, from slow auxiliary
memory to faster main memory to the smaller, faster cache memory.

Auxiliary memory access time is generally 1000 times that of main memory; hence it is at the bottom
of the hierarchy.

The main memory occupies the central position because it is equipped to communicate directly with the
CPU and with auxiliary memory devices through Input/output processor (I/O).
When a program not residing in main memory is needed by the CPU, it is brought in from auxiliary
memory. Programs not currently needed in main memory are transferred to auxiliary memory to
provide space in main memory for other programs that are currently in use.

Cache memory is used to store the program data currently being executed in the CPU.
The approximate access-time ratio between cache memory and main memory is about 1 to 7–10.

Memory Access Methods

Each memory type is a collection of numerous memory locations. To access data from any memory, it
must first be located, and then the data is read from the memory location. The following are the methods of
accessing information from memory locations:

Random Access: Main memories are random access memories, in which each memory location has a
unique address. Using this unique address any memory location can be reached in the same amount of
time in any order.

Sequential Access: This method allows memory access in a sequence or in order.

Direct Access: In this mode, information is stored in tracks, with each track having a separate read/write
head.

Main Memory

The memory unit that communicates directly with the CPU, auxiliary memory and cache memory is
called main memory. It is the central storage unit of the computer system: a large and fast memory
used to store data during computer operations. Main memory is made up of RAM and ROM, with RAM
integrated circuit chips holding the major share.

RAM: Random Access Memory

1. DRAM: Dynamic RAM, is made of capacitors and transistors, and must be refreshed every 10~100
ms. It is slower and cheaper than SRAM.
2. SRAM: Static RAM, has a six-transistor circuit in each cell and retains data until powered off.

3. NVRAM: Non-Volatile RAM, retains its data, even when turned off. Example: Flash memory.

ROM: Read Only Memory, is non-volatile and serves as permanent storage for information. It also
stores the bootstrap loader program, used to load and start the operating system when the computer is turned on.
PROM (Programmable ROM), EPROM (Erasable PROM) and EEPROM (Electrically Erasable PROM) are some
commonly used ROMs.

Auxiliary Memory

Devices that provide backup storage are called auxiliary memory. For example, magnetic disks and tapes
are commonly used auxiliary devices. Other devices used as auxiliary memory are magnetic drums,
magnetic bubble memory and optical disks.

It is not directly accessible to the CPU, and is accessed using the Input/Output channels.

Cache Memory

Data or contents of main memory that are used again and again by the CPU are stored in the cache
memory so that they can be accessed in a shorter time.

Whenever the CPU needs to access memory, it first checks the cache. If the data is not found in the
cache, the CPU moves on to main memory. It also transfers blocks of recent data into the
cache, deleting old data in the cache to accommodate the new.

Hit Ratio

The performance of cache memory is measured in terms of a quantity called the hit ratio. When the CPU
refers to memory and finds the word in the cache, it is said to produce a hit. If the word is not found in the
cache and must be fetched from main memory, it counts as a miss.

The ratio of the number of hits to the total CPU references to memory is called hit ratio.

Hit Ratio = Hit/(Hit + Miss)
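For example (the counts here are hypothetical):

```python
def hit_ratio(hits, misses):
    # Hit Ratio = Hit / (Hit + Miss)
    return hits / (hits + misses)

# 90 hits out of 100 total memory references
assert hit_ratio(90, 10) == 0.9
```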

Associative Memory

It is also known as content addressable memory (CAM). It is a memory chip in which each bit position can
be compared. The content is compared in each bit cell, which allows very fast table lookup. Since
the entire chip can be compared, contents are stored randomly without regard to an addressing scheme.
These chips have less storage capacity than regular memory chips.

Cache processing

Cache processing in some computers is divided into two sections: main cache and eavesdrop cache. Main
cache searches are initiated by the CPU itself. Eavesdrop searches are performed when a write to memory is done
by another requestor (another CPU or an IOC). Eavesdrop searches have no impact on CPU performance.

CACHE MAPPING TECHNIQUES


Cache mapping is the method by which the contents of main memory are brought into the cache and
referenced by the CPU. The mapping method used directly affects the performance of the entire
computer system.

1. Direct mapping — Main memory locations can only be copied into one location in the cache. This is
accomplished by dividing main memory into pages that correspond in size with the cache.

2. Fully associative mapping — Fully associative cache mapping is the most complex, but it is the most flexible
with regard to where data can reside. A newly read block of main memory can be placed anywhere in a
fully associative cache. If the cache is full, a replacement algorithm is used to determine which block in
the cache gets replaced by the new data.

3. Set associative mapping — Set associative cache mapping combines the best of the direct and associative
cache mapping techniques. As with a direct mapped cache, blocks of main memory data still map into
a specific set, but they can now be placed in any of the N cache block frames within each set.
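The direct-mapping rule, for instance, reduces to a modulo computation (the cache size here is a made-up illustrative value):

```python
NUM_CACHE_LINES = 8  # assumed cache size, in blocks

def cache_line_for(block_number):
    # Direct mapping: each main-memory block can go into exactly one line
    return block_number % NUM_CACHE_LINES

assert cache_line_for(3) == 3
assert cache_line_for(11) == 3  # blocks 3 and 11 compete for the same line
```

This collision between blocks 3 and 11 is exactly why the more flexible associative schemes exist.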

System software

System software is a type of computer program that is designed to run a computer’s hardware and
application programs. If we think of the computer system as a layered model, the system software is the
interface between the hardware and user applications. The operating system (OS) is the best-known
example of system software. The OS manages all the other programs in a computer.

Other examples of system software include:

 The BIOS (basic input/output system) gets the computer system started after you turn it on and
manages the data flow between the operating system and attached devices such as the hard disk,
video adapter, keyboard, mouse and printer.

 The boot program loads the operating system into the computer's main memory or random
access memory (RAM).

 An assembler takes basic computer instructions and converts them into a pattern of bits that the
computer's processor can use to perform its basic operations.

 A device driver controls a particular type of device that is attached to your computer, such as a
keyboard or a mouse. The driver program converts the more general input/output instructions of
the operating system to messages that the device type can understand.

Additionally, system software can also include system utilities, such as the disk defragmenter and System
Restore, and development tools, such as compilers and debuggers.
System software and application programs are the two main types of computer software. Unlike system
software, an application program (often just called an application or app) performs a particular function
for the user. Examples include browsers, email clients, word processors and spreadsheets.

Application software

Application software is a program or group of programs designed for end users. These programs are
divided into two classes: system software and application software. While system software consists of
low-level programs that interact with computers at a basic level, application software resides above
system software and includes applications such as database programs, word processors and spreadsheets.
Application software may be bundled with system software or published alone.

Application software may simply be referred to as an application.

Different types of application software include:

 Application Suite: Has multiple applications bundled together. Related functions, features and
user interfaces interact with each other.

 Enterprise Software: Addresses an organization's needs and data flow in a huge distributed
environment

 Enterprise Infrastructure Software: Provides capabilities required to support enterprise software


systems

 Information Worker Software: Addresses individual needs required to manage and create
information for individual projects within departments

 Content Access Software: Used to access content and addresses a desire for published digital
content and entertainment

 Educational Software: Provides content intended for use by students

 Media Development Software: Addresses individual needs to generate and print electronic media
for others to consume

Operating System (OS)

An operating system (OS), in its most general sense, is software that allows a user to run other applications
on a computing device. While it is possible for a software application to interface directly with hardware,
the vast majority of applications are written for an OS, which allows them to take advantage of common
libraries and not worry about specific hardware details.

The operating system manages a computer's hardware resources, including:

 Input devices such as a keyboard and mouse.


 Output devices such as display monitors, printers and scanners.

 Network devices such as modems, routers and network connections.

 Storage devices such as internal and external drives.

The OS also provides services to facilitate the efficient execution and management of, and memory
allocations for, any additional installed software application programs.

Some operating systems were developed in the 1950s, when computers could only execute one program
at a time. Later in the decade, computers included many software programs, sometimes called libraries,
which were linked together to create the beginning of today's operating systems.

The OS consists of many components and features. Which features are defined as part of the OS varies with
each OS. However, the three most easily defined components are:

1. Kernel: This provides basic-level control over all of the computer hardware devices. Main roles
include reading data from memory and writing data to memory, processing execution orders,
determining how data is received and sent by devices such as the monitor, keyboard and mouse,
and determining how to interpret data received from networks.

2. User Interface: This component allows interaction with the user, which may occur through
graphical icons and a desktop or through a command line.

3. Application Programming Interfaces: This component allows application developers to write
modular code.
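The third component is easiest to see in practice: the same portable API calls work on every OS because the language runtime maps them onto each kernel's own system calls (for example, a current-directory query is one system call on Linux and a different one on Windows). A minimal sketch, whose printed values will of course differ per machine:

```python
import os
import platform

# Portable API calls: the code below runs unchanged on any OS, because
# the Python runtime translates each call into the local kernel's
# corresponding system call.
print(platform.system())   # name of the underlying OS, e.g. "Linux"
print(os.getcwd())         # current working directory, via a system call
print(os.cpu_count())      # a hardware detail exposed through the OS
```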

Examples of OSs include Android, iOS, Mac OS X, Microsoft Windows and Linux.

Computer Networking And Internet Protocols

There are more than 2.5 billion devices currently connected to the internet, with almost eight new internet
users being added every second worldwide. If this does not amaze you, it is estimated that by 2020 more
than 200 billion sensors will have come online, reporting their data seamlessly over the internet. With such a
rich ecosystem, the internet needs to be very robust, well planned and structured. In this series, I will
try my best to cover the different network elements and protocols that make up this huge system.

What is Computer Networking?

By definition, a computer network is a group of computers that are linked together through a
communication channel.
All the computer devices are called hosts or end systems. Hosts sending requests are called clients while
hosts receiving requests are called servers. End systems are connected together by a network of
communication links and packet switches. Communication links are made up of different types of physical
media, including coaxial cable, copper wire, optical fiber, and radio spectrum. Different links can transmit
data at different rates, with the transmission rate of a link measured in bits/second. When one end system
has data to send to another end system, the sending end system segments the data and adds header
bytes to each segment. The resulting packages of information, known as packets, are then sent through
the network to the destination end system, where they are reassembled into the original data. A packet
switch takes a packet arriving on one of its incoming communication links and forwards that packet on
one of its outgoing communication links. Common packet switches are routers and link-layer switches.
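The segmentation and reassembly described above can be sketched as a toy program. The header layout here (a sequence number plus a payload length) is invented purely for illustration; it is not any real protocol's format:

```python
import struct

# Toy packetization: split data into small payloads and prepend a
# 4-byte header (2-byte sequence number, 2-byte payload length).
def segment(data: bytes, payload_size: int = 4) -> list[bytes]:
    chunks = [data[i:i + payload_size] for i in range(0, len(data), payload_size)]
    return [struct.pack("!HH", seq, len(c)) + c for seq, c in enumerate(chunks)]

# The destination end system sorts by sequence number, strips the
# headers, and reassembles the original data.
def reassemble(packets: list[bytes]) -> bytes:
    ordered = sorted(packets, key=lambda p: struct.unpack("!HH", p[:4])[0])
    return b"".join(p[4:] for p in ordered)

packets = segment(b"hello world")
print(len(packets))                           # 3
print(reassemble(packets))                    # b'hello world'
```

Note that reassembly works even if the packets arrive out of order, which is exactly why real transport headers carry sequence numbers.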

Internet Protocols

End systems, packet switches and other pieces of the Internet run protocols that control the sending and
receiving of information within the Internet. The Transmission Control Protocol (TCP) and the Internet
Protocol (IP) are two of the most important protocols in the Internet. The IP protocol specifies the format
of the packets that are sent and received among routers and end systems. The Internet’s principal
protocols are collectively known as TCP/IP.
Given the importance of protocols to the Internet, it’s important that everyone agrees on what each and
every protocol does, so that people can create systems and products that interoperate. Internet standards
are developed by the Internet Engineering Task Force (IETF) into documents called requests for comments
(RFCs). RFCs tend to be quite technical and detailed. These define protocols such as TCP, IP, HTTP, DNS
and SMTP. There are currently more than 6,000 RFCs.
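As a concrete taste of TCP's connection-oriented service, here is a minimal sketch using Python's standard socket module over the loopback interface. Binding to port 0 asks the OS to pick any free port; the echo server here is an invented stand-in for a real remote host:

```python
import socket
import threading

# A tiny echo server on the loopback interface.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # port 0: let the OS choose a free port
server.listen(1)
port = server.getsockname()[1]

def echo_once():
    conn, _ = server.accept()
    conn.sendall(conn.recv(1024))    # echo the client's bytes back
    conn.close()

t = threading.Thread(target=echo_once)
t.start()

# The client side: connect() performs the TCP handshake, after which
# both ends share a reliable byte stream.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"ping")
reply = client.recv(1024)
print(reply)                         # b'ping'
client.close()
t.join()
server.close()
```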

Layered Architecture

Internet Protocols are organized in a layered architecture. To explain this, let’s imagine you are trying to
look for a meme from Google’s image search. The server has the image of your choice stored. Before the
server sends out that information, it needs to convert that image into a package with all the necessary
headers. Once it reaches the client, the reverse takes place. The packets have their headers removed in
the reverse sequence and then converted back to the original data.
Now, let’s take a step further and look at the individual protocol layers. The two standard reference models
are the common TCP/IP model and the more detailed OSI model. For our analogy (and probably the rest of
the series), I will stick to the TCP/IP model.
From the server to the internet, the sequence is top-down. When the server finds the Grumpy Cat image
that the client requested, it will first convert it to a packet and add the Application Layer header. This
layer includes protocols such as HTTP (which provides for Web document request and transfer), SMTP (which
provides for the transfer of e-mail messages), and FTP (which provides for the transfer of files between two
end systems). After that, the Transport Layer protocol adds the necessary changes. This layer transports
application-layer messages between application endpoints. Common Transport Layer protocols are TCP
(which provides connection-oriented services to its applications) and UDP (which provides connection-
less services to its applications). The Network Layer comes right after. This layer is responsible for
moving network-layer packets, or datagrams, from one host to another. Finally, the Network Access Layer
takes care of the transfer across the communication links. Once the client receives the packets, the entire
process takes place in reverse (bottom-up) to transform them back to the original image.
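The whole encapsulation journey can be sketched as a toy program. The layer names follow the TCP/IP model, but the bracketed headers are invented placeholders, not real protocol headers:

```python
# Layers from the application down to the link (network access) layer.
LAYERS = ["HTTP", "TCP", "IP", "ETH"]

# Going down the stack: each layer prepends its header, so the
# application header ends up innermost and the link header outermost.
def encapsulate(payload: bytes) -> bytes:
    for layer in LAYERS:
        payload = f"[{layer}]".encode() + payload
    return payload

# Going up the stack at the receiver: headers are stripped in the
# reverse sequence, outermost first.
def decapsulate(packet: bytes) -> bytes:
    for layer in reversed(LAYERS):
        header = f"[{layer}]".encode()
        assert packet.startswith(header), f"expected {layer} header"
        packet = packet[len(header):]
    return packet

wire = encapsulate(b"grumpy_cat.jpg")
print(wire)                              # b'[ETH][IP][TCP][HTTP]grumpy_cat.jpg'
print(decapsulate(wire))                 # b'grumpy_cat.jpg'
```

The symmetry of the two functions is the point: whatever one side wraps on the way down, the other side must unwrap, in reverse order, on the way up.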
