
CENTRE FOR DEVELOPMENT OF IMAGING TECHNOLOGY


(Under Government of Kerala)

TECHNOLOGY EXTENSION DIVISION


Ground Floor, TC 26/322 (3), Chittezham Lavanya, Behind SMSM Institute
Statue, Thiruvananthapuram - 695 001
Phone Number 0471- 2321360, 2322100
Website: www.tet.cdit.org
E-mail: tet@cdit.org

Computer Organisation and Operating System

Study Material
Version 1

For Private Circulation Only


Study Material
Printed on 2023
by
CENTRE FOR DEVELOPMENT OF IMAGING TECHNOLOGY
(Under Government of Kerala)

C-DIT – Head Office


Chithranjali Hills, Thiruvallam P.O.,
Trivandrum - 695027.
Ph: 0471-2380910, 2380912
E-mail: cdit@cdit.org
Web: www.cdit.org

©
Centre for Development of Imaging Technology (C-DIT)
This edition is authorised for Private Circulation Only
All rights reserved. This publication may not be reproduced in any way.
preface

This course material has been prepared with a broad perspective by a team of industry and academic experts, in tune with the latest industry requirements and trends. This book is designed to be a clear and concise reference for Computer Organisation and Operating System.
The chapters are designed to help you understand and grasp
the concepts related to the subject matter. We hope that you find this
material informative and useful in your studies. Please note that this
material should not be considered a substitute for your coursework
or required reading. We encourage you to actively engage with the
material and seek additional resources as needed. We have included
various exercises and activities to help reinforce the concepts
discussed and encourage active engagement.
We request every user of this course material to send feedback to us. The suggestions and opinions of users will serve as a guideline for future modifications and improvements.
CONTENTS

Sl. No.   Subject   Page No.

1 Module I - Processor Organisation 07

2 Module II - Memory Organisation 35

3 Module III - Input Output Organisation 59

4 Module IV - Operating System 75

5 Module V - Linux Operating System 106



module i
processor organisation

functional units of computer (the Von neumann machine)


A von Neumann architecture machine, designed by the physicist and mathematician John von Neumann (1903–1957), is a theoretical design for a stored-program computer that serves as the basis for almost all modern computers.
Figure 1.1 shows the general structure of such a computer. It consists of:
• A main memory, which stores both data and instructions.
• An arithmetic and logic unit (ALU) capable of operating on binary data.
• A control unit, which interprets the instructions in memory and causes them to be executed.
• Input and output (I/O) equipment operated by the control unit.

Fig 1.1

computer function (fetch and execute)


The basic function performed by a computer is execution of a program, which consists of a
set of instructions stored in memory. The processor does the actual work by executing instructions
specified in the program. In its simplest form, instruction processing consists of two steps: The
processor reads (fetches) instructions from memory one at a time and executes each instruction.
Program execution consists of repeating the process of instruction fetch and instruction execution.
The instruction execution may involve several operations and depends on the nature of the instruction.


Fig 1.2

instruction fetch and execute


At the beginning of each instruction cycle, the processor fetches an instruction from memory.
In a typical processor, a register called the program counter (PC) holds the address of the instruction to
be fetched next. Unless told otherwise, the processor always increments the PC after each instruction
fetch so that it will fetch the next instruction in sequence (i.e., the instruction located at the next
higher memory address). So, for example, consider a computer in which each instruction occupies
one 16-bit word of memory. Assume that the program counter is set to location 300. The processor will
next fetch the instruction at location 300. On succeeding instruction cycles, it will fetch instructions
from locations 301, 302, 303, and so on. This sequence may be altered, as explained presently. The
fetched instruction is loaded into a register in the processor known as the instruction register (IR). The
instruction contains bits that specify the action the processor is to take. The processor interprets the
instruction and performs the required action. In general, these actions fall into four categories.
• processor-memory: Data may be transferred from processor to memory or from memory to
processor.
• processor-i/o: Data may be transferred to or from a peripheral device by transferring be-
tween the processor and an I/O module.
• data processing: The processor may perform some arithmetic or logic operation on data.
• control: An instruction may specify that the sequence of execution be altered. For example,
the processor may fetch an instruction from location 149, which specifies that the next in-
struction be from location 182. The processor will remember this fact by setting the program
counter to 182.Thus, on the next fetch cycle, the instruction will be fetched from location 182
rather than 150.


Fig 1.3
For example, the PDP-11 processor includes an instruction, expressed symbolically as ADD B,A,
that stores the sum of the contents of memory locations B and A into memory location A. A single
instruction cycle with the following steps occurs:
• Fetch the ADD instruction.
• Read the contents of memory location A into the processor.
• Read the contents of memory location B into the processor. (In order that the contents of A are not lost, the processor must have at least two registers for storing memory values, rather than a single accumulator.)
• Add the two values.
• Write the result from the processor to memory location A.
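The sequence above can be made concrete with a small simulation. The following Python sketch (not PDP-11 code) walks through one such instruction cycle; the memory layout, the addresses, and the register names are assumptions chosen purely for illustration.

# Minimal sketch of one fetch-decode-execute cycle for an ADD B,A style instruction.
memory = {
    300: ("ADD", 941, 940),   # instruction word: add contents of 940 (B) into 941 (A)
    940: 3,                   # contents of location B
    941: 2,                   # contents of location A
}

pc = 300                      # program counter holds the address of the next instruction
ir = memory[pc]               # fetch: load the instruction into the instruction register
pc += 1                       # increment PC so it points to the next instruction in sequence

opcode, addr_a, addr_b = ir   # decode the fetched instruction
if opcode == "ADD":
    reg1 = memory[addr_a]     # read A into one register
    reg2 = memory[addr_b]     # read B into a second register, so A is not lost
    memory[addr_a] = reg1 + reg2   # write the sum back to location A

print(memory[941])            # prints 5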

processor organization
To understand the organization of the processor, let us consider the requirements placed on the
processor, the things that it must do:
• fetch instruction: The processor reads an instruction from memory (register, cache, main
memory).
• interpret instruction: The instruction is decoded to determine what action is required.
• fetch data: The execution of an instruction may require reading data from memory or an I/O
module.
• process data: The execution of an instruction may require performing some arithmetic or
logical operation on data.
• Write data: The results of an execution may require writing data to memory or an I/O module.


Fig 1.4
Figure 1.4 shows a simplified view of a processor, indicating its connection to the rest of the system
via the system bus. The major components of the processor are an arithmetic and logic unit (ALU)
and a control unit (CU). The ALU does the actual computation or processing of data. The control
unit controls the movement of data and instructions into and out of the processor and controls the
operation of the ALU. In addition, the figure shows a minimal internal memory, consisting of a set of
storage locations, called registers.

Fig 1.5
Figure 1.5 is a slightly more detailed view of the processor. The data transfer and logic control
paths are indicated, including an element labelled internal processor bus. This element is needed to
transfer data between the various registers and the ALU because the ALU in fact operates only on data
in the internal processor memory. The figure also shows typical basic elements of the ALU.


register organization
A computer system employs a memory hierarchy. At higher levels of the hierarchy, memory
is faster, smaller, and more expensive (per bit). Within the processor, there is a set of registers that
function as a level of memory above main memory and cache in the hierarchy. The registers in the
processor perform two roles:
• user-visible registers: Enable the machine- or assembly language programmer to minimize
main memory references by optimizing use of registers.
• control and status registers: Used by the control unit to control the operation of the processor
and by privileged, operating system programs to control the execution of programs.

User-Visible Registers
A user-visible register is one that may be referenced by means of the machine language that the
processor executes. We can characterize these in the following categories:
• General purpose
• Data
• Address
• Condition codes

general-purpose registers
General-purpose registers can be assigned to a variety of functions by the programmer. Sometimes their use within the instruction set is orthogonal to the operation. That is, any general-purpose register can contain the operand for any opcode. In some cases, general-purpose registers can be used for addressing functions (e.g., register indirect, displacement). In other cases, there is a partial or clean separation between data registers and address registers. Data registers may be used only to hold data and cannot be employed in the calculation of an operand address. Address registers may themselves be somewhat general purpose, or they may be devoted to a particular addressing mode.
• segment pointers: In a machine with segmented addressing, a segment register holds the address of the base of the segment. There may be multiple registers: for example, one for the operating system and one for the current process.
• index registers: These are used for indexed addressing and may be auto indexed.
• stack pointer: If there is user-visible stack addressing, then typically there is a dedicated register
that points to the top of the stack. This allows implicit addressing; that is, push, pop, and other stack
instructions need not contain an explicit stack operand.
A final category of registers, which is at least partially visible to the user, holds condition codes (also
referred to as flags). Condition codes are bits set by the processor hardware as the result of operations.
For example, an arithmetic operation may produce a positive, negative, zero, or overflow result. In
addition to the result itself being stored in a register or memory, a condition code is also set. The code
may subsequently be tested as part of a conditional branch operation.


Control and Status Registers


There are a variety of processor registers that are employed to control the operation of the
processor. Most of these, on most machines, are not visible to the user. Some of them may be visible
to machine instructions executed in a control or operating system mode.

Four registers are essential to instruction execution:


• program counter (pc): Contains the address of an instruction to be fetched
• instruction register (ir): Contains the instruction most recently fetched
• memory address register (mar): Contains the address of a location in memory
• Memory buffer register (MBR): Contains a word of data to be written to memory or the word most recently read.

The processor updates the PC after each instruction fetch so that the PC always points to the next instruction to be executed. A branch or skip instruction will also modify the contents of the PC. The fetched instruction is loaded into an IR, where the opcode and operand specifiers are analysed. Data are exchanged with memory using the MAR and MBR. In a bus-organized system, the MAR connects directly to the address bus, and the MBR connects directly to the data bus. User-visible registers, in turn, exchange data with the MBR. The ALU may have direct access to the MBR and user-visible registers.

Many processor designs include a register or set of registers, often known as the program status word
(PSW), that contain status information. The PSW typically contains condition codes plus other status
information. Common fields or flags include the following:

• sign: Contains the sign bit of the result of the last arithmetic operation.
• Zero: Set when the result is 0.
• carry: Set if an operation resulted in a carry (addition) into or borrow (subtraction) out of a
high-order bit. Used for multiword arithmetic operations.
• equal: Set if a logical compare result is equality.
• Overflow: Used to indicate arithmetic overflow.
• interrupt enable/disable: Used to enable or disable interrupts.
• supervisor: Indicates whether the processor is executing in supervisor or user mode. Certain
privileged instructions can be executed only in supervisor mode, and certain areas of memory
can be accessed only in supervisor mode.
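As a rough illustration of how such flags are derived, the Python sketch below computes sign, zero, carry, and overflow for an 8-bit addition. The 8-bit width and the flag names are assumptions made for the example; real processors differ in detail.

def add8_with_flags(a, b):
    # Add two 8-bit values and derive typical status flags.
    total = a + b
    carry = total > 0xFF                      # carry out of the high-order bit
    result = total & 0xFF                     # keep only the low 8 bits
    zero = (result == 0)
    sign = bool(result & 0x80)                # high-order bit acts as the sign bit
    # Signed overflow: both operands have the same sign, but the result's sign differs.
    overflow = ((a ^ result) & (b ^ result) & 0x80) != 0
    return result, {"sign": sign, "zero": zero, "carry": carry, "overflow": overflow}

print(add8_with_flags(0x7F, 0x01))            # 127 + 1 sets the sign and overflow flags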


instruction cycle
• fetch: Read the next instruction from memory into the processor.
• execute: Interpret the opcode and perform the indicated operation.
• interrupt: If interrupts are enabled and an interrupt has occurred, save the current process
state and service the interrupt.

Fig 1.6

the indirect cycle


Once an instruction is fetched, the next step is to fetch source operands. Continuing our simple
example, let us assume a one-address instruction format, with direct and indirect addressing allowed.
If the instruction specifies an indirect address, then an indirect cycle must precede the execute cycle.

Data Flow
The exact sequence of events during an instruction cycle depends on the design of the processor. Let us assume a processor that employs a memory address register (MAR), a memory buffer register (MBR), a program counter (PC), and an instruction register (IR).

During the fetch cycle, an instruction is read from memory. Figure 1.7 shows the flow of data during
this cycle. The PC contains the address of the next instruction to be fetched. This address is moved
to the MAR and placed on the address bus.


Fig 1.7

Fig 1.8
The control unit requests a memory read, and the result is placed on the data bus and copied into the
MBR and then moved to the IR. Meanwhile, the PC is incremented by 1, preparatory for the next
fetch. Once the fetch cycle is over, the control unit examines the contents of the IR to determine if
it contains an operand specifier using indirect addressing. If so, an indirect cycle is performed. As
shown in Figure 1.9, this is a simple cycle. The rightmost N bits of the MBR, which contain the
address reference, are transferred to the MAR. Then the control unit requests a memory read, to get
the desired address of the operand into the MBR.
The fetch and indirect cycles are simple and predictable. The execute cycle takes many forms; the
form depends on which of the various machine instructions is in the IR. This cycle may involve
transferring data among registers, read or write from memory or I/O, and/or the invocation of the
ALU.
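The data flow just described can also be written as a short sequence of register transfers. The Python sketch below imitates the fetch cycle followed by an indirect cycle; the memory contents, the instruction format, and the use of a plain dictionary are assumptions made only for illustration.

# Register-transfer view of the fetch and indirect cycles.
memory = {
    300: ("LOAD_INDIRECT", 500),   # instruction whose address field (500) points to the operand's address
    500: 800,                      # location 500 holds the operand's address
    800: 42,                       # the operand itself
}

pc = 300
mar = pc                  # MAR <- PC: address placed on the address bus
mbr = memory[mar]         # MBR <- memory[MAR]: memory read via the data bus
ir = mbr                  # IR  <- MBR: instruction now available for decoding
pc = pc + 1               # PC  <- PC + 1, ready for the next fetch

opcode, address_field = ir
# Indirect cycle: the address field is the address of the operand's address.
mar = address_field       # MAR <- rightmost bits of the instruction
mbr = memory[mar]         # MBR <- memory[MAR]: now holds the operand's actual address
mar = mbr
operand = memory[mar]     # final read delivers the operand (42) for the execute cycle
print(operand)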


Fig 1.9
Like the fetch and indirect cycles, the interrupt cycle is simple and predictable (Figure 1.10). The
current contents of the PC must be saved so that the processor can resume normal activity after the
interrupt. Thus, the contents of the PC are transferred to the MBR to be written into memory. The
special memory location reserved for this purpose is loaded into the MAR from the control unit. It
might, for example, be a stack pointer. The PC is loaded with the address of the interrupt routine. As
a result, the next instruction cycle will begin by fetching the appropriate instruction.

Fig 1.10
instruction pipelining
pipelining strategy
Instruction pipelining is similar to the use of an assembly line in a manufacturing plant. An assembly
line takes advantage of the fact that a product goes through various stages of production. By laying the
production process out in an assembly line, products at various stages can be worked on simultaneously.
This process is also referred to as pipelining, because, as in a pipeline, new inputs are accepted at
one end before previously accepted inputs appear as outputs at the other end. To apply this concept to
instruction execution, we must recognize that, in fact, an instruction has a number of stages. The instruction cycle described earlier, for example, can be broken up into a sequence of smaller tasks that occur one after another. Clearly, there should be some opportunity for pipelining.


Fig 1.11
As a simple approach, consider subdividing instruction processing into two stages: fetch instruction
and execute instruction. There are times during the execution of an instruction when main memory is
not being accessed. This time could be used to fetch the next instruction in parallel with the execution
of the current one. Figure 1.11a depicts this approach. The pipeline has two independent stages.
The first stage fetches an instruction and buffers it. When the second stage is free, the first stage
passes it the buffered instruction. While the second stage is executing the instruction, the first stage
takes advantage of any unused memory cycles to fetch and buffer the next instruction. This is called
instruction prefetch or fetch overlap. Note that this approach, which involves instruction buffering,
requires more registers. In general, pipelining requires registers to store data between stages. It should
be clear that this process will speed up instruction execution. If the fetch and execute stages were of
equal duration, the instruction cycle time would be halved. However, if we look more closely at this
pipeline (Figure 1.11b), we will see that this doubling of execution rate is unlikely for two reasons:
1. The execution time will generally be longer than the fetch time. Execution will involve reading and
storing operands and the performance of some operation. Thus, the fetch stage may have to wait for
some time before it can empty its buffer.
2. A conditional branch instruction makes the address of the next instruction to be fetched unknown.
Thus, the fetch stage must wait until it receives the next instruction address from the execute stage.
The execute stage may then have to wait while the next instruction is fetched.
Guessing can reduce the time loss from the second reason. A simple rule is the following: When a
conditional branch instruction is passed on from the fetch to the execute stage, the fetch stage fetches
the next instruction in memory after the branch instruction. Then, if the branch is not taken, no time
is lost. If the branch is taken, the fetched instruction must be discarded and a new instruction fetched.
While these factors reduce the potential effectiveness of the two-stage pipeline, some speedup
occurs. To gain further speedup, the pipeline must have more stages. Let us consider the following
decomposition of the instruction processing.
• fetch instruction (fi): Read the next expected instruction into a buffer.
• decode instruction (di): Determine the opcode and the operand specifiers.


• calculate operands (co): Calculate the effective address of each source operand. This may
involve displacement, register indirect, indirect, or other forms of address calculation.
• fetch operands (fo): Fetch each operand from memory. Operands in registers need not be
fetched.
• execute instruction (ei): Perform the indicated operation and store the result, if any, in the
specified destination operand location.
• Write operand (Wo): Store the result in memory.
With this decomposition, the various stages will be of more nearly equal duration. For the sake of illustration, let us assume equal duration. Using this assumption, Figure 1.12 shows that a six-stage pipeline can reduce the execution time for 9 instructions from 54 time units to 14 time units.

Fig 1.12
The figure below indicates the logic needed for pipelining to account for branches and interrupts.

Fig 1.13


pipeline performance
The cycle time of an instruction pipeline is the time needed to advance a set of instructions one stage through the pipeline; each column in Figure 1.12 represents one cycle time. The cycle time is set by the slowest stage: it is the maximum stage delay plus the delay needed to latch results between one stage and the next.
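Using this definition, the total time needed to execute n instructions on a k-stage pipeline with no stalls works out as follows (this is the standard formulation used in the Stallings text cited in the references at the end of this module):

T(k, n) = [k + (n − 1)] × τ

where τ is the cycle time. For the six-stage example of Figure 1.12, k = 6 and n = 9, so T = [6 + (9 − 1)] × τ = 14 cycle times, compared with 9 × 6 = 54 cycle times without pipelining, which matches the figures quoted above.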

pipeline Hazards

A pipeline hazard occurs when the pipeline, or some portion of the pipeline, must stall because
conditions do not permit continued execution. Such a pipeline stall is also referred to as a pipeline
bubble. There are three types of hazards: resource, data, and control.

resource hazards A resource hazard occurs when two (or more) instructions that are already in the pipeline need the same resource. The result is that the instructions must be executed in serial rather than in parallel for a portion of the pipeline. A resource hazard is sometimes referred to as a structural hazard.

Let us consider a simple example of a resource hazard. Assume a simplified five stage pipeline, in
which each stage takes one clock cycle. Figure 1.14a shows the ideal case, in which a new instruction
enters the pipeline each clock cycle.

Fig 1.14
Now assume that main memory has a single port and that all instruction fetches and data reads and writes must be performed one at a time. Further, ignore the cache. In this case, an operand read from or write to memory cannot be performed in parallel with an instruction fetch. This is illustrated in Figure 1.14b, which assumes that the source operand for instruction I1 is in memory, rather than a register. Therefore, the fetch instruction stage of the pipeline must idle for one cycle before beginning the instruction fetch for instruction I3. The figure assumes that all other operands are in registers.

data hazards A data hazard occurs when there is a conflict in the access of an operand location. In
general terms, we can state the hazard in this form: Two instructions in a program are to be executed
in sequence and both access a particular memory or register operand. If the two instructions are
executed in strict sequence, no problem occurs. However, if the instructions are executed in a pipeline,
then it is possible for the operand value to be updated in such a way as to produce a different result
than would occur with strict sequential execution. In other words, the program produces an incorrect
result because of the use of pipelining.

As an example, consider the following x86 machine instruction sequence:


ADD EAX, EBX /* EAX = EAX + EBX
SUB ECX, EAX /* ECX = ECX – EAX


The first instruction adds the contents of the 32-bit registers EAX and EBX and stores the result in
EAX. The second instruction subtracts the contents of EAX from ECX and stores the result in ECX.
Figure 1.15 shows the pipeline behaviour. The ADD instruction does not update register EAX until
the end of stage 5, which occurs at clock cycle 5. But the SUB instruction needs that value at the
beginning of its stage 2, which occurs at clock cycle 4. To maintain correct operation, the pipeline
must stall for two clock cycles. Thus, in the absence of special hardware and specific avoidance
algorithms, such a data hazard results in inefficient pipeline usage.

There are three types of data hazards:

• Read after write (RAW), or true dependency: An instruction modifies a register or memory location and a succeeding instruction reads the data in that memory or register location. A hazard occurs if the read takes place before the write operation is complete.
• Write after read (WAR), or antidependency: An instruction reads a register or memory location and a succeeding instruction writes to the location. A hazard occurs if the write operation completes before the read operation takes place.
• Write after write (WAW), or output dependency: Two instructions both write to the same location. A hazard occurs if the write operations take place in the reverse order of the intended sequence.
The example of Figure 1.15 is a RAW hazard.

Fig 1.15
control hazards A control hazard, also known as a branch hazard, occurs when the pipeline makes
the wrong decision on a branch prediction and therefore brings instructions into the pipeline that must
subsequently be discarded. We discuss approaches to dealing with control hazards next.

Dealing with Branches


One of the major problems in designing an instruction pipeline is assuring a steady flow of instructions
to the initial stages of the pipeline. The primary impediment, as we have seen, is the conditional


branch instruction. Until the instruction is actually executed, it is impossible to determine whether
the branch will be taken or not. A variety of approaches have been taken for dealing with conditional
branches:

• Multiple streams
• Prefetch branch target
• Loop buffer
• Branch prediction
• Delayed branch

parallel computing
In the simplest sense, parallel computing is the simultaneous use of multiple compute resources to
solve a computational problem:
• A problem is broken into discrete parts that can be solved concurrently
• Each part is further broken down to a series of instructions
• Instructions from each part execute simultaneously on different processors
• An overall control/coordination mechanism is employed
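As a toy illustration of these steps, the Python sketch below breaks a summation into four parts and hands them to a pool of worker processes. The choice of problem, the four-way split, and the use of Python's multiprocessing module are assumptions made only for the example.

from multiprocessing import Pool

def partial_sum(bounds):
    # Solve one discrete part of the problem: sum one sub-range.
    lo, hi = bounds
    return sum(range(lo, hi))

if __name__ == "__main__":
    n = 1_000_000
    # Break the problem into discrete parts that can be solved concurrently.
    parts = [(i * n // 4, (i + 1) * n // 4) for i in range(4)]
    with Pool(processes=4) as pool:           # parts execute simultaneously on different processors
        results = pool.map(partial_sum, parts)
    print(sum(results))                       # overall coordination: combine the partial results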

Fig 1.16

multiple processor organizations


types of parallel processor systems
• Single instruction, single data (SISD) stream: A single processor executes a single instruction
stream to operate on data stored in a single memory. Uniprocessors fall into this category.


• Single instruction, multiple data (SIMD) stream: A single machine instruction controls the
simultaneous execution of a number of processing elements on a lockstep basis. Each processing
element has an associated data memory, so that each instruction is executed on a different set of data
by the different processors. Vector and array processors fall into this category.

Multiple instruction, single data (MISD) stream: A sequence of data is transmitted to a set of
processors, each of which executes a different instruction sequence. This structure is not commercially
implemented.

Multiple instruction, multiple data (MIMD) stream: A set of processors simultaneously execute
different instruction sequences on different data sets. SMPs, clusters, and NUMA systems fit into this
category.

With the MIMD organization, the processors are general purpose; each is able to process all of
the instructions necessary to perform the appropriate data transformation. MIMDs can be further
subdivided by the means in which the processors communicate (Figure 1.17).

Fig 1.17

symmetric multiprocessors
The term SMP refers to a computer hardware architecture and also to the operating system behaviour
that reflects that architecture. An SMP can be defined as a standalone computer system with the
following characteristics:
1. There are two or more similar processors of comparable capability.
2. These processors share the same main memory and I/O facilities and are interconnected by a bus or
other internal connection scheme, such that memory access time is approximately the same for each
processor.


3. All processors share access to I/O devices, either through the same channels or through different
channels that provide paths to the same device.
4. All processors can perform the same functions (hence the term symmetric).
5. The system is controlled by an integrated operating system that provides interaction between
processors and their programs at the job, task, file, and data element levels.

organization
Figure 1.18 depicts in general terms the organization of a multiprocessor system. There are two
or more processors. Each processor is self-contained, including a control unit, ALU, registers, and,
typically, one or more levels of cache. Each processor has access to a shared main memory and the
I/O devices through some form of interconnection mechanism. The processors can communicate with
each other through memory. It may also be possible for processors to exchange signals directly. The
memory is often organized so that multiple simultaneous accesses to separate blocks of memory are
possible. In some configurations, each processor may also have its own private main memory and I/O
channels in addition to the shared resources.

Fig 1.18
The most common organization for personal computers, workstations, and servers is the time-shared
bus. The time-shared bus is the simplest mechanism for constructing a multiprocessor system (Figure
1.19). The structure and interfaces are basically the same as for a single-processor system that uses a
bus interconnection. The bus consists of control, address, and data lines. To facilitate DMA transfers
from I/O processors, the following features are provided:

• addressing: It must be possible to distinguish modules on the bus to determine the source
and destination of data.
• arbitration: Any I/O module can temporarily function as “master.” A mechanism is provided
to arbitrate competing requests for bus control, using some sort of priority scheme.
• time-sharing: When one module is controlling the bus, other modules are locked out and
must, if necessary, suspend operation until bus access is achieved.

These uniprocessor features are directly usable in an SMP organization. In this latter case, there are
now multiple processors as well as multiple I/O processors all attempting to gain access to one or
more memory modules via the bus. The bus organization has several attractive features:

• simplicity: This is the simplest approach to multiprocessor organization. The physical interface and the addressing, arbitration, and time-sharing logic of each processor remain the same as in a single-processor system.
• flexibility: It is generally easy to expand the system by attaching more processors to the bus.
• reliability: The bus is essentially a passive medium, and the failure of any attached device
should not cause failure of the whole system.

Fig 1.19

multicore organization
The main variables in a multicore organization are as follows:

• The number of core processors on the chip


• The number of levels of cache memory
• The amount of cache memory that is shared


Fig 1.20
Figure 1.20 shows four general organizations for multicore systems. Figure 1.20a is an organization
found in some of the earlier multicore computer chips and is still seen in embedded chips. In this
organization, the only on-chip cache is L1 cache, with each core having its own dedicated L1 cache.
Almost invariably, the L1 cache is divided into instruction and data caches. An example of this
organization is the ARM11 MP Core.
The organization of Figure 1.20b is also one in which there is no on-chip cache sharing. In this,
there is enough area available on the chip to allow for L2 cache. An example of this organization is
the AMD Opteron.

Figure 1.20c shows a similar allocation of chip space to memory, but with the use of a shared L2
cache. The Intel Core Duo has this organization. Finally, as the amount of cache memory available
on the chip continues to grow, performance considerations dictate splitting off a separate, shared L3
cache, with dedicated L1 and L2 caches for each core processor. The Intel Core i7 is an example of
this organization.

stack and queue


stack
A stack is a linear data structure that follows the LIFO (Last-In-First-Out) principle. A stack has one end, whereas a queue has two ends (front and rear). A stack maintains only one pointer, the top pointer, which points to the topmost element of the stack. Whenever an element is added to the stack, it is added on the top of the stack, and an element can be deleted only from the top of the stack. In other words, a stack can be defined as a container in which insertion and deletion are done from one end, known as the top of the stack.

Fig 1.21
The following are some common operations implemented on a stack; a short illustrative sketch in code follows the list:

• push(): When we insert an element in a stack then the operation is known as a push. If the
stack is full then the overflow condition occurs.

• pop(): When we delete an element from the stack, the operation is known as a pop. If the
stack is empty means that no element exists in the stack, this state is known as an under-
flow state.

• isempty(): It determines whether the stack is empty or not.

• isfull(): It determines whether the stack is full or not.

• peek(): It returns the topmost element of the stack without removing it.

• count(): It returns the total number of elements available in a stack.

• change(): It changes the element at the given position.

• display(): It prints all the elements available in the stack.
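The sketch below implements the core of these operations with a Python list; the class name, the fixed capacity of 5, and the list-based representation are arbitrary choices made for illustration.

class Stack:
    # A minimal list-based stack (LIFO).
    def __init__(self, capacity=5):
        self.items = []
        self.capacity = capacity

    def isempty(self):
        return len(self.items) == 0

    def isfull(self):
        return len(self.items) == self.capacity

    def push(self, value):
        if self.isfull():
            raise OverflowError("stack overflow")    # overflow condition
        self.items.append(value)                     # insert at the top

    def pop(self):
        if self.isempty():
            raise IndexError("stack underflow")      # underflow condition
        return self.items.pop()                      # delete from the top

    def peek(self):
        return self.items[-1]                        # topmost element, not removed

s = Stack()
s.push(10)
s.push(20)
print(s.pop(), s.peek())                             # prints: 20 10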

Queue

1. A queue can be defined as an ordered list which enables insert operations to be performed at one
end called REAR and delete operations to be performed at another end called FRONT.
2. A queue is referred to as a First In First Out (FIFO) list.
3. For example, people waiting in line for a rail ticket form a queue. A short code sketch of these operations follows below.
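A matching sketch for a queue is given here; the use of Python's collections.deque and the ticket example are assumptions made only for illustration.

from collections import deque

queue = deque()               # empty queue
queue.append("ticket-1")      # insert (enqueue) at the REAR
queue.append("ticket-2")
queue.append("ticket-3")
print(queue.popleft())        # delete (dequeue) from the FRONT: "ticket-1" leaves first (FIFO)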

Fig 1.22

Booting
Booting is the process of loading an operating system (OS) into the computer's main memory or RAM. Restarting a computer is also called rebooting, which can be "hard", e.g., after electrical power to the CPU is switched from off to on, or "soft", where the power is not cut. On some systems, a soft boot may optionally clear RAM to zero. Hard and soft booting can be initiated by hardware, such as a button press, or by a software command. Booting is complete when the operative runtime environment, typically the operating system and some applications, is attained.

motherboard components

Fig 1.23


• Expansion slots (PCI Express, PCI, and AGP)


• 3-pin case fan connectors
• Back pane connectors
• Heat sink
• 4-pin (P4) power connector
• Inductor
• Capacitor
• CPU socket
• Northbridge
• Screw hole
• Memory slot
• Super I/O
• ATA / IDE disk drive primary connection
• 24-pin ATX power supply connector
• Serial ATA connections
• Coin cell battery (CMOS backup battery)
• RAID
• System panel connectors
• FWH
• Southbridge
• Serial port connector
• USB headers
• Jumpers
• Integrated circuit
• 1394 headers
• SPDIF
• CD-IN

memory slots
A memory slot, memory socket, or RAM slot allows RAM (computer memory) to be inserted into
the computer. Most motherboards have two to four memory slots, which determine the type of RAM
used with the computer. The most common RAM types are SDRAM and DDR for desktop computers
and SODIMM for laptop computers, each having various types and speeds. The picture below is an
example of what memory slots may look like inside a desktop computer. In this picture, there are
three open and available slots for three memory sticks.

Fig 1.24

expansion slots
Alternatively known as a bus slot or expansion port, an expansion slot is a connection or port inside a
computer on the motherboard or riser card. It provides an installation point for a hardware expansion
card to be connected. For example, if you wanted to install a new video card in the computer, you'd
purchase a video expansion card and install that card into the compatible expansion slot.








addon cards
Alternatively called an adapter card, Expansion card, expansion board, internal card, interface
adapter, or card, an expansion card is a PCB that fits into an expansion slot on the motherboard. An
expansion card is an internal card that gives a computer additional capability, such as enhanced video
performance via a graphics card.
Types of expansion cards in a computer
• Interface card (ATA, Bluetooth, EIDE, FireWire, IDE, parallel, RAID, SCSI, serial, and USB).
• MIDI
• Modem
• MPEG decoder
• Network card


• Sound card
• Tuner card
• Video capture card
• Video card
chipsets
A chipset is a set of electronic components on an integrated circuit that manages the transfer of data
between the CPU, RAM, storage, and I/O devices. The first chipset, the 82C206, was introduced
in 1986 by Chips and Technologies.
north bridge chipsets
Alternatively called the pac (PCI/AGP Controller) and nb, the northbridge is an integrated
circuit responsible for communications between the CPU interface, AGP, and the memory. Unlike
the southbridge, the northbridge is directly connected to these components. It acts as a "bridge"
for the southbridge chip to communicate with the CPU, RAM, and graphics controller. Today, the
northbridge is a single-chip that is north of the PCI bus, however, early computers may have had up to
three separate chips that made up the northbridge. It is common for the northbridge and southbridge
to have a heat sink. Also, the northbridge is usually slightly larger than the southbridge.
south bridge chipsets
Southbridge is a reference to a chipset on a PC motherboard. It is a group of microchips designed for
a single function and manufactured as a single unit. This chipset controls or manages input and output
(I/O). Examples of I/O interface connections controlled by southbridge are USB, serial, IDE and ISA.
These are the slower capabilities of the motherboard. The southbridge is located south of the PCI bus (hence its name) and is not directly connected to the CPU; it communicates with the CPU through the northbridge.

motherboard form factor


When referring to computer hardware, a form factor is a specification for its layout and physical
dimensions. Form factors help prevent incompatibilities between multiple hardware manufacturers.
As computers advanced, so have motherboards. Below is a listing of the various motherboard form
factors and links to additional information about each of them. Today, the most common motherboard
form factor for desktop computers is the ATX form factor.
• AT
• ATX
• Baby AT
• BTX
• DTX
• ITX


• LPX
• Full AT
• Full ATX
• Micro-ITX
• microATX
• Mobile-ITX
• NLX
• Pico-ITX
The AT form factor was designed by IBM. AT stands for Advanced Technology. It was a much smaller design and looked more like the motherboards we are used to seeing. It was the common form factor for computers in the 1980s. This motherboard is also known as the Full AT. The AT motherboard was about 12 by 13.8 inches, which means it won't fit in a mini-desktop.
Intel released the ATX motherboards in the mid-1990s as an improvement on the AT motherboards that were used previously. Motherboards of this type differ from their AT counterparts in that they allow the interchangeability of the connected components. Additionally, because the dimensions of this motherboard are smaller than those of AT motherboards, there is sufficient room for the drive bays. The connector system of the board was also improved: on the back plate of the board, additional slots were provided for various add-ons.
This motherboard was another flavor of the AT motherboard. It was called “baby” because it was
smaller than the full-sized AT motherboard and measured 8.5-by-13 inches, but the size could vary
slightly between manufacturers. The smaller size of this motherboard made it easier for technicians
to work on it because there was more room inside the case. Other than that, it had similar features to
the standard AT motherboard.
The micro-ATX board is similar to the ATX board. The only real difference is its size: it is 9.6 by 9.6 inches instead of 12 by 9.6 inches. This board was made for small computer cases. Because it is smaller, it has fewer expansion and memory slots than the ATX board. Just because these boards are smaller doesn't mean they are less capable of providing computing power; they are used in even some gaming computers.
Low profile extension motherboards, or LpX motherboards, were created after the AT boards in the
1990s. The main difference between these boards and previous ones is that the input and output ports
are located at the back of the system. AT boards also adopted this concept in their newer versions as
a result of its success. Additional slots were also placed with the use of a riser card. However, these
riser cards also posed the issue of insufficient airflow.
The nLX board is an upgraded version of the LPX motherboard. It was created in the 1990s to
provide support for larger cases, cards, and devices. NLX stands for New Low Profile Extended. It
supported the Pentium II processor, AGP, DIMM memory, and USB.
BTX stands for Balanced Technology extended.


cmos
Alternatively known as an RTC (real-time clock), NVRAM (non-volatile RAM), or cmos
ram, cmos is short for complementary metal-oxide semiconductor. CMOS is an onboard,
battery-powered semiconductor chip inside computers that stores information. This information
ranges from the system time and date to your computer's hardware settings. The picture shows an
example of the most common CMOS coin cell battery (Panasonic CR 2032 3V) used to power the
CMOS memory.

Fig 1.25

smps
Short for switched-mode power supply, an SMPS is a power supply that uses a switching regulator to control and stabilize the output voltage by switching the load current on and off. These power supplies offer greater power-conversion efficiency and reduce overall power loss.

Fig 1.26


Parts found inside a power supply

Below is a list of parts inside a power supply.


A rectifier that converts AC (alternating current) into DC.
A filter that smooths out the DC (direct current) coming from a rectifier.
A transformer that controls the incoming voltage by stepping it up or down.
A voltage regulator that controls the DC output, allowing the correct amount of
power, volts or watts, to be supplied to the computer hardware.

The order in which these internal power supply components operate is as follows:
1. Transformer
2. Rectifier
3. Filter
4. Voltage Regulator


The table below lists the pinout of the 24-pin ATX power supply connector.

Pin No.   Wire Color   Output
1         Orange       +3.3 V
2         Orange       +3.3 V
3         Black        GND
4         Red          +5 V
5         Black        GND
6         Red          +5 V
7         Black        GND
8         Gray         Power OK
9         Purple       +5 V Standby
10        Yellow       +12 V
11        Yellow       +12 V
12        Orange       +3.3 V
13        Orange       +3.3 V
14        Blue         -12 V
15        Black        GND
16        Green        PS_ON (power on)
17        Black        GND
18        Black        GND
19        Black        GND
20        White        -5 V
21        Red          +5 V
22        Red          +5 V
23        Red          +5 V
24        Black        GND

references

1. Computer Organization and Architecture: Designing for Performance, Eighth Edition (William Stallings)
2. https://www.computerhope.com


module ii
memory organisation
Define memory
Computer memory is any physical device capable of storing information temporarily,
like RAM (random access memory), or permanently, like ROM (read-only memory). Memory devices
utilize integrated circuits and are used by operating systems, software, and hardware.
Volatile memory
Volatile memory is a type of storage whose contents are erased when the system's power is turned off
or interrupted. An example of volatile memory is RAM (random access memory).
non-Volatile memory
NV or non-volatile memory is memory or storage that retains its contents regardless of whether the computer has power. It is also called long-term storage, persistent storage, or permanent storage. Examples of non-volatile memory and storage are a computer hard drive, flash memory, and ROM. Data stored on a hard drive remains there regardless of whether the drive has power, making it a good place to store your files.
erasable and non erasable memory
Non-erasable memory cannot be erased after manufacture; ROM is an example, because ROM chips are programmed at the time of manufacture.
The content of erasable memory can be removed or replaced with other content; EEPROM is an example.

sequential access and random access memory


In sequential access memory, the data must be accessed in a predetermined order via read-write circuitry that is shared by different storage locations. If the storage locations can be accessed in any order, and the access time is independent of the location being accessed, the access method is known as random access. Memory that provides such access is known as Random Access Memory (RAM).

sequential access memory


• Uses the sequential access method
• Memory access time depends on the position of the storage location
• Memory access time is longer.
• It is a non-volatile memory.
• It is cheaper than random access memories.
Example: Magnetic tape

random access memory


• It uses the random access method
• Memory access time is independent of the storage location being accessed
• Each storage location in the memory has a unique address and can be accessed independently of the other locations.
• Memory access time is less.
• It is a volatile memory.
• It is more expensive than sequential access memories.
• Example: Semiconductor memories

primary memory and secondary memory


primary memory
Primary memory is the main memory of computer. Primary memory is categorized into two main
types: Random access memory (RAM) and read only memory (ROM). RAM is used for the temporary
storage of input data, output data and intermediate results. The input data entered into the computer
using the input device, is stored in RAM for processing. After processing, the output data is stored in
RAM before being sent to the output device. Any intermediate results generated during the processing
of program are also stored in RAM. Unlike RAM, the data once stored in ROM either cannot be
changed or can only be changed using some special operations. Therefore, ROM is used to store the
data that does not require a change.
secondary memory
The key features of secondary memory storage devices are:
1. Very high storage capacity.
2. Permanent storage (non-volatile), unless erased by user.
3. Relatively slower access.
4. Stores data and instructions that are not currently being used by CPU but may be required later for
processing.
5. Cheapest among all memory

ram and rom


ram (random access memory)
Alternatively called main memory, primary memory, or system memory, ram (random-access
memory) is a hardware device that allows information to be stored and retrieved on a computer.


Fig 2.1

The word "RAM" stands for "random access memory"; it may also be referred to as the computer's short-term memory. It is called "random" because data can be read and written at any time and from any physical location. It is a temporary storage memory. RAM is volatile: it retains data only as long as the computer is powered. It is the fastest type of memory. RAM holds the data currently being processed by the CPU and supplies it to other units, such as the graphics unit.

There are generally two broad subcategories of RAM: 1. Static RAM (SRAM) and 2. Dynamic RAM (DRAM).

rom (read only memory)


ROM is long-term internal memory. ROM is non-volatile memory that retains data without the flow of electricity. ROM is an essential chip with permanently written data or programs. Like RAM, it is accessed by the CPU. ROM comes pre-written by the computer manufacturer to hold the instructions for booting up the computer.

simm and dimm


SIMM is short for single inline memory module.

Fig 2.2


A SIMM is found only in older computers (early 1990s); SIMMs have been replaced by DIMMs. There are two variants of the SIMM, one with 30 pins and the other with 72 pins.

A 30-pin SIMM has a data width of 8 bits and holds 1 MB or 4 MB of RAM. Therefore, it can transfer 8 bits to or from the memory bus at a time. Later 30-pin SIMM hardware added a parity bit for error detection, making the width 9 bits. To ensure proper installation, the SIMM has a notch on the bottom left.
A 72-pin SIMM has a data width of 32 bits, or 36 bits including parity bits. Each byte is allotted a parity bit (for 32 data bits, 4 bits are for parity). The RAM amount can be 4, 8, 16, 32, or 64 MB. It is notched at the side and centre of the module.

dimm vs simm
• DIMM is short for Dual In-line Memory Module; SIMM is short for Single In-line Memory Module.
• DIMMs provide a 64-bit channel; SIMMs provide a 32-bit channel for transmitting data.
• DIMM performance is better than SIMM performance.
• Two notches are present on a DIMM; a single notch is present on a SIMM.
• A DIMM consumes 3.3 volts of power; a SIMM consumes 5 volts.
• Modern Pentium computers use DIMMs; SIMMs were used by 486 and early Pentium computers.
• DIMMs are installed one at a time; SIMMs are installed in pairs.
• A DIMM provides 32 MB to 1 GB of storage; a SIMM provides 4 MB to 64 MB.

memory Hierarchy and its characteristics


The memory is characterized based on two key factors: capacity and access time.

• Capacity is the amount of information (in bits) that a memory can store.
• Access time is the time interval between the read/ write request and the availability of data.
The lesser the access time, the faster is the speed of memory.

Ideally, we want the memory with fastest speed and largest capacity. However, the cost of fast memory
is very high. The computer uses a hierarchy of memory that is organized in a manner to enable the
fastest speed and largest capacity of memory. The hierarchy of the different memory types is shown
in Figure 2.3

A typical hierarchy is illustrated in Figure 2.3. As one goes down the hierarchy, the following occur:
a. Decreasing cost per bit
b. Increasing capacity
c. Increasing access time
d. Decreasing frequency of access of the memory by the processor

Fig 2.3

dram and sram

Fig 2.4

The basic element of a semiconductor memory is the memory cell. Although a variety of electronic
technologies are used, all semiconductor memory cells share certain properties:

• They exhibit two stable (or semistable) states, which can be used to represent binary 1 and 0.
• They are capable of being written into (at least once), to set the state.
• They are capable of being read to sense the state.


The distinguishing characteristic of RAM is that it is volatile. A RAM must be provided with a
constant power supply. If the power is interrupted, then the data are lost. Thus, RAM can be used only
as temporary storage. The two traditional forms of RAM used in computers are DRAM and SRAM.

dynamic ram
RAM technology is divided into two technologies: dynamic and static.
A dynamic RAM (DRAM) is made with cells that store data as charge on capacitors. The presence
or absence of charge in a capacitor is interpreted as a binary 1 or 0. Because capacitors have a
natural tendency to discharge, dynamic RAMs require periodic charge refreshing to maintain data
storage. The term dynamic refers to this tendency of the stored charge to leak away, even with power
continuously applied.

Figure 2.5a is a typical DRAM structure for an individual cell that stores 1 bit. The address line is
activated when the bit value from this cell is to be read or written. The transistor acts as a switch that
is closed (allowing current to flow) if a voltage is applied to the address line and open (no current
flows) if no voltage is present on the address line.

For the write operation, a voltage signal is applied to the bit line; a high voltage represents 1, and
a low voltage represents 0. A signal is then applied to the address line, allowing a charge to be
transferred to the capacitor.

For the read operation, when the address line is selected, the transistor turns on and the charge stored
on the capacitor is fed out onto a bit line and to a sense amplifier. The sense amplifier compares the
capacitor voltage to a reference value and determines if the cell contains a logic 1 or a logic 0. The
readout from the cell discharges the capacitor, which must be restored to complete the operation.

Although the DRAM cell is used to store a single bit (0 or 1), it is essentially an analog device. The
capacitor can store any charge value within a range; a threshold value determines whether the charge
is interpreted as 1 or 0.

Fig 2.5

static ram In contrast, a static RAM (SRAM) is a digital device that uses the same logic elements
used in the processor. In a SRAM, binary values are stored using traditional flip-flop logic-gate
configurations. A static RAM will hold its data as long as power is supplied to it.

Figure 2.5b is a typical SRAM structure for an individual cell. Four transistors (T1, T2, T3, T4) are
cross connected in an arrangement that produces a stable logic state. In logic state 1, point C1 is high
and point C2 is low; in this state, T1 and T4 are off and T2 and T3 are on. In logic state 0, point C1 is
low and point C2 is high; in this state, T1 and T4 are on and T2 and T3 are off. Both states are stable
as long as the direct current (dc) voltage is applied. Unlike the DRAM, no refresh is needed to retain
data.

As in the DRAM, the SRAM address line is used to open or close a switch. The address line controls
two transistors (T5 and T6). When a signal is applied to this line, the two transistors are switched
on, allowing a read or write operation. For a write operation, the desired bit value is applied to line
B, while its complement is applied to the complementary bit line (B-bar). This forces the four transistors (T1, T2, T3, T4) into the
proper state. For a read operation, the bit value is read from line B.

Different types of ROM


As the name suggests, a read-only memory (ROM) contains a permanent pattern of data that cannot
be changed. A ROM is nonvolatile; that is, no power source is required to maintain the bit values in
memory. While it is possible to read a ROM, it is not possible to write new data into it.


prom or programmable rom (programmable read-only memory) is a computer memory chip


that can be programmed once after it is created. Once the PROM is programmed, the information
written is permanent and cannot be erased or deleted.

When the PROM is created, all bits read as "1." During the programming, any bit needing to be
changed to a "0" is etched or burned into the chip using a gang programmer.

The erasable programmable read-only memory (EPROM) is read and written
electrically, as with PROM. However, before a write operation, all the storage cells must be erased to
the same initial state by exposure of the packaged chip to ultraviolet radiation. Erasure is performed
by shining an intense ultraviolet light through a window that is designed into the memory chip. This
erasure process can be performed repeatedly.

Short for electrically erasable programmable read-only memory, EEPROM is a PROM that can be erased and reprogrammed using an electrical charge. EEPROM was a replacement for PROM and EPROM chips and is used for the BIOS in later computers.

flash memory
Another form of semiconductor memory is flash memory (so named because of the speed with
which it can be reprogrammed). First introduced in the mid-1980s, flash memory is intermediate
between EPROM and EEPROM in both cost and functionality. Like EEPROM, flash memory uses
an electrical erasing technology. An entire flash memory can be erased in one or a few seconds, which
is much faster than EPROM. In addition, it is possible to erase just blocks of memory rather than
an entire chip. Flash memory gets its name because the microchip is organized so that a section of
memory cells is erased in a single action or “flash.” However, flash memory does not provide byte-
level erasure. Like EPROM, flash memory uses only one transistor per bit, and so achieves the high
density.
associative memory
Associative memory is memory whose storage locations are identified by their contents, or by a part of their contents, rather than by their names or positions. Traditional memory stores data at a specific address and "recalls" that data later if the address is specified. Instead of an address, associative memory recalls data if a small portion of the data itself is specified.


cache memory
Cache memory is intended to give memory speed approaching that of the fastest memories available,
and at the same time provide a large memory size at the price of less expensive types of semiconductor
memories. The concept is illustrated in Figure 2.6a. There is a relatively large and slow main memory
together with a smaller, faster cache memory. The cache contains a copy of portions of main memory.
When the processor attempts to read a word of memory, a check is made to determine if the word is
in the cache. If so, the word is delivered to the processor. If not, a block of main memory, consisting
of some fixed number of words, is read into the cache and then the word is delivered to the processor.
Because of the phenomenon of locality of reference, when a block of data is fetched into the cache to
satisfy a single memory reference, it is likely that there will be future references to that same memory
location or to other words in the block.

Figure 2.6b depicts the use of multiple levels of cache. The L2 cache is slower and typically larger than the L1 cache, and the L3 cache is slower and typically larger than the L2 cache.
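The check, fetch-on-miss, and block-fill behaviour described above can be sketched with a toy direct-mapped cache. The line size, the number of lines, and the list standing in for main memory are assumptions made only for this illustration.

```python
# Toy direct-mapped cache: on a miss, a whole block is brought in from main memory.
LINE_SIZE = 4                                  # words per block (assumed)
NUM_LINES = 8                                  # cache lines (assumed)
main_memory = list(range(1024))                # stand-in for the large, slow memory
cache = {}                                     # line index -> (tag, block of words)

def read_word(addr):
    block_no = addr // LINE_SIZE
    line, tag = block_no % NUM_LINES, block_no // NUM_LINES
    entry = cache.get(line)
    if entry and entry[0] == tag:              # hit: deliver the word from the cache
        return entry[1][addr % LINE_SIZE]
    start = block_no * LINE_SIZE               # miss: read the whole block first
    cache[line] = (tag, main_memory[start:start + LINE_SIZE])
    return cache[line][1][addr % LINE_SIZE]

read_word(100)    # miss: the block containing word 100 is loaded
read_word(101)    # hit: locality of reference pays off
```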

Fig 2.6

Virtual memory
Virtual memory is a method of using the computer's hard drive to provide extra memory for the computer. Fixed-size segments of memory, known as pages, are stored on the hard drive. When a page that is currently held in virtual memory is requested, it is loaded into an actual (physical) memory address.
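A heavily simplified sketch of that demand-paging idea follows; the page size, the pages kept "on disk", and the lack of any replacement policy are simplifications made only for illustration.

```python
# Minimal demand-paging sketch: a page is brought into memory the first time it is touched.
PAGE_SIZE = 4096
disk_pages = {0: b"A" * PAGE_SIZE, 1: b"B" * PAGE_SIZE}   # pages held on the hard drive
page_table = {}                                            # virtual page -> physical frame
physical_memory = {}                                       # frame -> page contents

def read_byte(virtual_addr):
    vpn, offset = divmod(virtual_addr, PAGE_SIZE)
    if vpn not in page_table:                  # page fault: load the page from disk
        frame = len(physical_memory)
        physical_memory[frame] = disk_pages[vpn]
        page_table[vpn] = frame
    return physical_memory[page_table[vpn]][offset]

read_byte(4100)    # faults on virtual page 1, then returns the byte at offset 4
```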


interleaved memory
Main memory is composed of a collection of DRAM memory chips. A number of chips can be
grouped together to form a memory bank. It is possible to organize the memory banks in a way known
as interleaved memory. Each bank is independently able to service a memory read or write request, so
that a system with K banks can service K requests simultaneously, increasing memory read or write
rates by a factor of K. If consecutive words of memory are stored in different banks, then the transfer
of a block of memory is speeded up.
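With the common low-order interleave, consecutive word addresses rotate through the banks, which is what lets K banks serve up to K accesses at once. The bank count below is an assumption made for the example.

```python
# Low-order interleaving: word address -> (bank, offset within bank).
K = 4                                   # number of memory banks (assumed)

def bank_and_offset(word_address):
    return word_address % K, word_address // K

for addr in range(6):
    print(addr, bank_and_offset(addr))
# Words 0..3 fall in banks 0..3; word 4 wraps around to bank 0 again,
# so a block of consecutive words can be fetched from all banks in parallel.
```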

magnetic tape
Tape systems use the same reading and recording techniques as disk systems. The medium is flexible
polyester (similar to that used in some clothing) tape coated with magnetizable material. The coating
may consist of particles of pure metal in special binders or vapor-plated metal films. The tape and the
tape drive are analogous to a home tape recorder system. Tape widths vary from 0.38 cm (0.15 inch)
to 1.27 cm (0.5 inch).

Data on the tape are structured as a number of parallel tracks running lengthwise. Earlier tape systems
typically used nine tracks. This made it possible to store data one byte at a time, with an additional
parity bit as the ninth track. This was followed by tape systems using 18 or 36 tracks, corresponding to
a digital word or double word. The recording of data in this form is referred to as parallel recording.
Most modern systems instead use serial recording, in which data are laid out as a sequence of
bits along each track, as is done with magnetic disks. As with the disk, data are read and written
in contiguous blocks, called physical records, on a tape. Blocks on the tape are separated by gaps
referred to as interrecord gaps. As with the disk, the tape is formatted to assist in locating physical
records.

The typical recording technique used in serial tapes is referred to as serpentine recording. In this
technique, when data are being recorded, the first set of bits is recorded along the whole length of the
tape. When the end of the tape is reached, the heads are repositioned to record a new track, and the
tape is again recorded on its whole length, this time in the opposite direction. That process continues,
back and forth, until the tape is full (Figure 2.7).

Fig 2.7
Magnetic tape was the first kind of secondary memory. It is still widely used as the lowest-cost,
slowest-speed member of the memory hierarchy.

Hard disk drive


A hard disk drive (sometimes abbreviated as a hard drive, Hd, or Hdd) is a non-volatile data storage
device. It is usually installed internally in a computer, attached directly to the disk controller of the
computer's motherboard. It contains one or more platters, housed inside of an air-sealed casing. Data
is written to the platters using a magnetic head, which moves rapidly over them as they spin. Data is
written to the platters using a magnetic head, which moves rapidly over them as they spin.

Fig 2.8
Internal hard disks reside in a drive bay, connected to the motherboard using an ATA, SCSI,
or SATA cable. They are powered by a connection to the computer's PSU (power supply unit).

Fig 2.9
As shown in the picture above, the desktop hard drive consists of the following components: the head
actuator, read/write actuator arm, read/write head, spindle, and platter. On the back of a hard drive is
a circuit board called the disk controller or interface board. This circuit is what allows the hard drive
to communicate with the computer.


actuator
An actuator is an electric, motor-driven device that moves the hard drive's head arm. In the past, the actuator within a hard drive was controlled by a stepper motor; today it is usually controlled by a servo motor. Figure 2.9 above shows the inside of a hard drive, including the head actuator.

access arm
Alternatively called the read/write head arm, head arm, or actuator arm, the access arm is found in every mechanical hard drive and is similar to the arm of a record player. As the platter spins, the access arm pivots about its axis to move the read/write heads so they can read, write, or delete information.

Read/ write head


A read/write head or rW head is a device on the arm of a hard drive. It reads data from and writes data to the hard drive's disk platters. Hard drives usually have one read/write head for each platter side; the head rests on the platter while the drive is idle. As the platter spins, an air cushion develops, making the head float 3 to 20 millionths of an inch above the platter. When data needs to be read or written, the read/write arm is moved and controlled by a motor in the actuator.

spindle
The spindle holds the hard drive's platters in place. A traditional hard drive often contains multiple platters; the spindle holds them in a fixed position with enough space between them for the read/write arms to reach the data on the disks.

platter
A platter is an aluminium, glass, or ceramic disk, coated with magnetic media, that is located within a hard drive to store the computer's data permanently. When the computer is turned on, the platters rotate at a fixed number of RPM (rotations per minute). This rate varies depending on the model of hard drive; 7200 RPM is a common example. As the disk platters rotate, the read/write head accesses information on one of the platters. To help store and retrieve the data, each platter is divided into tracks (concentric rings on each side of the platter) and each track into sectors; the set of tracks at the same position on every platter is called a cylinder. A sector is the smallest physical storage unit on a disk, almost always 512 bytes in size.
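As a rough, back-of-the-envelope illustration of how tracks, sectors, and the 512-byte sector size determine capacity, the geometry figures below are invented and do not describe any particular drive.

```python
# Raw capacity from a classic cylinder/head/sector description (figures assumed).
cylinders = 16_383
heads = 16                       # one read/write head per platter surface
sectors_per_track = 63
bytes_per_sector = 512

capacity_bytes = cylinders * heads * sectors_per_track * bytes_per_sector
print(round(capacity_bytes / 1024**3, 1), "GiB")   # about 7.9 GiB for this made-up geometry
```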


Fig 2.11

ssd
Short for solid-state drive, an ssd is a storage medium that uses non-volatile memory to hold and
access data. Unlike a hard drive, an SSD has no moving parts, which gives it advantages, such as
faster access time, noiseless operation, higher reliability, and lower power consumption.

optical disc
Alternatively called a disc drive, optical media, optical storage, and optical disc drive, an optical
disc is any media read using a laser assembly. The most common types of optical media are Blu-
ray, CDs, and DVDs. Computers can read and write to CDs and DVDs using a CD writer or DVD writer drive, and a Blu-ray disc is read with a Blu-ray drive. Drives such as CD-R and DVD-R drives, which can both read and write information to discs, are commonly called writers or burners.


compact disc
Abbreviated as cd, a compact disc is a flat, round, optical storage medium invented by James
Russell. The first CD was created at a Philips factory in Germany on August 17, 1982. The picture
is an example of the bottom of a standard compact disc and is the side the disc player reads. The
opposite side of the disc has a label to help indicate what is on the disc.

Fig 2.12

cd-rom
Both the audio CD and the CD-ROM (compact disk read-only memory) share a similar technology.
The main difference is that CD-ROM players are more rugged and have error correction devices to
ensure that data are properly transferred from disk to computer. Both types of disks are made the
same way. The disk is formed from a resin, such as polycarbonate. Digitally recorded information
(either music or computer data) is imprinted as a series of microscopic pits on the surface of the
polycarbonate. This is done, first of all, with a finely focused, high-intensity laser to create a master
disk. The master is used, in turn, to make a die to stamp out copies onto polycarbonate. The pitted
surface is then coated with a highly reflective surface, usually aluminum or gold. This shiny surface is
protected against dust and scratches by a top coat of clear acrylic. Finally, a label can be silkscreened
onto the acrylic.

dVd
The DVD’s greater capacity is due to three differences from CDs

Fig 2.13


1. Bits are packed more closely on a DVD. The spacing between loops of a spiral on a CD is 1.6 μm and the minimum distance between pits along the spiral is 0.834 μm. The DVD uses a laser with shorter wavelength and achieves a loop spacing of 0.74 μm and a minimum distance between pits of 0.4 μm. The result of these two improvements is about a seven-fold increase in capacity, to about 4.7 GB.
2. The DVD employs a second layer of pits and lands on top of the first layer. A dual layer DVD has
a semi reflective layer on top of the reflective layer, and by adjusting focus, the lasers in DVD
drives can read each layer separately. This technique almost doubles the capacity of the disk, to
about 8.5 GB. The lower reflectivity of the second layer limits its storage capacity so that a full
doubling is not achieved.
3. The DVD-ROM can be two sided, whereas data are recorded on only one side of a CD. This
brings total capacity up to 17 GB.

Fig 2.14

High-Definition Optical Disks


High-definition optical disks are designed to store high-definition videos and to provide significantly
greater storage capacity compared to DVDs. The higher bit density is achieved by using a laser
with a shorter wavelength, in the blue-violet range. The data pits, which constitute the digital 1s and
0s, are smaller on the high-definition optical disks compared to DVD because of the shorter laser
wavelength. Two disk formats and technologies initially competed for market acceptance:
HD DVD and Blu-ray DVD. The Blu-ray scheme ultimately achieved market dominance. The HD
DVD scheme can store 15 GB on a single layer on a single side. Blu-ray positions the data layer on
the disk closer to the laser (shown on the right-hand side of each diagram in Figure 2.14). This enables
a tighter focus and less distortion and thus smaller pits and tracks. Blu-ray can store 25 GB on a single
layer. Three versions are available: read only (BD-ROM), recordable once (BD-R), and rerecordable
(BD-RE).


raid Levels
RAID (redundant array of independent disks) is a setup consisting of multiple disks for data storage.
They are linked together to prevent data loss and/or speed up performance. Having multiple disks
allows the employment of various techniques like disk striping, disk mirroring, and parity.

raid 0: striping
RAID 0, also known as a striped set or a striped volume, requires a minimum of two disks. The disks
are merged into a single large volume where data is stored evenly across the number of disks in the
array.
This process is called disk striping and involves splitting data into blocks and writing it simultaneously/
sequentially on multiple disks. Configuring the striped disks as a single partition increases performance
since multiple disks do reading and writing operations simultaneously. Therefore, RAID 0 is generally
implemented to improve speed and efficiency.

Fig 2.15
RAID 0 is the most affordable type of redundant disk configuration and is relatively easy to set up.
Still, it does not include any redundancy, fault tolerance, or parity in its composition. Hence, problems
on any of the disks in the array can result in complete data loss. This is why it should only be used
for non-critical storage, such as temporary files backed up somewhere else.
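The block-to-disk mapping used by striping can be written down in a couple of lines; the number of disks and the block numbering below are assumptions made only for illustration.

```python
# Round-robin placement of data blocks across a striped (RAID 0) array.
DISKS = 3                              # number of drives in the array (assumed)

def place(block_number):
    return block_number % DISKS, block_number // DISKS    # (disk, stripe within disk)

for block in range(6):
    disk, stripe = place(block)
    print("block", block, "-> disk", disk, "stripe", stripe)
# Consecutive blocks land on different disks, so they can be read or written in parallel.
```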
advantages of raid 0
• Cost-efficient and straightforward to implement.
• Increased read and write performance.
• No overhead (total capacity use).
disadvantages of raid 0
• Doesn't provide fault tolerance or redundancy.


When Raid 0 Should Be Used


RAID 0 is used when performance is a priority and reliability is not. If you want to utilize your drives
to the fullest and don't mind losing data, opt for RAID 0.
On the other hand, such a configuration does not necessarily have to be unreliable. You can set up disk
striping on your system along with another RAID array that ensures data protection and redundancy.
raid 1: mirroring
RAID 1 is an array consisting of at least two disks where the same data is stored on each to ensure
redundancy. The most common use of RAID 1 is setting up a mirrored pair consisting of two disks in
which the contents of the first disk are mirrored on the second. This is why such a configuration is also
called mirroring.
Unlike with RAID 0, where the focus is solely on speed and performance, the primary goal of RAID 1
is to provide redundancy. It eliminates the possibility of data loss and downtime by replacing a failed
drive with its replica.

Fig 2.16
In such a setup, the array volume is as big as the smallest disk and operates as long as one drive is
operational. Apart from reliability, mirroring enhances read performance as a request can be handled
by any of the drives in the array. On the other hand, the write performance remains the same as with
one disk and is equal to the slowest disk in the configuration.
advantages of raid 1
• Increased read performance.
• Provides redundancy and fault tolerance.
• Simple to configure and easy to use.
disadvantages of raid 1
• Uses only half of the storage capacity.
• More expensive (needs twice as many drives).
• Requires powering down your computer to replace a failed drive.

When Raid 1 Should Be Used


RAID 1 is used for mission-critical storage that requires a minimal risk of data loss. Accounting
systems often opt for RAID 1 as they deal with critical data and require high reliability.
It is also suitable for smaller servers with only two disks, as well as if you are searching for a simple
configuration you can easily set up (even at home).
Raid 2: Bit-Level Striping with Dedicated Hamming-Code Parity
RAID 2 is rarely used in practice today. It combines bit-level striping with error checking and
information correction. This RAID implementation requires two groups of disks – one for writing the
data and another for writing error correction codes. RAID 2 also requires a special controller for the
synchronized spinning of all disks.
Instead of data blocks, RAID 2 stripes data at the bit level across multiple disks. Additionally, it uses Hamming error-correcting code (ECC) and stores this information on the redundancy disk.

Fig 2.17
The array calculates the error-correcting code on the fly. While writing, it stripes the data across the data disks and writes the code to the redundancy disk. While reading data from the disks, it also reads from the redundancy disk to verify the data and make corrections if needed.
advantages of raid 2
• Reliability.
• The ability to correct stored information.
disadvantages of raid 2
• Expensive.
• Difficult to implement.
• Requires entire disks for ECC.


When Raid 2 Should Be Used


RAID 2 is not a common practice today as most of its features are now available on modern hard
disks. Due to its cost and implementation requirements, this RAID level never became popular among
developers.
Raid 3: Bit-Level Striping with Dedicated Parity
Like RAID 2, RAID 3 is rarely used in practice. This RAID implementation utilizes bit-level striping
and a dedicated parity disk. Because of this, it requires at least three drives, where two are used for
storing data strips, and one is used for parity.
To allow synchronized spinning, RAID 3 also needs a special controller. Due to its configuration
and synchronized disk spinning, it achieves better performance rates with sequential operations than
random read/write operations.

Fig 2.18
advantages of raid 3
• Good throughput when transferring large amounts of data.
• High efficiency with sequential operations.
• Disk failure resiliency.
disadvantages of raid 3
• Not suitable for transferring small files.
• Complex to implement.
• Difficult to set up as software RAID.
When Raid 3 Should Be Used
RAID 3 is not commonly used today. Its features are beneficial to a limited number of use cases
requiring high transfer rates for long sequential reads and writes (such as video editing and production).
Raid 4: Block-Level Striping with Dedicated Parity
RAID 4 is another unpopular standard RAID level. It consists of block-level data striping across two
or more independent disks and a dedicated parity disk.


The implementation requires at least three disks – two for storing data strips and one dedicated for storing parity and providing redundancy. As each disk is independent and there is no synchronized spinning, there is no need for a special synchronization controller.

Fig 2.19
The RAID 4 configuration is prone to bottlenecks because the parity bits for every data block are stored on a single drive. Such system bottlenecks have a large impact on overall performance.
Advantages of RAID 4
• Fast read operations.
• Low storage overhead.
• Simultaneous I/O requests.
Disadvantages of RAID 4
• Bottlenecks that have a big effect on overall performance.
• Slow write operations.
• Redundancy is lost if the parity disk fails.
When Raid 4 Should Be Used
Considering its configuration, RAID 4 works best with use cases requiring sequential reading and
writing data processes of huge files. Still, just like with RAID 3, in most solutions, RAID 4 has been
replaced with RAID 5.
Raid 5: Striping with Parity
RAID 5 is considered the most secure and most common RAID implementation. It combines striping
and parity to provide a fast and reliable setup. Such a configuration gives the user storage usability as
with RAID 1 and the performance efficiency of RAID 0.
This RAID level consists of at least three hard drives (and at most, 16). Data is divided into data strips
and distributed across different disks in the array. This allows for high performance rates due to fast
read data transactions which can be done simultaneously by different drives in the array.


Fig 2.20
Parity bits are distributed evenly on all disks after each sequence of data has been saved. This feature
ensures that you still have access to the data from parity bits in case of a failed drive. Therefore, RAID
5 provides redundancy through parity bits instead of mirroring.
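The parity itself is typically a simple XOR across the data strips, which is what makes it possible to rebuild a lost strip. The sketch below shows the idea on three byte strings; the strip contents and sizes are invented, and the rotation of parity across drives is not shown.

```python
# XOR parity: any single missing strip can be rebuilt from the others plus the parity.
def xor_strips(*strips):
    return bytes(b1 ^ b2 ^ b3 for b1, b2, b3 in zip(*strips))

strip_a, strip_b, strip_c = b"AAAA", b"BBBB", b"CCCC"
parity = xor_strips(strip_a, strip_b, strip_c)

# Suppose the drive holding strip_b fails; XOR the survivors with the parity to rebuild it.
rebuilt_b = xor_strips(strip_a, strip_c, parity)
print(rebuilt_b == strip_b)            # True
```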
Advantages of RAID 5
• High performance and capacity.
• Fast and reliable read speed.
• Tolerates single drive failure.
Disadvantages of RAID 5
• Longer rebuild time.
• Loses one drive's worth of storage capacity to parity.
• If more than one disk fails, data is lost.
• More complex to implement.
When Raid 5 Should Be Used
RAID 5 is often used for file and application servers because of its high efficiency and optimized
storage. Additionally, it is the best, cost-effective solution if continuous data access is a priority and/
or you require installing an operating system on the array.
Raid 6: Striping with Double Parity
RAID 6 is an array similar to RAID 5 with an addition of its double parity feature. For this reason, it
is also referred to as the double-parity RAID.
This setup requires a minimum of four drives. The setup resembles RAID 5 but includes two additional
parity blocks distributed across the disk. Therefore, it uses block-level striping to distribute the data
across the array and stores two parity blocks for each data block.


Fig 2.21
Block-level striping with two parity blocks allows two disk failures before any data is lost. This
means that in an event where two disks fail, RAID can still reconstruct the required data.
Its performance depends on how the array is implemented, as well as the total number of drives. Write
operations are slower compared to other configurations due to its double parity feature.
advantages of raid 6
• High fault and drive-failure tolerance.
• Storage efficiency (when more than four drives are used).
• Fast read operations.
disadvantages of raid 6
• Rebuild time can take up to 24 hours.
• Slow write performance.
• Complex to implement.
• More expensive.
When Raid 6 Should Be Used
RAID 6 is a good solution for mission-critical applications where data loss cannot be tolerated.
Therefore, it is often used for data management in defence sectors, healthcare, and banking.
raid 10: Mirroring with Striping
RAID 10 is part of a group called nested or hybrid RAID, which means it is a combination of two
different RAID levels. In the case of RAID 10, the array combines level 1 mirroring and level 0
striping. This RAID array is also known as RAID 1+0.
RAID 10 uses logical mirroring to write the same data on two or more drives to provide redundancy. If
one disk fails, there is a mirrored image of the data stored on another disk. Additionally, the array uses
block-level striping to distribute chunks of data across different drives. This improves performance
and read and write speed as the data is simultaneously accessed from multiple disks.


Fig 2.22
To implement such a configuration, the array requires at least four drives, as well as a disk controller.
advantages of raid 10
• High performance.
• High fault-tolerance.
• Fast read and write operations.
• Fast rebuild time.
disadvantages of raid 10
• Limited scalability.
• Costly (compared to other RAID levels).
• Uses half of the disk space capacity.
• More complicated to set up.
When Raid 10 Should Be Used
RAID 10 is often used in use cases that require storing high volumes of data, fast read and write times,
and high fault tolerance. Accordingly, this RAID level is often implemented for email servers, web
hosting servers, and databases.

cloud storage
Cloud storage is a service model in which data is transmitted and stored on remote storage systems,
where it is maintained, managed, backed up and made available to users over a network - typically,
the internet. Users generally pay for their cloud data storage at a monthly, per-consumption rate.
types of cloud storage
There are three main cloud storage options, based on different access models: public, private and
hybrid.


public cloud
These storage services provide a multi-tenant storage environment that is most suited for unstructured
data on a subscription basis. Data is stored in the service provider's data centers with storage data
spread across multiple regions or continents. Customers generally pay on a per-use basis, similar to
the utility payment model. In many cases, there are also transaction charges based on frequency and
the volume of data being accessed.
private cloud
A private cloud storage service is an in-house storage resource deployed as a dedicated environment
protected behind a firewall. Internally hosted private cloud storage implementations emulate some
of the features of commercial public cloud services, providing easy access and allocation of storage
resources for business users, as well as object storage protocols. Private clouds are appropriate for
users who need customization and more control over their data or who have stringent data security or
regulatory requirements.
Hybrid cloud
This cloud storage option is a mix of private cloud storage and third-party public cloud storage
services, with a layer of orchestration management to operationally integrate the two platforms.
storage in the cloud: A popular alternative to secondary storage devices is to store and manage data
on a remote server. Storage in the cloud has gained popularity with the increase in Internet access,
mobile computing devices, and high bandwidth connectivity. cloud storage is a service provided
through a network (usually the Internet), either free or fee based, to back up, maintain, and manage
data remotely. It offers convenient access to data from any place on any networked device. This
portability and ubiquitous access have made such platforms as Google Docs, Apple iCloud, and
Microsoft SkyDrive popular alternatives to secondary devices where data remains locally controlled
and managed. Many providers of cloud services offer unlimited storage capacity and the ability to
share data with other users. However, this storage service does raise security and reliability concerns.
Without a consistent Internet connection, a user will not have ready access to the data. Protection of
the data is subject to the practices of the provider. Should the cloud service go out of business, data
recovery may be difficult. Despite the convenience of storing data on a remote server, multimedia
developers should thoroughly research the reliability of the service provider before entrusting data to
the cloud.

references

1. Computer Organization and Architecture: Designing for Performance, Eighth Edition (William Stallings)
2. https://www.computerhope.com
3. https://phoenixnap.com


module iii
input output organisation

interrupts and instruction cycles

When the external device becomes ready to be serviced (that is, when it is ready to accept more data from the processor), the I/O module for that external device sends an interrupt request signal to
the processor. The processor responds by suspending operation of the current program, branching off
to a program to service that particular I/O device, known as an interrupt handler, and resuming the
original execution after the device is serviced.

Fig 3.1
From the point of view of the user program, an interrupt is just that: an interruption of the normal
sequence of execution. When the interrupt processing is completed, execution resumes (Figure
3.1). Thus, the user program does not have to contain any special code to accommodate interrupts;
the processor and the operating system are responsible for suspending the user program and then
resuming it at the same point.

To accommodate interrupts, an interrupt cycle is added to the instruction cycle, as shown in Figure
3.2. In the interrupt cycle, the processor checks to see if any interrupts have occurred, indicated by
the presence of an interrupt signal. If no interrupts are pending, the processor proceeds to the fetch
cycle and fetches the next instruction of the current program. If an interrupt is pending, the processor
does the following:

• It suspends execution of the current program being executed and saves its context. This means
saving the address of the next instruction to be executed (current contents of the program
counter) and any other data relevant to the processor’s current activity.
• It sets the program counter to the starting address of an interrupt handler routine.
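Pulling the fetch cycle, execute cycle, and the added interrupt check together, a toy sketch of the loop is given below. The instruction names, handler address, and list used as the interrupt line are invented placeholders, not the behaviour of any real processor.

```python
# Skeleton of the instruction cycle with an interrupt check (all names are illustrative).
memory = {0: "add", 1: "store", 2: "halt",
          100: "handle-io", 101: "return-from-interrupt"}   # 100.. = interrupt handler
HANDLER_ADDRESS = 100
interrupt_line = ["printer"]       # an I/O module has already raised an interrupt request
saved_context = []
pc, halted = 0, False

while not halted:
    instruction = memory[pc]                       # fetch cycle
    pc += 1
    if instruction == "halt":                      # execute cycle (grossly simplified)
        halted = True
    elif instruction == "return-from-interrupt":
        pc = saved_context.pop()                   # restore the interrupted program
    # ... other instructions would be executed here ...

    if interrupt_line and not halted:              # interrupt cycle: any request pending?
        interrupt_line.pop()
        saved_context.append(pc)                   # save context (just the PC here)
        pc = HANDLER_ADDRESS                       # branch to the interrupt handler routine
    print(instruction)                             # trace: add, handle-io, ..., store, halt
```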

Fig 3.2

Fig 3.3

The processor now proceeds to the fetch cycle and fetches the first instruction in the interrupt handler
program, which will service the interrupt. The interrupt handler program is generally part of the
operating system. Typically, this program determines the nature of the interrupt and performs whatever
actions are needed.


multiple interrupts

For example, a program may be receiving data from a communications line and printing results. The
printer will generate an interrupt every time that it completes a print operation. The communication
line controller will generate an interrupt every time a unit of data arrives. The unit could either be a
single character or a block, depending on the nature of the communications discipline.

Two approaches can be taken to dealing with multiple interrupts. The first is to disable interrupts
while an interrupt is being processed. A disabled interrupt simply means that the processor can and
will ignore that interrupt request signal. If an interrupt occurs during this time, it generally remains
pending and will be checked by the processor after the processor has enabled interrupts. Thus, when
a user program is executing and an interrupt occurs, interrupts are disabled immediately. After the
interrupt handler routine completes, interrupts are enabled before resuming the user program, and the
processor checks to see if additional interrupts have occurred. This approach is nice and simple, as
interrupts are handled in strict sequential order (Figure 3.4a).

Fig 3.4


A second approach is to define priorities for interrupts and to allow an interrupt of higher priority to
cause a lower-priority interrupt handler to be itself interrupted (Figure 3.4b). As an example of this
second approach, consider a system with three I/O devices: a printer, a disk, and a communications
line, with increasing priorities of 2, 4, and 5, respectively. Consider the following possible sequence. A user program begins at t = 0. At t = 10, a printer interrupt occurs; user information is placed on the system stack and execution continues at the printer interrupt service routine (ISR). While this routine is still executing, at t = 15, a communications interrupt occurs. Because the communications line has higher priority than the printer, the interrupt is honored. The printer ISR is interrupted, its state is pushed onto the stack, and execution continues at the communications ISR. While this routine is executing, a disk interrupt occurs (t = 20). Because this interrupt is of lower priority, it is simply held, and the communications ISR runs to completion.
When the communications ISR is complete (t = 25), the previous processor state is restored, which is the execution of the printer ISR. However, before even a single instruction in that routine can be executed, the processor honors the higher-priority disk interrupt and control transfers to the disk ISR. Only when that routine is complete (t = 35) is the printer ISR resumed. When that routine completes (t = 40), control finally returns to the user program.

Bus interconnection

A bus is a communication pathway connecting two or more devices. A key characteristic of a bus is
that it is a shared transmission medium. Multiple devices connect to the bus, and a signal transmitted
by any one device is available for reception by all other devices attached to the bus. If two devices
transmit during the same time period, their signals will overlap and become garbled. Thus, only one
device at a time can successfully transmit.

Typically, a bus consists of multiple communication pathways, or lines. Each line is capable of
transmitting signals representing binary 1 and binary 0. Over time, a sequence of binary digits can
be transmitted across a single line. Taken together, several lines of a bus can be used to transmit
binary digits simultaneously (in parallel). For example, an 8-bit unit of data can be transmitted
over eight bus lines. Computer systems contain a number of different buses that provide pathways
between components at various levels of the computer system hierarchy. A bus that connects major
computer components (processor, memory, I/O) is called a system bus. The most common computer
interconnection structures are based on the use of one or more system buses.

Bus Structure

On any bus the lines can be classified into three functional groups (Figure 3.5): data, address, and
control lines. In addition, there may be power distribution lines that supply power to the attached
modules.


The data lines provide a path for moving data among system modules. These lines, collectively, are
called the data bus. The data bus may consist of 32, 64, 128, or even more separate lines, the number
of lines being referred to as the width of the data bus. Because each line can carry only 1 bit at a time,
the number of lines determines how many bits can be transferred at a time. The width of the data bus
is a key factor in determining overall system performance. For example, if the data bus is 32 bits wide
and each instruction is 64 bits long, then the processor must access the memory module twice during
each instruction cycle.
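The arithmetic behind that example is just a ceiling division of the item size by the bus width, as the throwaway helper below shows.

```python
import math

# Bus transfers needed to move one item of a given size over a data bus (sizes in bits).
def transfers_needed(item_bits, data_bus_bits):
    return math.ceil(item_bits / data_bus_bits)

print(transfers_needed(64, 32))   # 2 memory accesses per 64-bit instruction, as in the text
print(transfers_needed(64, 64))   # 1 access if the data bus is widened to 64 bits
```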

Fig 3.5

The address lines are used to designate the source or destination of the data on the data bus. For
example, if the processor wishes to read a word (8, 16, or 32 bits) of data from memory, it puts the
address of the desired word on the address lines. Clearly, the width of the address bus determines the
maximum possible memory capacity of the system.
The control lines are used to control the access to and the use of the data and address lines. Because
the data and address lines are shared by all components, there must be a means of controlling their
use. Control signals transmit both command and timing information among system modules. Timing
signals indicate the validity of data and address information. Command signals specify operations to
be performed. Typical control lines include

• Memory write: Causes data on the bus to be written into the addressed location
• Memory read: Causes data from the addressed location to be placed on the bus
• I/O write: Causes data on the bus to be output to the addressed I/O port
• I/O read: Causes data from the addressed I/O port to be placed on the bus
• Transfer ACK: Indicates that data have been accepted from or placed on the bus
• Bus request: Indicates that a module needs to gain control of the bus
• Bus grant: Indicates that a requesting module has been granted control of the bus
• Interrupt request: Indicates that an interrupt is pending
• Interrupt ACK: Acknowledges that the pending interrupt has been recognized
• Clock: Used to synchronize operations
• Reset: Initializes all modules

Fig 3.6

Physically, the system bus is actually a number of parallel electrical conductors. In the classic bus
arrangement, these conductors are metal lines etched in a card or board (printed circuit board). The
bus extends across all of the system components, each of which taps into some or all of the bus lines.
The classic physical arrangement is depicted in Figure 3.6. In this example, the bus consists of two
vertical columns of conductors. At regular intervals along the columns, there are attachment points
in the form of slots that extend out horizontally to support a printed circuit board. Each of the major
system components occupies one or more boards and plugs into the bus at these slots. The entire
arrangement is housed in a chassis. This scheme can still be used for some of the buses associated
with a computer system. However, modern systems tend to have all of the major components on
the same board with more elements on the same chip as the processor. Thus, an on-chip bus may
connect the processor and cache memory, whereas an on-board bus may connect the processor to
main memory and other components.

This arrangement is most convenient. A small computer system may be acquired and then expanded
later (more memory, more I/O) by adding more boards. If a component on a board fails, that board
can easily be removed and replaced.


Bus width

Although a variety of different bus implementations exist, there are a few basic parameters or design elements that serve to classify and differentiate buses. The key elements include bus width, data transfer type, and timing.

Bus Width: The width of the data bus has an impact on system performance: The wider the data bus,
the greater the number of bits transferred at one time. The width of the address bus has an impact on
system capacity: the wider the address bus, the greater the range of locations that can be referenced.

data transfer type

A bus supports various data transfer types, as illustrated in Figure 3.7. All buses support both write
(master to slave) and read (slave to master) transfers. In the case of a multiplexed address/data bus,
the bus is first used for specifying the address and then for transferring the data. For a read operation,
there is typically a wait while the data are being fetched from the slave to be put on the bus. For either
a read or a write, there may also be a delay if it is necessary to go through arbitration to gain control
of the bus for the remainder of the operation (i.e., seize the bus to request a read or write, then seize
the bus again to perform a read or write).

In the case of dedicated address and data buses, the address is put on the address bus and remains
there while the data are put on the data bus. For a write operation, the master puts the data onto the
data bus as soon as the address has stabilized and the slave has had the opportunity to recognize its
address. For a read operation, the slave puts the data onto the data bus as soon as it has recognized its
address and has fetched the data.


Fig 3.7
There are also several combination operations that some buses allow. A read–modify–write operation
is simply a read followed immediately by a write to the same address. The address is only broadcast
once at the beginning of the operation. The whole operation is typically indivisible to prevent any
access to the data element by other potential bus masters. The principal purpose of this capability is
to protect shared memory resources in a multiprogramming system

Read-after-write is an indivisible operation consisting of a write followed immediately by a read from


the same address. The read operation may be performed for checking purposes.

Some bus systems also support a block data transfer. In this case, one address cycle is followed by n
data cycles. The first data item is transferred to or from the specified address; the remaining data items
are transferred to or from subsequent addresses.


Multiple-Bus Hierarchies

If a great number of devices are connected to the bus, performance will suffer. There are two main
causes:

1. In general, the more devices attached to the bus, the greater the bus length and hence the greater
the propagation delay. This delay determines the time it takes for devices to coordinate the use of
the bus. When control of the bus passes from one device to another frequently, these propagation
delays can noticeably affect performance.

2. The bus may become a bottleneck as the aggregate data transfer demand approaches the capacity
of the bus. This problem can be countered to some extent by increasing the data rate that the bus
can carry and by using wider buses (e.g., increasing the data bus from 32 to 64 bits). However,
because the data rates generated by attached devices (e.g., graphics and video controllers, network
interfaces) are growing rapidly, this is a race that a single bus is ultimately destined to lose.

Fig 3.8a
Example Bus Configuration

Figure 3.8a shows some typical examples of I/O devices that might be attached to the expansion bus.
Network connections include local area networks (LANs) such as a 10-Mbps Ethernet and connections
to wide area networks (WANs) such as a packet-switching network. SCSI (small computer system
interface) is itself a type of bus used to support local disk drives and other peripherals. A serial port
could be used to support a printer or scanner.


This traditional bus architecture is reasonably efficient but begins to break down as higher and higher
performance is seen in the I/O devices. In response to these growing demands, a common approach
taken by industry is to build a highspeed bus that is closely integrated with the rest of the system,
requiring only a bridge between the processor’s bus and the high-speed bus. This arrangement is
sometimes known as a mezzanine architecture.

Fig 3.8b
Figure 3.8b shows a typical realization of this approach. Again, there is a local bus that connects the
processor to a cache controller, which is in turn connected to a system bus that supports main memory.
The cache controller is integrated into a bridge, or buffering device, that connects to the high-speed
bus. This bus supports connections to high-speed LANs, such as Fast Ethernet at 100 Mbps, video and
graphics workstation controllers, as well as interface controllers to local peripheral buses, including
SCSI and FireWire. The latter is a high-speed bus arrangement specifically designed to support high-
capacity I/O devices. Lower-speed devices are still supported off an expansion bus, with an interface
buffering traffic between the expansion bus and the high-speed bus.

The advantage of this arrangement is that the high-speed bus brings high demand devices into closer
integration with the processor and at the same time is independent of the processor. Thus, differences
in processor and high-speed bus speeds and signal line definitions are tolerated. Changes in processor
architecture do not affect the high-speed bus, and vice versa.

synchronous and asynchronous bus

timing: Timing refers to the way in which events are coordinated on the bus. Buses use either
synchronous timing or asynchronous timing.

With synchronous timing, the occurrence of events on the bus is determined by a clock. The bus
includes a clock line upon which a clock transmits a regular sequence of alternating 1s and 0s of
equal duration. A single 1–0 transmission is referred to as a clock cycle or bus cycle and defines a
time slot. All other devices on the bus can read the clock line, and all events start at the beginning of
a clock cycle. Figure 3.9 shows a typical, but simplified, timing diagram for synchronous read and
write operations. Other bus signals may change at the leading edge of the clock signal (with a slight
reaction delay). Most events occupy a single clock cycle. In this simple example, the processor places
a memory address on the address lines during the first clock cycle and may assert various status
lines. Once the address lines have stabilized, the processor issues an address enable signal. For a read
operation, the processor issues a read command at the start of the second cycle. A memory module
recognizes the address and, after a delay of one cycle, places the data on the data lines. The processor
reads the data from the data lines and drops the read signal. For a write operation, the processor puts
the data on the data lines at the start of the second cycle, and issues a write command after the data
lines have stabilized. The memory module copies the information from the data lines during the third
clock cycle.

Fig 3.9


With asynchronous timing, the occurrence of one event on a bus follows and depends on the
occurrence of a previous event. In the simple read example of Figure 3.10a, the processor places
address and status signals on the bus. After pausing for these signals to stabilize, it issues a read
command, indicating the presence of valid address and control signals. The appropriate memory
decodes the address and responds by placing the data on the data line. Once the data lines have
stabilized, the memory module asserts the acknowledge line to signal the processor that the data
are available. Once the master has read the data from the data lines, it deasserts the read signal. This
causes the memory module to drop the data and acknowledge lines. Finally, once the acknowledge
line is dropped, the master removes the address information.
Figure 3.10b shows a simple asynchronous write operation. In this case, the master places the data
on the data lines at the same time that it puts signals on the status and address lines. The memory
module responds to the write command by copying the data from the data lines and then asserting
the acknowledge line. The master then drops the write signal and the memory module drops the
acknowledge signal.

Fig 3.10

PCI Bus

The peripheral component interconnect (PCI) is a popular high-bandwidth, processor-independent bus


that can function as a mezzanine or peripheral bus. Compared with other common bus specifications,
PCI delivers better system performance for high-speed I/O subsystems (e.g., graphic display adapters,
network interface controllers, disk controllers, and so on). The current standard allows the use of up
to 64 data lines at 66 MHz, for a raw transfer rate of 528 MByte/s, or 4.224 Gbps. But it is not just
a high speed that makes PCI attractive. PCI is specifically designed to meet economically the I/O
requirements of modern systems; it requires very few chips to implement and supports other buses
attached to the PCI bus.

Figure 3.11a shows a typical use of PCI in a single-processor system. A combined DRAM controller
and bridge to the PCI bus provides tight coupling with the processor and the ability to deliver data
at high speeds. The bridge acts as a data buffer so that the speed of the PCI bus may differ from
that of the processor’s I/O capability. In a multiprocessor system (Figure 3.11b), one or more PCI
configurations may be connected by bridges to the processor’s system bus. The system bus supports
only the processor/cache units, main memory, and the PCI bridges. Again, the use of bridges keeps
the PCI independent of the processor speed yet provides the ability to receive and deliver data rapidly.

Fig 3.11


SCSI Bus

A small computer systems interface (SCSI) is a standard interface for connecting peripheral devices
to a PC. Depending on the standard, generally it can connect up to 16 peripheral devices using a single
bus including one host adapter. SCSI is used to increase performance, deliver faster data transfer
transmission and provide larger expansion for devices such as CD-ROM drives, scanners, DVD
drives and CD writers. SCSI is also frequently used with RAID, servers, high-performance PCs and storage area networks. SCSI has a controller in charge of transferring data between the devices and
the SCSI bus. It is either embedded on the motherboard or a host adapter is inserted into an expansion
slot on the motherboard. The controller also contains SCSI basic input/output system, which is a
small chip providing the required software to access and control devices. Each device on a parallel
SCSI bus must be assigned a number between 0 and 7 on a narrow bus or 0 and 15 on a wider bus.
This number is called an SCSI ID. Newer serial SCSI IDs such as serialattached SCSI (SAS) use an
automatic process assigning a 7-bit number with the use of serial storage architecture initiators.

USB
Short for universal serial bus, USB (pronounced yoo-es-bee) is a plug and play interface that allows
a computer to communicate with peripheral and other devices. USB-connected devices cover a broad
range; anything from keyboards and mice to music players and flash drives. USB may also send
power to certain devices, such as powering smartphones and tablets and charging their batteries. The
first commercial release of the Universal Serial Bus (version 1.0) was in January 1996. This industry-
standard was then quickly adopted by Intel, Compaq, Microsoft, and other companies.

USB connector types

USB connectors come in different shapes and sizes. Most of the USB connectors, including the
standard USB, Mini-USB, and Micro-USB, have two or more variations of connectors. Further
information on each type is provided below.

Fig 3.12

direct memory access

DMA involves an additional module on the system bus. The DMA module (Figure 3.13) is capable
of mimicking the processor and, indeed, of taking over control of the system from the processor. It
needs to do this to transfer data to and from memory over the system bus. For this purpose, the DMA
module must use the bus only when the processor does not need it, or it must force the processor
to suspend operation temporarily. The latter technique is more common and is referred to as cycle
stealing, because the DMA module in effect steals a bus cycle.

When the processor wishes to read or write a block of data, it issues a command to the DMA
module, by sending to the DMA module the following information:

• Whether a read or write is requested, using the read or write control line between the processor
and the DMA module
• The address of the I/O device involved, communicated on the data lines
• The starting location in memory to read from or write to, communicated on the data lines and
stored by the DMA module in its address register
• The number of words to be read or written, again communicated via the data lines and stored
in the data count register

Fig 3.13
The processor then continues with other work. It has delegated this I/O operation to the DMA module.
The DMA module transfers the entire block of data, one word at a time, directly to or from memory,
without going through the processor. When the transfer is complete, the DMA module sends an
interrupt signal to the processor. Thus, the processor is involved only at the beginning and end of the
transfer.
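Pulling the four items listed above together, the sequence the processor follows can be sketched as below. The register names and the DMA-module interface are invented for illustration only; real controllers each define their own register layout.

```python
# Illustrative sketch of programming a DMA transfer (register and field names are made up).
dma_module = {}

def start_dma_transfer(direction, device_address, memory_start, word_count):
    dma_module["control"] = direction               # 'read' or 'write' control line
    dma_module["device"] = device_address           # which I/O device is involved
    dma_module["address_register"] = memory_start   # where in memory to read from / write to
    dma_module["data_count_register"] = word_count  # how many words to move
    # The processor now continues with other work; the DMA module moves the block
    # word by word and raises an interrupt when the count reaches zero.

start_dma_transfer("read", device_address=0x3F8, memory_start=0x8000, word_count=256)
print(dma_module)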


There are three different modes of DMA data transfer which are as follows

Burst Mode − In burst mode, an entire block of data is transferred in one contiguous sequence. Once the DMA controller is granted access to the system buses by the CPU, it transfers all bytes of data in the data block before yielding control of the system buses back to the CPU. This mode is useful for loading programs or data files into memory, but it does leave the CPU inactive for relatively long periods.

cycle stealing mode − In cycle stealing mode, the DMA controller obtains access to the system buses as in burst mode, using the BR (bus request) and BG (bus grant) signals. However, it transfers one byte of data and then deasserts BR, returning control of the system buses to the CPU. It continually issues requests via BR, transferring one byte of data per request, until it has transferred its entire block of data.

transparent mode − Transparent mode takes the most time to transfer a block of data, yet it is also the most efficient mode in terms of overall system performance. In transparent mode, the DMA controller transfers data only when the CPU is performing operations that do not use the system buses; for example, many operations change or process data entirely within the CPU.

references

1. Computer Organization and Architecture: Designing for Performance, Eighth Edition (William Stallings)
2. https://www.computerhope.com


module iV
operating system

An operating system is a program that acts as an interface between the user and the computer hardware
and controls the execution of all kinds of programs.

Fig 4.1

functions of operating system

Following are some of important functions of an operating System.


• Memory Management
• Processor Management
• Device Management
• File Management
• Security
• Control over system performance
• Job accounting
• Error detecting aids
• Coordination between other software and users

memory management

Memory management refers to the management of primary memory or main memory. Main memory is a large array of words or bytes where each word or byte has its own address. Main memory provides fast storage that can be accessed directly by the CPU, so for a program to be executed, it must be in main memory. The Operating System does the following activities for memory management.
• Keeps track of primary memory, i.e. which parts of it are in use and by whom, and which parts are not in use.
• In multiprogramming, the OS decides which process will get memory, when, and how much.
• Allocates memory when a process requests it.
• De-allocates memory when the process no longer needs it or has been terminated.
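As a toy illustration of the "keeps track / allocates / de-allocates" duties above, the sketch below manages a list of free holes with a first-fit policy. The memory size and the process names are assumptions made for the example; real allocators also coalesce holes and are far more sophisticated.

```python
# Toy first-fit memory manager (illustrative only).
free_blocks = [(0, 1024)]          # list of (start, size) holes; 1 KB of "memory" assumed
allocations = {}                   # process name -> (start, size)

def allocate(process, size):
    for i, (start, hole) in enumerate(free_blocks):
        if hole >= size:                                    # first hole big enough
            allocations[process] = (start, size)
            remaining = hole - size
            if remaining:
                free_blocks[i] = (start + size, remaining)  # shrink the hole
            else:
                free_blocks.pop(i)
            return start
    raise MemoryError("no hole large enough")

def deallocate(process):
    start, size = allocations.pop(process)
    free_blocks.append((start, size))                       # a real OS would also merge holes

allocate("editor", 300)
allocate("shell", 200)
deallocate("editor")
print(free_blocks)
```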

processor management

In a multiprogramming environment, the OS decides which process gets the processor, when, and for how much time. This function is called process scheduling. The Operating System does the following activities for processor management.
• Keeps track of the processor and the status of processes. The program responsible for this task is known as the traffic controller.
• Allocates the processor (CPU) to a process.
• De-allocates the processor when it is no longer required.

device management

The OS manages device communication via the devices' respective drivers. The Operating System does the following activities for device management.
• Keeps track of all devices. The program responsible for this task is known as the I/O controller.
• Decides which process gets the device, when, and for how much time.
• Allocates devices in an efficient way.
• De-allocates devices.

file management

A file system is normally organized into directories for easy navigation and usage. These directories may contain files and other directories. The Operating System does the following activities for file management.

• Keeps track of information, location, uses, status, etc. These collective facilities are often known as the file system.
• Decides who gets the resources.
• Allocates the resources.
• De-allocates the resources.

other important activities

Following are some of the important activities that Operating System does.

• security -- By means of password and similar other techniques, preventing unauthorized


access to programs and data.
• control over system performance -- Recording delays between request for a service and
response from the system.
• Job accounting -- Keeping track of time and resources used by various jobs and users.
• error detecting aids -- Production of dumps, traces, error messages and other debugging and
error detecting aids.
• Coordination between other software and users -- Coordination and assignment of
compilers, interpreters, assemblers and other software to the various users of the computer
systems.

type of operating system

Types of Operating System (OS)


Following are the popular types of OS (Operating System):
• Batch Operating System
• Multitasking/Time Sharing OS
• Multiprocessing OS
• Real Time OS
• Distributed OS
• Network OS
• Mobile OS
Batch Operating System
Some computer processes are very lengthy and time-consuming. To speed up processing, jobs with similar needs are batched together and run as a group. Such a system allows only one program at a
time. This OS is responsible for scheduling the jobs according to priority and the resource required.
The user of a batch operating system never directly interacts with the computer. In this type of OS,
every user prepares his or her job on an offline device like a punch card and submits it to the computer
operator.

multi-tasking/time-sharing operating systems

A time-sharing operating system enables people located at different terminals (shells) to use a single computer system at the same time. The processor (CPU) time shared among multiple users is termed time sharing. Fast switching between multiple jobs makes processing quicker and allows multiple users to share the computer system simultaneously. The users can interact with each job while it is running.

multiprocessor operating systems

Multiprocessor operating systems are also known as parallel OS or tightly coupled OS. Such operating systems have more than one processor in close communication, sharing the computer bus, the clock, and sometimes memory and peripheral devices. They execute multiple jobs at the same time and make processing faster.

multiprocessor systems have three main advantages:

• increased throughput: By increasing the number of processors, the system performs more
work in less time.
• economy of scale: Multiprocessor systems can cost less than equivalent multiple single-processor systems, because they can share peripherals, mass storage, and power supplies.
• increased reliability: If one processor fails, the remaining processors pick up a share of the work of the failed processor. The failure of one processor will not halt the system, but only slow it down.
real time operating system
In a real-time operating system, the time interval required to process and respond to inputs is very small. Examples: military software systems and space software systems are real-time OS examples.
distributed operating system
Distributed systems use many processors located in different machines to provide very fast computation to their users. In a distributed system, the different machines are connected in a network and
each machine has its own processor and own local memory. In this system, the operating systems on
all the machines work together to manage the collective network resource.


advantages of distributed systems


• Resource sharing
• Computation speed-up through load sharing
• Reliability
• Communication
Distributed systems require a networking infrastructure, such as a local area network (LAN) or a wide area network (WAN).

Network Operating System


A Network Operating System runs on a server. It provides the capability to manage data, users, groups, security, applications, and other networking functions.
mobile operating system
Mobile operating systems are those that are designed specifically to power smartphones, tablets, and wearable devices.
The most famous mobile operating systems are Android and iOS, but others include BlackBerry OS, webOS, and watchOS.

operating system services

program execution: The system must be able to load a program into memory and to run that program.
The program must be able to end its execution, either normally or abnormally (indicating error).

i/o operations: A running program may require I/O. This I/O may involve a file or an I/O device. For specific devices, special functions may be desired (such as to rewind a tape drive, or to blank a CRT screen). For efficiency and protection, users usually cannot control I/O devices directly. Therefore, the operating system must provide a means to do I/O.

file-system manipulation: The file system is of particular interest. Obviously, programs need to read
and write files. Programs also need to create and delete files by name.

communications: In many circumstances, one process needs to exchange information with another
process. Such communication can occur in two major ways. The first takes place between processes
that are executing on the same computer; the second takes place between processes that are executing
on different computer systems that are tied together by a computer network. Communications may
be implemented via shared memory, or by the technique of message passing, in which packets of
information are moved between processes by the operating system.


error detection: The operating system constantly needs to be aware of possible errors. Errors may
occur in the CPU and memory hardware (such as a memory error or a power failure), in I/O devices
(such as a parity error on tape, a connection failure on a network, or lack of paper in the printer),
and in the user program (such as an arithmetic overflow, an attempt to access an illegal memory
location, or a too-great use of CPU time). For each type of error, the operating system should take the
appropriate action to ensure correct and consistent computing.

Following are the three services provided by operating systems for ensuring the efficient operation
of the system itself.

1. resource allocation

When multiple users are logged on the system or multiple jobs are running at the same time, resources
must be allocated to each of them. Many different types of resources are managed by the operating
system.

2. accounting

The operating systems keep track of which users use how many and which kinds of computer
resources. This record keeping may be used for accounting (so that users can be billed) or simply for
accumulating usage statistics.

3. protection

When several disjoint processes execute concurrently, it should not be possible for one process
to interfere with the others, or with the operating system itself. Protection involves ensuring that all
access to system resources is controlled. Security of the system from outsiders is also important.
Such security starts with each user having to authenticate himself or herself to the system, usually by means of a
password, to be allowed access to the resources.

operating system properties

Batch processing

Batch processing is when a computer processes a number of tasks that it has collected in a group.
It is designed to be a completely automated process, without human intervention. Batch processing
began early on in the origin of computers. Batches of punch cards, with computer programming
instructions, would be processed at one time. The batch would run until it was completed, or an error
occurred, whereupon it would stop and manual intervention would be required.


Before the 1970s people used to have a single computer known as the mainframe. It was not possible
to directly feed the program to the computer. People used to give jobs to a computer operator in the
form of punch cards. Then the computer operator used to make batches of all these punch cards of
similar requirements to save the setup time. The main task of the batch operating system is to make
the batches of similar types of jobs and send these batches to the CPU for execution.

Fig 4.2
The batch operating system groups jobs that perform similar functions. These job groups are treated as a batch and executed together. A computer system with this operating system performs the
following batch processing activities:
1. A job is a single unit that consists of a preset sequence of commands, data, and programs.
2. Processing takes place in the order in which the jobs are received, i.e., first come, first served.
3. These jobs are stored in memory and executed without the need for manual intervention.
4. When a job is successfully run, the operating system releases its memory.

multitasking
Multitasking is a term used in modern computer systems. It is a logical extension of a multiprogramming system that enables the execution of multiple programs simultaneously. In an operating system, multitasking allows a user to perform more than one computer task at the same time. Multiple tasks are also known as processes that share similar processing resources like a CPU. The operating system
keeps track of where you are in each of these jobs and allows you to transition between them without
losing data.
Early operating systems could execute various programs at the same time, although multitasking was not fully supported. As a result, a single piece of software could consume the entire CPU of the computer while
completing a certain activity. Basic operating system functions, such as file copying, prevented the
user from completing other tasks, such as opening and closing windows. Fortunately, because modern
operating systems have complete multitasking capability, numerous programs can run concurrently
without interfering with one other. In addition, many operating system processes can run at the same
time.


Fig 4.3

multiprogramming

multiprogramming means a computing environment in which a number of users can run multiple
programs on a single-CPU computer at the same time. To improve the overall performance of the
computer system, developers introduced the concept of multiprogramming, so that several jobs could
be kept in memory at one time. The CPU is switched back and forth among them to increase CPU
utilization and to decrease the total time needed to execute the jobs.

The objective of multiprogramming is to have some process running at all times, in order to maximize
CPU utilization. In a uniprocessor system, only one process may run at a time; any other processes
must wait until the CPU is free and can be rescheduled.

The idea of multiprogramming is relatively simple. A process is executed until it must wait, typically
for the completion of some I/O request. In a simple computer system, the CPU would then sit idle;
all this waiting time is wasted. With multiprogramming, we try to use this time productively. Several
processes are kept in memory at one time. When one process has to wait, the operating system takes
the CPU away from that process and gives the CPU to another process. This pattern continues.

Fig 4.4


spooling

A spool is a buffer that holds output for a device, such as a printer, that cannot accept interleaved data
streams. Although a printer can serve only one job at a time, several applications may wish to print
their output concurrently, without having their output mixed together. The operating system solves
this problem by intercepting all output to the printer. Each application's output is spooled to a separate
disk file. When an application finishes printing, the spooling system queues the corresponding spool
file for output to the printer. The spooling system copies the queued spool files to the printer one at a
time. In some operating systems, spooling is managed by a system daemon process. In other operating
systems, it is handled by an in-kernel thread. In either case, the operating system provides a control
interface that enables users and system administrators to display the queue, to remove unwanted jobs
before those jobs print, to suspend printing while the printer is serviced, and so on.

Fig 4.5

process concept
the process

A process is a program in execution. A process is more than the program code, which is sometimes
known as the text section. It also includes the current activity, as represented by the value of the
program counter and the contents of the processor's registers. In addition, a process generally includes
the process stack, which contains temporary data (such as method parameters, return addresses, and
local variables), and a data section, which contains global variables.

We emphasize that a program by itself is not a process; a program is a passive entity, such as the
contents of a file stored on disk, whereas a process is an active entity, with a program counter
specifying the next instruction to execute and a set of associated resources.

process state

As a process executes, it changes state. The state of a process is defined in part by the current activity
of that process. Each process may be in one of the following states:

• New: The process is being created.


• Running: Instructions are being executed.
• Waiting: The process is waiting for some event to occur (such as an I/O
• completion or reception of a signal).
• Ready: The process is waiting to be assigned to a processor.
• Terminated: The process has finished execution

Fig 4.6

These state names are arbitrary, and they vary across operating systems. The states that they represent
are found on all systems, however. Certain operating systems more finely delineate process states.
Only one process can be running on any processor at any instant, although many processes may be
ready and waiting. The state diagram corresponding to these states is presented in Figure 4.6
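
To make the state diagram concrete, the following short Python sketch (an illustration only, not code from any real operating system) encodes the five states and the transitions allowed by Figure 4.6, and rejects an illegal move such as going straight from new to running.

ALLOWED = {
    "new": {"ready"},                                  # admitted
    "ready": {"running"},                              # scheduler dispatch
    "running": {"ready", "waiting", "terminated"},     # interrupt, I/O wait, exit
    "waiting": {"ready"},                              # I/O or event completion
    "terminated": set(),
}

def move(state, next_state):
    if next_state not in ALLOWED[state]:
        raise ValueError("illegal transition: " + state + " -> " + next_state)
    return next_state

state = "new"
for nxt in ["ready", "running", "waiting", "ready", "running", "terminated"]:
    state = move(state, nxt)
    print(state)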


Process Control Block


Each process is represented in the operating system by a process control block (PCB), also called a task control block. A PCB is shown in Figure 4.7.

Fig 4.7

It contains many pieces of information associated with a specific process, including these

process state: The state may be new, ready, running, waiting, halted, and so on.

program counter: The counter indicates the address of the next instruction to be executed for this
process.

cpu registers: The registers vary in number and type, depending on the computer architecture.
They include accumulators, index registers, stack pointers, and general-purpose registers, plus any
condition-code information. Along with the program counter, this state information must be saved
when an interrupt occurs, to allow the process to be continued correctly afterward (Figure 4.8).

cpu-scheduling information: This information includes a process priority, pointers to scheduling queues, and any other scheduling parameters.

memory-management information: This information may include such information as the value of
the base and limit registers, the page tables, or the segment tables, depending on the memory system
used by the operating system.


Fig 4.8
Diagram showing CPU switch from process to process.

accounting information: This information includes the amount of CPU and real time used, time
limits, account numbers, job or process numbers, and so on.

i/o status information: The information includes the list of I/O devices allocated to this process, a list
of open files, and so on. The PCB simply serves as the repository for any information that may vary
from process to process.
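
As a rough illustration of the kind of record a PCB is, the sketch below groups the fields listed above into one Python structure. The field names are invented here for readability and do not correspond to any particular operating system's PCB layout.

from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int
    state: str = "new"                               # process state
    program_counter: int = 0                         # address of the next instruction
    registers: dict = field(default_factory=dict)    # saved CPU registers
    priority: int = 0                                # CPU-scheduling information
    base: int = 0                                    # memory-management information
    limit: int = 0
    cpu_time_used: float = 0.0                       # accounting information
    open_files: list = field(default_factory=list)   # I/O status information

pcb = PCB(pid=42, priority=3)
pcb.state = "ready"
print(pcb)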

process scheduling

The objective of multiprogramming is to have some process running at all times, so as to maximize
CPU utilization. The objective of time-sharing is to switch the CPU among processes so frequently
that users can interact with each program while it is running. A uniprocessor system can have only
one running process. If more processes exist, the rest must wait until the CPU is free and can be
rescheduled.

scheduling Queues

As processes enter the system, they are put into a job queue. This queue consists of all processes in the
system. The processes that are residing in main memory and are ready and waiting to execute are kept
on a list called the ready queue. This queue is generally stored as a linked list. A ready-queue header
contains pointers to the first and final PCBs in the list. We extend each PCB to include a pointer field
that points to the next PCB in the ready queue. The operating system also has other queues. When a
process is allocated the CPU, it executes for a while and eventually quits, is interrupted, or waits for
the occurrence of a particular event, such as the completion of an I/O request. In the case of an I/O
request, such a request may be to a dedicated tape drive, or to a shared device, such as a disk. Since
the system has many processes, the disk may be busy with the I/O request of some other process.
The process therefore may have to wait for the disk. The list of processes waiting for a particular I/O
device is called a device queue. Each device has its own device queue (Figure).

Fig 4.9

A common representation of process scheduling is a queueing diagram, such as that in below Figure.
Each rectangular box represents a queue. Two types of queues are present: the ready queue and a set
of device queues. The circles represent the resources that serve the queues, and the arrows indicate the
flow of processes in the system. A new process is initially put in the ready queue. It waits in the ready
queue until it is selected for execution (or dispatched). Once the process is assigned to the CPU and
is executing, one of several events could occur:

Fig 4.10

• The process could issue an I/O request, and then be placed in an I/O queue.
• The process could create a new subprocess and wait for its termination.
• The process could be removed forcibly from the CPU, as a result of an interrupt, and be put
back in the ready queue.

In the first two cases, the process eventually switches from the waiting state to the ready state, and
is then put back in the ready queue. A process continues this cycle until it terminates, at which time
it is removed from all queues and has its PCB and resources deallocated.

schedulers

A process migrates between the various scheduling queues throughout its lifetime. The operating
system must select, for scheduling purposes, processes from these queues in some fashion. The
selection process is carried out by the appropriate scheduler.
In a batch system, often more processes are submitted than can be executed immediately. These
processes are spooled to a mass-storage device (typically a disk), where they are kept for later
execution. The long-term scheduler, or job scheduler, selects processes from this pool and loads them
into memory for execution. The short-term scheduler, or cpu scheduler, selects from among the
processes that are ready to execute, and allocates the CPU to one of them.

The primary distinction between these two schedulers is the frequency of their execution. The short-
term scheduler must select a new process for the CPU frequently. A process may execute for only a
few milliseconds before waiting for an I/O request. Often, the short-term scheduler executes at least
once every 100 milliseconds. Because of the brief time between executions, the short-term scheduler
must be fast. If it takes 10 milliseconds to decide to execute a process for 100 milliseconds, then 10/
(100 + 10) = 9 percent of the CPU is being used (or wasted) simply for scheduling the work.

The long-term scheduler, on the other hand, executes much less frequently. There may be minutes
between the creation of new processes in the system. The long-term scheduler controls the degree of
multiprogramming-the number of processes in memory. If the degree of multiprogramming is stable,
then the average rate of process creation must be equal to the average departure rate of processes
leaving the system. Thus, the long-term scheduler may need to be invoked only when a process leaves
the system. Because of the longer interval between executions, the long-term scheduler can afford to
take more time to select a process for execution.

The long-term scheduler must make a careful selection. In general, most processes can be described
as either I/O bound or CPU bound. An I/O-bound process spends more of its time doing I/O than
it spends doing computations. A CPU-bound process, on the other hand, generates I/O requests
infrequently, using more of its time doing computation than an I/O-bound process uses. The long-
term scheduler should select a good process mix of I/O-bound and CPU-bound processes. If all
processes are I/O bound, the ready queue will almost always be empty, and the short-term scheduler
will have little to do. If all processes are CPU bound, the I/O waiting queue will almost always be
empty, devices will go unused, and again the system will be unbalanced. The system with the best
performance will have a combination of CPU-bound and I/O-bound processes.

On some systems, the long-term scheduler may be absent or minimal. For example, time-sharing
systems such as UNIX often have no long-term scheduler, but simply put every new process in
memory for the short-term scheduler. The stability of these systems depends either on a physical
limitation (such as the number of available terminals) or on the self-adjusting nature of human users.
If the performance declines to unacceptable levels, some users will simply quit.

Some operating systems, such as time-sharing systems, may introduce an additional, intermediate
level of scheduling. This medium-term scheduler, diagrammed in Figure 4.11, removes processes from
memory (and from active contention for the CPU), and thus reduces the degree of multiprogramming. At
some later time, the process can be reintroduced into memory and its execution can be continued where
it left off. This scheme is called swapping. The process is swapped out, and is later swapped in, by the
medium-term scheduler. Swapping may be necessary to improve the process mix, or because a change in
memory requirements has overcommitted available memory, requiring memory to be freed up.

Figure 4.11
Addition of medium-term scheduling to the queueing diagram.

scheduling algorithms

CPU scheduling deals with the problem of deciding which of the processes in the ready queue is to
be allocated the CPU. In this section, we describe several of the many CPU-scheduling algorithms
that exist.


First-Come, First-Served Scheduling

By far the simplest CPU-scheduling algorithm is the first-come, first-served (fcfs) scheduling
algorithm. With this scheme, the process that requests the CPU first is allocated the CPU first. The
implementation of the FCFS policy is easily managed with a FIFO queue. When a process enters the
ready queue, its PCB is linked onto the tail of the queue. When the CPU is free, it is allocated to the
process at the head of the queue. The running process is then removed from the queue. The code for
FCFS scheduling is simple to write and understand.

The average waiting time under the FCFS policy, however, is often quite long. Consider the following
set of processes that arrive at time 0, with the length of the CPU-burst time given in milliseconds:

Process    Burst Time
P1         24
P2         3
P3         3

If the processes arrive in the order P1, P2, P3, and are served in FCFS order, we get the result shown
in the following Gantt chart:

Fig 4.12

The waiting time is 0 milliseconds for process P1, 24 milliseconds for process P2, and 27 milliseconds for process P3. Thus, the average waiting time is (0 + 24 + 27)/3 = 17 milliseconds. If the processes arrive in the order P2, P3, P1, however, the results will be as shown in the following Gantt chart:

Fig 4.13

The average waiting time is now (6 + 0 + 3)/3 = 3 milliseconds. This reduction is substantial. Thus,
the average waiting time under a FCFS policy is generally not minimal, and may vary substantially
if the process CPU-burst times vary greatly.
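
The waiting-time arithmetic above is easy to reproduce in a few lines of Python. This is only a sketch that assumes all processes arrive at time 0 and are served in the order given; the burst values are the ones used in the example.

def fcfs_waiting_times(bursts):
    # each process waits for the bursts of all processes served before it
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)
        clock += burst
    return waits

print(fcfs_waiting_times([24, 3, 3]))   # order P1, P2, P3 -> [0, 24, 27], average 17 ms
print(fcfs_waiting_times([3, 3, 24]))   # order P2, P3, P1 -> [0, 3, 6],   average 3 ms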


shortest-Job-first scheduling

A different approach to CPU scheduling is the shortest-job-first (sJf) scheduling algorithm. This
algorithm associates with each process the length of the latter's next CPU burst. When the CPU is
available, it is assigned to the process that has the smallest next CPU burst. If two processes have the
same length next CPU burst, FCFS scheduling is used to break the tie. Note that a more appropriate
term would be the shortest next CPU burst, because the scheduling is done by examining the length
of the next CPU burst of a process, rather than its total length. We use the term SJF because most
people and textbooks refer to this type of scheduling discipline as SJF.

As an example, consider the following set of processes, with the length of the CPU-burst time given
in milliseconds:

Process    Burst Time
P1         6
P2         8
P3         7
P4         3

Using SJF scheduling, we would schedule these processes according to the following Gantt chart:

Fig 4.14
The waiting time is 3 milliseconds for process P1, 16 milliseconds for process P2, 9 milliseconds for process P3, and 0 milliseconds for process P4. Thus, the average waiting time is (3 + 16 + 9 + 0)/4 = 7 milliseconds. If we were using the FCFS scheduling scheme, then the average waiting time would
be 10.25 milliseconds.

The SJF scheduling algorithm is provably optimal, in that it gives the minimum average waiting time
for a given set of processes. By moving a short process before a long one, the waiting time of the
short process decreases more than it increases the waiting time of the long process. Consequently, the
average waiting time decreases.
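
The same computation for SJF can be sketched by sorting the processes on burst length before accumulating the waiting times (again assuming every process arrives at time 0; the bursts are the ones from the example).

def sjf_waiting_times(bursts):
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])   # shortest burst first
    waits, clock = [0] * len(bursts), 0
    for i in order:
        waits[i] = clock
        clock += bursts[i]
    return waits

waits = sjf_waiting_times([6, 8, 7, 3])
print(waits, sum(waits) / len(waits))   # [3, 16, 9, 0] 7.0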

priority scheduling

The SJF algorithm is a special case of the general priority-scheduling algorithm. A priority is associated
with each process, and the CPU is allocated to the process with the highest priority. Equal-priority
processes are scheduled in FCFS order.

An SJF algorithm is simply a priority algorithm where the priority (p) is the inverse of the (predicted)
next CPU burst. The larger the CPU burst, the lower the priority, and vice versa.
Note that we discuss scheduling in terms of high priority and low priority. Priorities are generally
some fixed range of numbers, such as 0 to 7, or 0 to 4,095. However, there is no general agreement on
whether 0 is the highest or lowest priority. Some systems use low numbers to represent low priority;
others use low numbers for high priority. This difference can lead to confusion. In this text, we use
low numbers to represent high priority.
As an example, consider the following set of processes, assumed to have arrived at time 0, in the order
P1, P2, ..., P5, with the length of the CPU-burst time given in milliseconds:

Fig 4.15

The average waiting time is 8.2 milliseconds.

Priorities can be defined either internally or externally. Internally defined priorities use some
measurable quantity or quantities to compute the priority of a process. For example, time limits,
memory requirements, the number of open files, and the ratio of average I/O burst to average CPU
burst have been used in computing priorities. External priorities are set by criteria that are external to
the operating system, such as the importance of the process, the type and amount of funds being paid
for computer use, the department sponsoring the work, and other, often political, factors.
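
A non-preemptive priority scheduler can be sketched the same way. The burst/priority pairs below are assumed sample data (they are not taken from Figure 4.15), chosen so that the resulting average matches the 8.2 milliseconds quoted above; a lower number means a higher priority, and ties are broken FCFS.

def priority_waiting_times(procs):
    # procs is a list of (burst, priority); all processes arrive at time 0
    order = sorted(range(len(procs)), key=lambda i: (procs[i][1], i))
    waits, clock = [0] * len(procs), 0
    for i in order:
        waits[i] = clock
        clock += procs[i][0]
    return waits

waits = priority_waiting_times([(10, 3), (1, 1), (2, 4), (1, 5), (5, 2)])
print(waits, sum(waits) / len(waits))   # [6, 0, 16, 18, 1] 8.2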

round-robin scheduling

The round-robin (rr) scheduling algorithm is designed especially for timesharing systems. It is
similar to FCFS scheduling, but preemption is added to switch between processes. A small unit of
time, called a time quantum (or time slice), is defined. A time quantum is generally from 10 to 100
milliseconds. The ready queue is treated as a circular queue. The CPU scheduler goes around the
ready queue, allocating the CPU to each process for a time interval of up to 1 time quantum.


To implement RR scheduling, we keep the ready queue as a FIFO queue of processes. New processes
are added to the tail of the ready queue. The CPU scheduler picks the first process from the ready
queue, sets a timer to interrupt after 1 time quantum, and dispatches the process.

One of two things will then happen. The process may have a CPU burst of less than 1 time quantum.
In this case, the process itself will release the CPU voluntarily. The scheduler will then proceed to
the next process in the ready queue. Otherwise, if the CPU burst of the currently running process is
longer than 1 time quantum, the timer will go off and will cause an interrupt to the operating system.
A context switch will be executed, and the process will be put at the tail of the ready queue. The CPU
scheduler will then select the next process in the ready queue.

The average waiting time under the RR policy, however, is often quite long. Consider the following
set of processes that arrive at time 0, with the length of the CPU-burst time given in milliseconds:

Process    Burst Time
P1         24
P2         3
P3         3

If we use a time quantum of 4 milliseconds, then process P1 gets the first 4 milliseconds. Since
it requires another 20 milliseconds, it is preempted after the first time quantum, and the CPU is
given to the next process in the queue, process P2. Since process P2 does not need 4 milliseconds, it
quits before its time quantum expires. The CPU is then given to the next process, process P3. Once
each process has received 1 time quantum, the CPU is returned to process P1 for an additional time
quantum. The resulting RR schedule is

Fig 4.16

The average waiting time is 17/3 = 5.66 milliseconds.

In the RR scheduling algorithm, no process is allocated the CPU for more than 1 time quantum in a
row. If a process' CPU burst exceeds 1 time quantum, that process is preempted and is put back in the
ready queue. The RR scheduling algorithm is preemptive.


If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the
CPU time in chunks of at most q time units. Each process must wait no longer than (n - 1) x q time
units until its next time quantum. For example, if there are five processes, with a time quantum of 20
milliseconds, then each process will get up to 20 milliseconds every 100 milliseconds.
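
A round-robin run can be simulated with a FIFO queue, exactly as described above. This sketch assumes all processes arrive at time 0 and uses the bursts and quantum from the example; each waiting time is the completion time minus the burst.

from collections import deque

def rr_waiting_times(bursts, quantum):
    remaining = list(bursts)
    ready = deque(range(len(bursts)))      # ready queue kept as a FIFO queue
    clock, finish = 0, [0] * len(bursts)
    while ready:
        i = ready.popleft()
        run = min(quantum, remaining[i])
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            ready.append(i)                # preempted: back to the tail of the queue
        else:
            finish[i] = clock
    return [finish[i] - bursts[i] for i in range(len(bursts))]

waits = rr_waiting_times([24, 3, 3], 4)
print(waits, sum(waits) / len(waits))      # [6, 4, 7] 5.666...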

What is thread in os?


A thread is a sequential flow of tasks within a process. Threads in an OS can be of the same or different
types. Threads are used to increase the performance of the applications.
Each thread has its own program counter, stack, and set of registers. But the threads of a single
process might share the same code and data/file. Threads are also termed as lightweight processes
as they share common resources.
Components of Thread
A thread has the following three components:
1. Program Counter
2. Register Set
3. Stack space
Why do we need Threads?
Threads in the operating system provide multiple benefits and improve the overall performance of the
system. Some of the reasons threads are needed in the operating system are:
• Since threads use the same data and code, the operational cost between threads is low.
• Creating and terminating a thread is faster compared to creating or terminating a process.
• Context switching is faster in threads compared to processes.
Why Multithreading?
In Multithreading, the idea is to divide a single process into multiple threads instead of creating a
whole new process. Multithreading is done to achieve parallelism and to improve the performance
of the applications as it is faster in many ways which were discussed above. The other advantages of
multithreading are mentioned below.
• resource sharing: Threads of a single process share the same resources such as code, data/
file.
• responsiveness: Program responsiveness enables a program to run even if part of the program
is blocked or executing a lengthy operation. Thus, increasing the responsiveness to the user.
• economy: It is more economical to use threads as they share the resources of a single process.
On the other hand, creating processes is expensive.
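
As a small, concrete illustration of threads sharing the data of one process, the Python sketch below (using the standard threading module; the names and counts are arbitrary) starts two threads that update the same counter under a lock.

import threading

counter = 0
lock = threading.Lock()

def worker(iterations):
    global counter
    for _ in range(iterations):
        with lock:                 # both threads update the same shared data
            counter += 1

threads = [threading.Thread(target=worker, args=(100000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                     # 200000: the threads shared one data section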


process vs thread
A process simply means any program in execution, while a thread is a segment of a process. The main differences between a process and a thread are mentioned below:
• Processes use more resources and hence are termed heavyweight processes, whereas threads share resources and hence are termed lightweight processes.
• Creation and termination times of processes are slower; creation and termination times of threads are faster.
• Processes have their own code and data/file; threads share code and data/file within a process.
• Communication between processes is slower; communication between threads is faster.
• Context switching in processes is slower; context switching in threads is faster.
• Processes are independent of each other; threads are interdependent (i.e. they can read, write or change another thread's data).
• Eg: Opening two different browsers creates two processes; opening two tabs in the same browser creates two threads.

Fig 4.17
types of thread

1. user Level thread:


User-level threads are implemented and managed by the user, and the kernel is not aware of them.
• User-level threads are implemented using user-level libraries and the OS does not recognize
these threads.
• User-level thread is faster to create and manage compared to kernel-level thread.


• Context switching in user-level threads is faster.


• If one user-level thread performs a blocking operation then the entire process gets blocked.
Eg: POSIX threads, Java threads, etc.
2. Kernel level thread:
Kernel level threads are implemented and managed by the OS.
• Kernel level threads are implemented using system calls and Kernel level threads are
recognized by the OS.
• Kernel-level threads are slower to create and manage compared to user-level threads.
• Context switching in a kernel-level thread is slower.
• Even if one kernel-level thread performs a blocking operation, it does not affect other threads.
Eg: Windows, Solaris.

Fig 4.18
Advantages of Threading
• Threads improve the overall performance of a program.
• Threads increase the responsiveness of the program.
• Context Switching time in threads is faster.
• Threads share the same memory and resources within a process.
• Communication is faster in threads.
• Threads provide concurrency within a process.
• Enhanced throughput of the system.
• Since different threads can run in parallel, threading enables the utilization of the multiprocessor
architecture to a greater extent and increases efficiency.


deadlock
In a multiprogramming environment, several processes may compete for a finite number of resources.
A process requests resources; if the resources are not available at that time, the process enters a wait
state. Waiting processes may never again change state, because the resources they have requested are
held by other waiting processes. This situation is called a deadlock.
A deadlock situation can arise if the following four conditions hold simultaneously in a system:

1. mutual exclusion: At least one resource must be held in a nonsharable mode; that is, only one
process at a time can use the resource. If another process requests that resource, the requesting process
must be delayed until the resource has been released.

2. Hold and wait: A process must be holding at least one resource and waiting to acquire additional
resources that are currently being held by other processes.

3. no preemption: Resources cannot be preempted; that is, a resource can be released only voluntarily
by the process holding it, after that process has completed its task.

4. Circular wait: A set {P0, P1, ..., Pn} of waiting processes must exist such that P0 is waiting for a resource that is held by P1, P1 is waiting for a resource that is held by P2, ..., Pn-1 is waiting for a resource that is held by Pn, and Pn is waiting for a resource that is held by P0.
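
The four conditions are easy to reproduce with two locks acquired in opposite orders. The Python sketch below is purely illustrative: each thread holds one lock (mutual exclusion, hold and wait) and then requests the lock held by the other (circular wait); since locks cannot be taken away (no preemption), a real system would hang, so a timeout is used here only so that the example terminates.

import threading, time

lock_a, lock_b = threading.Lock(), threading.Lock()

def task(first, second, name):
    with first:                            # hold one resource
        time.sleep(0.1)                    # let the other thread grab its first lock
        if second.acquire(timeout=1):      # request the resource held by the other thread
            second.release()
            print(name, "finished")
        else:
            print(name, "gave up: this is where a real deadlock would occur")

t1 = threading.Thread(target=task, args=(lock_a, lock_b, "T1"))
t2 = threading.Thread(target=task, args=(lock_b, lock_a, "T2"))
t1.start(); t2.start()
t1.join(); t2.join()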

deadlock prevention

mutual exclusion

The mutual-exclusion condition must hold for nonsharable resources. For example, a printer cannot
be simultaneously shared by several processes. Sharable resources, on the other hand, do not require
mutually exclusive access, and thus cannot be involved in a deadlock. Read-only files are a good
example of a sharable resource. If several processes attempt to open a read-only file at the same
time, they can be granted simultaneous access to the file. A process never needs to wait for a sharable
resource. In general, however, we cannot prevent deadlocks by denying the mutual exclusion
condition: Some resources are intrinsically nonsharable.
Hold and Wait
To ensure that the hold-and-wait condition never occurs in the system, we must guarantee that,
whenever a process requests a resource, it does not hold any other resources. One protocol that can
be used requires each process to request and be allocated all its resources before it begins execution.
We can implement this provision by requiring that system calls requesting resources for a process
precede all other system calls.


An alternative protocol allows a process to request resources only when the process has none. A
process may request some resources and use them. Before it can request any additional resources,
however, it must release all the resources that it is currently allocated.

To illustrate the difference between these two protocols, we consider a process that copies data from a
tape drive to a disk file, sorts the disk file, and then prints the results to a printer. If all resources must
be requested at the beginning of the process, then the process must initially request the tape drive,
disk file, and printer. It will hold the printer for its entire execution, even though it needs the printer
only at the end.

The second method allows the process to request initially only the tape drive and disk file. It copies
from the tape drive to the disk, then releases both the tape drive and the disk file. The process must
then again request the disk file and the printer. After copying the disk file to the printer, it releases
these two resources and terminates.

These protocols have two main disadvantages. First, resource utilization may be low, since many of
the resources may be allocated but unused for a long period. In the example given, for instance, we
can release the tape drive and disk file, and then again request the disk file and printer, only if we can
be sure that our data will remain on the disk file. If we cannot be assured that they will, then we must
request all resources at the beginning for both protocols.

Second, starvation is possible. A process that needs several popular resources may have to wait
indefinitely, because at least one of the resources that it needs is always allocated to some other
process.

no preemption

The third necessary condition is that there be no preemption of resources that have already been
allocated. To ensure that this condition does not hold, we can use the following protocol. If a process
is holding some resources and requests another resource that cannot be immediately allocated to it
(that is, the process must wait), then all resources currently being held are preempted. In other words,
these resources are implicitly released. The preempted resources are added to the list of resources for
which the process is waiting. The process will be restarted only when it can regain its old resources,
as well as the new ones that it is requesting.
Alternatively, if a process requests some resources, we first check whether they are available. If
they are, we allocate them. If they are not available, we check whether they are allocated to some
other process that is waiting for additional resources. If so, we preempt the desired resources from
the waiting process and allocate them to the requesting process. If the resources are neither available nor held by a waiting process, the requesting process must wait. While it is waiting, some of
its resources may be preempted, but only if another process requests them. A process can be restarted
only when it is allocated the new resources it is requesting and recovers any resources that were pre-
empted while it was waiting. This protocol is often applied to resources whose state can be easily
saved and restored later, such as CPU registers and memory space. It cannot generally be applied to
such resources as printers and tape drives.

circular Wait

The fourth and final condition for deadlocks is the circular-wait condition. One way to ensure that
this condition never holds is to impose a total ordering of all resource types, and to require that each
process requests resources in an increasing order of enumeration. Let R = {R1, R2, ..., Rm} be the
set of resource types. We assign to each resource type a unique integer number, which allows us to
compare two resources and to determine whether one precedes another in our ordering. Formally,
we define a one-to-one function F: R → N, where N is the set of natural numbers. For example, if the
set of resource types R includes tape drives, disk drives, and printers, then the function F might be
defined as follows:

F(tape drive) = 1,
F(disk drive) = 5,
F(printer) = 12.

We can now consider the following protocol to prevent deadlocks: Each process can request resources
only in an increasing order of enumeration. That is, a process can initially request any number of
instances of a resource type, say Xi. After that, the process can request instances of resource type
Ri if and only if F(Rj) > F(Ri). If several instances of the same resource type are needed, a single
request for all of them must be issued. For example, using the function defined previously, a process
that wants to use the tape drive and printer at the same time must first request the tape drive and then
request the printer.

Alternatively, we can require that, whenever a process requests an instance of resource type Rj, it has released any resources Ri such that F(Ri) ≥ F(Rj). If these two protocols are used, then the circular-wait condition cannot hold. We can demonstrate this fact by assuming that a circular wait exists (proof by contradiction). Let the set of processes involved in the circular wait be {P0, P1, ..., Pn}, where Pi is waiting for a resource Ri, which is held by process Pi+1. (Modulo arithmetic is used on the indexes, so that Pn is waiting for a resource Rn held by P0.) Then, since process Pi+1 is holding resource Ri while requesting resource Ri+1, we must have F(Ri) < F(Ri+1) for all i. But this condition means that F(R0) < F(R1) < ... < F(Rn) < F(R0). By transitivity, F(R0) < F(R0), which is impossible.
Therefore, there can be no circular wait.
Note that the function F should be defined according to the normal order of usage of the resources in a
system. For example, since the tape drive is usually needed before the printer, it would be reasonable
to define F(tape drive) < F(printer).
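
In practice the ordering rule simply means: always acquire resources in increasing order of their assigned numbers. The following Python sketch is a hypothetical illustration (locks stand in for the resources) that applies the function F as a sort key, so the circular wait of the earlier two-lock example can no longer form.

import threading, time

# F assigns each resource type a unique integer, as in the text
resources = {1: threading.Lock(), 2: threading.Lock()}

def acquire_in_order(ids):
    held = []
    for rid in sorted(ids):          # request only in increasing order of enumeration
        resources[rid].acquire()
        held.append(rid)
    return held

def task(name, ids):
    held = acquire_in_order(ids)
    time.sleep(0.1)                  # use the resources
    for rid in reversed(held):
        resources[rid].release()
    print(name, "finished without deadlock")

t1 = threading.Thread(target=task, args=("T1", [1, 2]))
t2 = threading.Thread(target=task, args=("T2", [2, 1]))   # same resources, opposite request order
t1.start(); t2.start()
t1.join(); t2.join()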


deadlock avoidance

Deadlock-prevention algorithms, as discussed above, prevent deadlocks by restraining how requests can be made. The restraints ensure that at least one of the necessary conditions for deadlock cannot occur and, hence, that deadlocks cannot occur. Possible side effects of preventing deadlocks by this
method, however, are low device utilization and reduced system throughput.

An alternative method for avoiding deadlocks is to require additional information about how resources
are to be requested. For example, in a system with one tape drive and one printer, we might be told
that process P will request first the tape drive, and later the printer, before releasing both resources.
Process Q, on the other hand, will request first the printer, and then the tape drive. With this knowledge
of the complete sequence of requests and releases for each process, we can decide for each request
whether or not the process should wait. Each request requires that the system consider the resources
currently available, the resources currently allocated to each process, and the future requests and
releases of each process, to decide whether the current request can be satisfied or must wait to avoid
a possible future deadlock.

The various algorithms differ in the amount and type of information required. The simplest and most
useful model requires that each process declare the maximum number of resources of each type that it
may need. Given a priori information about the maximum number of resources of each type that may
be requested for each process, it is possible to construct an algorithm that ensures that the system will
never enter a deadlock state. This algorithm defines the deadlock-avoidance approach. A deadlock-
avoidance algorithm dynamically examines the resource-allocation state to ensure that a circular-wait
condition can never exist. The resource-allocation state is defined by the number of available and
allocated resources, and the maximum demands of the processes.

safe state

A state is safe if the system can allocate resources to each process (up to its maximum) in some
order and still avoid a deadlock. More formally, a system is in a safe state only if there exists a safe
sequence. A sequence of processes <P1, P2, ..., Pn> is a safe sequence for the current allocation state if, for each Pi, the resources that Pi can still request can be satisfied by the currently available resources plus the resources held by all the Pj, with j < i. In this situation, if the resources that process Pi needs are not immediately available, then Pi can wait until all Pj have finished. When they have finished, Pi can obtain all of its needed resources, complete its designated task, return its allocated resources, and terminate. When Pi terminates, Pi+1 can obtain its needed resources, and so on. If no such sequence
exists, then the system state is said to be unsafe.


Figure 4.19 Safe, unsafe, and deadlock state spaces.

A safe state is not a deadlock state. Conversely, a deadlock state is an unsafe state. Not all unsafe
states are deadlocks, however (Figure 4.19). An unsafe state may lead to a deadlock. As long as the
state is safe, the operating system can avoid unsafe (and deadlock) states. In an unsafe state, the
operating system cannot prevent processes from requesting resources such that a deadlock occurs:
The behaviour of the processes controls unsafe states.

To illustrate, we consider a system with 12 magnetic tape drives and 3 processes: P0, P1, and P2. Process P0 requires 10 tape drives, process P1 may need as many as 4, and process P2 may need up to 9 tape drives. Suppose that, at time t0, process P0 is holding 5 tape drives, process P1 is holding 2, and process P2 is holding 2 tape drives. (Thus, there are 3 free tape drives.)

At time t0, the system is in a safe state. The sequence <P1, P0, P2> satisfies the safety condition, since process P1 can immediately be allocated all its tape drives and then return them (the system will then have 5 available tape drives), then process P0 can get all its tape drives and return them (the system will then have 10 available tape drives), and finally process P2 could get all its tape drives and return them (the system will then have all 12 tape drives available).

A system may go from a safe state to an unsafe state. Suppose that, at time t1, process P2 requests and is allocated 1 more tape drive. The system is no longer in a safe state. At this point, only process P1 can be allocated all its tape drives. When it returns them, the system will have only 4 available tape drives. Since process P0 is allocated 5 tape drives, but has a maximum of 10, it may then request 5 more tape drives. Since they are unavailable, process P0 must wait. Similarly, process P2 may request an additional 6 tape drives and have to wait, resulting in a deadlock.


Our mistake was in granting the request from process P2 for 1 more tape drive. If we had made P2
wait until either of the other processes had finished and released its resources, then we could have
avoided the deadlock.

Given the concept of a safe state, we can define avoidance algorithms that ensure that the system will
never deadlock. The idea is simply to ensure that the system will always remain in a safe state. Initially,
the system is in a safe state. Whenever a process requests a resource that is currently available, the
system must decide whether the resource can be allocated immediately or whether the process must
wait. The request is granted only if the allocation leaves the system in a safe state.

resource-allocation graph algorithm

In addition to the request and assignment edges, we introduce a new type of edge, called a claim edge. A claim edge Pi → Rj indicates that process Pi may request resource Rj at some time in the future. This edge resembles a request edge in direction, but is represented by a dashed line. When process Pi requests resource Rj, the claim edge Pi → Rj is converted to a request edge. Similarly, when a resource Rj is released by Pi, the assignment edge Rj → Pi is reconverted to a claim edge Pi → Rj. We note that the resources must be claimed a priori in the system. That is, before process Pi starts executing, all its claim edges must already appear in the resource-allocation graph. We can relax this condition by allowing a claim edge Pi → Rj to be added to the graph only if all the edges associated with process Pi are claim edges.

Suppose that process Pi requests resource Rj. The request can be granted only if converting the request edge Pi → Rj to an assignment edge Rj → Pi does not result in the formation of a cycle in the resource-allocation graph. Note that we check for safety by using a cycle-detection algorithm. An algorithm for detecting a cycle in this graph requires on the order of n² operations, where n is the number of processes in the system.

Figure 4.20 Resource-allocation graph for deadlock avoidance.


If no cycle exists, then the allocation of the resource will leave the system in a safe state. If a cycle
is found, then the allocation will put the system in an unsafe state. Therefore, process Pi will have to
wait for its requests to be satisfied.

To illustrate this algorithm, we consider the resource-allocation graph of Figure 4.20. Suppose that P2 requests R2. Although R2 is currently free, we cannot allocate it to P2, since this action will create a cycle in the graph (Figure 4.20). A cycle indicates that the system is in an unsafe state. If P1 requests R2, and P2 requests R1, then a deadlock will occur.
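
Detecting a cycle in the resource-allocation graph is ordinary depth-first search over a directed graph. The sketch below is a generic illustration (not the bookkeeping an actual kernel would use): processes and resources are plain strings, and every request or assignment edge is a directed edge.

def has_cycle(graph):
    # graph maps each node to the list of nodes it points to
    WHITE, GREY, BLACK = 0, 1, 2
    nodes = set(graph) | {m for targets in graph.values() for m in targets}
    colour = {n: WHITE for n in nodes}

    def dfs(n):
        colour[n] = GREY
        for m in graph.get(n, []):
            if colour[m] == GREY:            # back edge: a cycle exists
                return True
            if colour[m] == WHITE and dfs(m):
                return True
        colour[n] = BLACK
        return False

    return any(colour[n] == WHITE and dfs(n) for n in nodes)

# P1 holds R1 and requests R2; P2 holds R2 and requests R1
g = {"P1": ["R2"], "R2": ["P2"], "P2": ["R1"], "R1": ["P1"]}
print(has_cycle(g))   # True -> granting the request would be unsafe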

Banker's Algorithm

The resource-allocation graph algorithm is not applicable to a resource allocation system with multiple
instances of each resource type. The deadlock-avoidance algorithm that we describe next is applicable
to such a system, but is less efficient than the resource-allocation graph scheme. This algorithm is
commonly known as the banker's algorithm. The name was chosen because this algorithm could be
used in a banking system to ensure that the bank never allocates its available cash such that it can no
longer satisfy the needs of all its customers.

When a new process enters the system, it must declare the maximum number of instances of each
resource type that it may need. This number may not exceed the total number of resources in the
system. When a user requests a set of resources, the system must determine whether the allocation of
these resources will leave the system in a safe state. If it will, the resources are allocated; otherwise,
the process must wait until some other process releases enough resources.

Several data structures must be maintained to implement the banker's algorithm. These data structures
encode the state of the resource-allocation system. Let n be the number of processes in the system and
m be the number of resource types. We need the following data structures:

Available: A vector of length m indicates the number of available resources of each type. If Available[j]
= k, there are k instances of resource type Rj available.

Max: An n x m matrix defines the maximum demand of each process. If Max[i,j] = k, then process Pi may request at most k instances of resource type Rj.

Allocation: An n x m matrix defines the number of resources of each type currently allocated to each
process. If Allocation[i,j] = k, then process Pi is currently allocated k instances of resource type Rj.


Need: An n x m matrix indicates the remaining resource need of each process. If Need[i,j] = k, then process Pi may need k more instances of resource type Rj to complete its task. Note that Need[i,j] = Max[i,j] - Allocation[i,j].

These data structures vary over time in both size and value. To simplify the presentation of the banker's algorithm, let us establish some notation. Let X and Y be vectors of length n. We say that X ≤ Y if and only if X[i] ≤ Y[i] for all i = 1, 2, ..., n. For example, if X = (1,7,3,2) and Y = (0,3,2,1), then Y ≤ X. Y < X if Y ≤ X and Y ≠ X. We can treat each row in the matrices Allocation and Need as vectors and refer to them as Allocationi and Needi, respectively. The vector Allocationi specifies the resources currently allocated to process Pi; the vector Needi specifies the additional resources that process Pi may still request to complete its task.
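
The safety check at the heart of the banker's algorithm follows directly from these data structures: repeatedly look for a process whose Need can be met by the current Work vector, pretend it finishes and returns its Allocation, and see whether every process can finish. The Python sketch below is illustrative, and the matrices at the bottom are sample values, not data taken from this text.

def is_safe(available, max_need, allocation):
    n, m = len(max_need), len(available)
    need = [[max_need[i][j] - allocation[i][j] for j in range(m)] for i in range(n)]
    work = list(available)
    finish = [False] * n
    sequence = []
    while len(sequence) < n:
        progressed = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                # process i can run to completion, then returns what it holds
                work = [work[j] + allocation[i][j] for j in range(m)]
                finish[i] = True
                sequence.append(i)
                progressed = True
        if not progressed:
            return False, []          # no process can proceed: the state is unsafe
    return True, sequence

# sample state: 5 processes, 3 resource types
available  = [3, 3, 2]
max_need   = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
print(is_safe(available, max_need, allocation))   # (True, [1, 3, 4, 0, 2])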

Kernel i/o subsystem

Kernels provide many services related to I/O. Several services, including scheduling, buffering, caching, spooling, device reservation, and error handling, are provided by the kernel's I/O subsystem and build
on the hardware and device driver infrastructure.

Buffering

A buffer is a memory area that stores data while they are transferred between two devices or between
a device and an application. Buffering is done for three reasons. One reason is to cope with a speed
mismatch between the producer and consumer of a data stream.

A second use of buffering is to adapt between devices that have different data-transfer sizes. Such
disparities are especially common in computer networking, where buffers are used widely for
fragmentation and reassembly of messages. At the sending side, a large message is fragmented into
small network packets. The packets are sent over the network, and the receiving side places them in
a reassembly buffer to form an image of the source data.

A third use of buffering is to support copy semantics for application I/O. An example will clarify the
meaning of "copy semantics." Suppose that an application has a buffer of data that it wishes to write
to disk. It calls the write() system call, providing a pointer to the buffer and an integer specifying the
number of bytes to write. After the system call returns, what happens if the application changes the
contents of the buffer? With copy semantics, the version of the data written to disk is guaranteed to
be the version at the time of the application system call, independent of any subsequent changes in
the application's buffer. A simple way that the operating system can guarantee copy semantics is for
the write() system call to copy the application data into a kernel buffer before returning control to
the application. The disk write is performed from the kernel buffer, so that subsequent changes to the
application buffer have no effect. Copying of data between kernel buffers and application data space
is common in operating systems, despite the overhead that this operation introduces, because of the
clean semantics. The same effect can be obtained more efficiently by clever use of virtual-memory
mapping and copy- on-write page protection.


caching

A cache is a region of fast memory that holds copies of data. Access to the cached copy is more
efficient than access to the original. For instance, the instructions of the currently running process are
stored on disk, cached in physical memory, and copied again in the CPU's secondary and primary
caches. The difference between a buffer and a cache is that a buffer may hold the only existing copy of
a data item, whereas a cache, by definition, just holds a copy on faster storage of an item that resides
elsewhere.

spooling and device reservation

A spool is a buffer that holds output for a device, such as a printer, that cannot accept interleaved data
streams. Although a printer can serve only one job at a time, several applications may wish to print
their output concurrently, without having their output mixed together. The operating system solves
this problem by intercepting all output to the printer. Each application's output is spooled to a separate
disk file. When an application finishes printing, the spooling system queues the corresponding spool
file for output to the printer. The spooling system copies the queued spool files to the printer one at a
time. In some operating systems, spooling is managed by a system daemon process. In other operating
systems, it is handled by an in-kernel thread. In either case, the operating system provides a control
interface that enables users and system administrators to display the queue, to remove unwanted jobs
before those jobs print, to suspend printing while the printer is serviced, and so on.

error Handling

An operating system that uses protected memory can guard against many kinds of hardware and
application errors, so that a complete system failure is not the usual result of each minor mechanical
glitch. Devices and I/O transfers can fail in many ways, either for transient reasons, such as a network
becoming overloaded, or for "permanent" reasons, such as a disk controller becoming defective.
Operating systems can often compensate effectively for transient failures. For instance, a disk read()
failure results in a read() retry, and a network send() error results in a resend(), if the protocol so
specifies. Unfortunately, if an important component experiences a permanent failure, the operating
system is unlikely to recover.

references

1. Operating system concepts, Silberschatz, Galvin, Gagne.


2. https://www.scaler.com


module V
Linux operating system

Compare Open-source Software and Proprietary Software


Cost
• Open Source: Open source software is free to use and modify. For web technologies, you still need to pay for web hosting or set up your own server.
• Proprietary: Proprietary software requires companies to purchase a license. While companies sell some proprietary software as a one-time purchase, most businesses use a subscription model known as Software as a Service (SaaS).

Updates and Maintenance
• Open Source: Companies can publish open source software, but open source is often community-maintained. Either way, you can expect periodic updates and patches.
• Proprietary: Proprietary software companies release regular updates with new features and bug fixes.

Flexibility
• Open Source: With open source software, you're free to use or modify the code as needed.
• Proprietary: Proprietary software comes with restrictions. You most likely won't have access to modify the code. Companies also place limits on the software, such as the number of users or transactions.

Support
• Open Source: When you run into issues with an open source system, you're stuck searching community forums and documentation for answers. You also can hire a developer with experience with the platform.
• Proprietary: Proprietary software companies normally have support teams to help customers troubleshoot issues.

Linux operating system

An operating system can be described as an interface between the computer hardware and the user
of the computer. It is a collection of software that manages the computer's hardware resources and
provides basic services for computer programs.
An operating system is an essential component of system software within a computer system. The
primary aim of an operating system is to provide a platform where a user can run any program
conveniently or efficiently.


Linux, on the other hand, is one of the well-known versions of the UNIX OS. It was developed to provide
a low-cost or free operating system for personal computer users. Remarkably, it is a complete OS,
including an X Window System, the Emacs editor, TCP/IP networking, a GUI (graphical user interface), etc.
architecture of Linux system
The Linux operating system's architecture mainly contains some of the components: the Kernel,
System Library, Hardware layer, System, and shell utility.

Fig 5.1
1. Kernel: - The kernel is the core component of the operating system and is responsible for all the
major activities of the Linux OS. It consists of various kinds of modules and interacts directly with the
underlying hardware. The kernel provides the required abstraction to hide low-level hardware details
from system and application programs. Some of the important kernel types are mentioned below:
• Monolithic Kernel
• Micro kernels
• Exo kernels
• Hybrid kernels
2. system Libraries: - System libraries are special functions or programs through which application
programs access the kernel's features. They implement most of the operating system's functionality and
do not require the code access rights of kernel modules.
3. system utility programs: - Utility programs are responsible for performing specialized, individual-level tasks.
4. Hardware layer: - The Linux operating system's hardware layer consists of devices such as the CPU,
HDD, and RAM, along with other peripheral devices.
5. shell: - The shell is an interface between the user and the kernel, and it provides access to the kernel's
services. It takes commands from the user and runs the kernel's functions. Shells are available in many
operating systems and fall into two categories: graphical shells and command-line shells.


The graphical shells provide a graphical user interface, while the command-line shells provide a command-line
interface. Both types of shells perform the same operations, but the graphical user interface shells are
generally slower than the command-line interface shells.
There are a few types of these shells which are categorized as follows:
• Korn shell
• Bourne shell
• C shell
• POSIX shell

Features and Drawback of Linux OS


1. free and open-source
Linux is completely free of cost, so expense is never a hindrance to using it as an operating system.
Linux is also open-source. This means that anyone in the world may analyse the code, modify it,
redistribute it, or sell copies of the enhanced code, provided the result is distributed under the same
licence, which itself costs nothing.
The Linux operating system is released under the GNU General Public License (GPL) and is now one of the
largest open-source projects worldwide.
2. extremely flexible
Linux has been incorporated into products ranging from embedded devices such as watches and digital
equipment to supercomputing servers.
There is no requirement to install an entire Linux suite; a user can install only the components that
are actually required.
3. Lightweight Infrastructure
Linux consumes less storage space; a typical installation requires around 4 GB to 8 GB of disk space.
Its memory footprint, the amount of memory (RAM) used by the software while running, is also small,
and it is compatible with all kinds of file formats such as text files, audio files, video files, graphic
formats, etc.
4. graphical user interface (gui)
Although Linux primarily uses a command-line interface, it can also be given a graphical user interface
similar to Windows. This is mostly done by installing packages. A common way of getting a GUI in a
Linux environment is to log in to an Ubuntu server and install its desktop environment.
5. end-to-end encryption
Linux allows end-to-end encryption while accessing data, storing public keys on the server. Data can be
password protected, and users are authenticated. Linux also provides many other security features, such
as file permissions and a secure shell (SSH).


6. portable environment
Linux works in any kind of environment and does not depend on the device being high-end or low-end.
A large number of users can use it simultaneously, at any time, in any place, and on multiple devices.
It supports all kinds of hardware to work on.
Multiple distributions or enterprises are also supported by Linux.
Linux has its own repository for software that can be used to install the required packages.
7. shell/ command-line interface
The Linux command-line interpreter is known as the shell. It provides an interface between the user and
the kernel and executes programs known as commands.
Linux therefore uses the command-line interface to carry out tasks, which is comparatively more efficient
and takes less time. It also takes up less space in memory.
8. Frequent New Updates
The Linux operating system provides a wide range of accessible software updates that can be deployed
and used according to requirements.
Updates arrive frequently, giving users the option to choose which updates to install as per their needs.
9. Hierarchical file system
Linux comes with a well-defined file structure in which user files are arranged in a definite directory hierarchy.
According to the type of files they hold, directories are categorised as binary directories, configuration
directories, data directories, memory directories, /usr (Unix System Resources), /var (variable
directory), and non-standard directories.
10. multi-user and multi-programming
Linux allows multiple users to access the system resources at the same time and allows multiple
applications to run at the same time.

Linux distribution (operating system) name

A few popular names:


1. Red Hat Enterprise Linux
2. Fedora Linux
3. Debian Linux
4. SUSE Linux Enterprise
5. Ubuntu Linux


Linux file system


A Linux file system is a structured collection of files on a disk drive or a partition. A partition is a
segment of the storage device that contains some specific data. A machine can have various partitions
of its storage, and generally every partition contains a file system.
A general-purpose computer system needs to store data systematically so that files can be accessed
easily and quickly. It stores the data on hard disks (HDD) or some equivalent storage type. The reasons
for maintaining a file system include the following:
•	Primarily, the computer saves data to RAM; this data may be lost if the machine is turned
off. However, non-volatile storage (flash memory and SSDs) is available to retain data after a
power interruption.
•	Data storage is preferred on hard drives rather than in RAM because RAM costs more per unit
than disk space, and hard disk costs keep dropping compared to RAM.
The Linux file system contains the following sections:
• The root directory (/)
• A specific data storage format (EXT3, EXT4, BTRFS, XFS and so on)
• A partition or logical volume having a particular file system.
What is the Linux File System?
Linux file system is generally a built-in layer of a Linux operating system used to handle the data
management of the storage. It helps to arrange the file on the disk storage. It manages the file name,
file size, creation date, and much more information about a file. If we have an unsupported file format
in our file system, we can download software to deal with it.
Linux file system structure
The Linux file system has a hierarchical structure, with a root directory and its subdirectories.
All other directories can be accessed from the root directory. A partition usually holds only one file
system, but it may hold more than one.
A file system is designed in a way so that it can manage and provide space for non-volatile storage
data. All file systems require a namespace, that is, a naming and organizational methodology. The
namespace defines the naming process, length of the file name, or a subset of characters that can
be used for the file name. It also defines the logical structure of files on a memory segment, such as
the use of directories for organizing the specific files. Once a namespace is described, a Metadata
description must be defined for that particular file.
The data structure needs to support a hierarchical directory structure; this structure is used to describe
the available and used disk space for a particular block. It also has the other details about the files such
as file size, date & time of creation, update, and last modified.
Also, it stores advanced information about the section of the disk, such as partitions and volumes.
The advanced data and the structures that it represents contain the information about the file system
stored on the drive; it is distinct and independent of the file system metadata.
The Linux file system has a two-part software implementation architecture, as shown in the image below:

Fig 5.2
The file system requires an API (Application programming interface) to access the function calls to
interact with file system components like files and directories. API facilitates tasks such as creating,
deleting, and copying the files. It facilitates an algorithm that defines the arrangement of files on a file
system.
Linux file system features
In Linux, the file system creates a tree structure. All the files are arranged as a tree and its branches.
The topmost directory is called the root (/) directory. All other directories in Linux can be accessed
from the root directory.
Some key features of Linux file system are as following:
•	specifying paths: Linux does not use the backslash (\) to separate path components; it uses the
forward slash (/) instead. For example, in Windows the data may be stored in C:\My Documents\Work,
whereas in Linux it would be stored in /home/My Documents/Work.
•	Partitions, Directories, and Drives: Linux does not use drive letters to organize drives
as Windows does. In Linux, we cannot tell from a path whether we are addressing a partition, a network
device, a drive, or an "ordinary" directory.
• case sensitivity: Linux file system is case sensitive. It distinguishes between lowercase and
uppercase file names. Such as, there is a difference between test.txt and Test.txt in Linux. This
rule is also applied for directories and Linux commands.
•	file extensions: In Linux, a file may have an extension such as '.txt', but it is not necessary for a
file to have an extension. While working in the shell, this can make it difficult for beginners to
differentiate between files and directories. If we use a graphical file manager, it displays icons
for files and folders.
• Hidden files: Linux distinguishes between standard files and hidden files, mostly the
configuration files are hidden in Linux OS. Usually, we don't need to access or read the hidden
files. The hidden files in Linux are represented by a dot (.) before the file name (e.g., .ignore).
To access the files, we need to change the view in the file manager or need to use a specific
command in the shell.
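For instance, the hidden files in a directory can be listed with the -a option of the ls command (covered later in this module); the file names shown below are only examples:
ls
notes.txt
ls -a
.  ..  .ignore  notes.txt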


In Linux, files are grouped according to purpose, e.g. commands, data files, documentation, etc. Parts
of a Unix directory structure are listed below. All directories are grouped under the root entry "/".

Fig 5.3
Linux shell types
The shell can be defined as a command interpreter within an operating system such as GNU/Linux or
Unix. It is a program that runs other programs. The shell serves every user of the computer as
an interface to the Unix/GNU Linux system, so the user can execute different tools, utilities, or
commands with a small amount of input.
When it has finished running a program, the shell sends the result to the user on the screen, which
is the common output device. That is why it is known as a "command interpreter".
The shell is not just a command interpreter; it is also a programming language with the complete
constructs of a programming language, such as functions, variables, loops, conditional execution, and
many others.
For this reason, the GNU/Linux shell is more powerful than the Windows shell.
graphical shells
These shells allow programs to be manipulated through a graphical user interface (GUI), permitting
operations such as opening, closing, moving, and resizing windows, as well as switching focus among
windows. Ubuntu OS or Windows OS can be taken as good examples that offer a graphical user interface
for the user to interact with programs. Users do not need to type commands for these actions.
command-line shell
Shells can also be accessed by users through a command-line interface. A special
program, known as the Command Prompt in Windows or the Terminal in macOS/Linux, is provided for typing
human-readable commands such as "ls", "cat", etc., which are then executed. The result
is shown to the user on the terminal.
Working in a command-line shell can be complicated for beginners because it is hard to remember
many commands. However, the command-line shell is very powerful: it lets users store commands
in a file and run them together, so a repetitive action can be automated easily. These
files are usually known as shell scripts on macOS/Linux systems and batch files on Windows; a minimal sketch is shown below.
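As a minimal sketch (the script name greet.sh and its contents are only an example), two commands can be stored in a shell script and then run together:
#!/bin/bash
# greet.sh - a tiny shell script that runs two commands together
echo "Today's date is:"
date
The script can then be executed with: bash greet.sh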


There are various types of shells which are discussed as follows:


Bash Shell
In the bash shell, bash stands for Bourne Again Shell. It is the default shell on most Linux distributions
today. It is an sh-compatible shell and can also be installed on Windows OS. It offers practical
improvements over sh for interactive and programming use, including:
• Job Control
• Command-line editing
• Shell Aliases and Functions
• Unlimited size command history
• Integer arithmetic in a base from 2-64
csh/tcsh shell
Tcsh is an enhanced C shell. This shell can be used as an interactive login shell and as a shell-script
command processor.
Tcsh shell includes the following characteristics:
• C like syntax
•	Filename completion and programmable word completion
• Command-line editor
• Job control
• Spelling correction
Zsh shell
The Zsh shell is designed to be interactive, and it combines various aspects of other GNU/Unix Linux shells
such as ksh, tcsh, and bash. The POSIX shell standard specifications were also based on the Korn shell.
Zsh is also a powerful scripting language, like the other available shells. Some of its notable features are listed
as follows:
• Startup files
• Filename generation
• Login/Logout watching
• Concept index
• Closing comments
• Variable index
• Key index
• Function index and various others that we could find out within the man pages.
fish
Fish stands for "friendly interactive shell". It was first released in 2005. The fish shell was designed to be
fully user-friendly and interactive, just like other shells. It includes some good features, which are
mentioned below:


• Web-based configuration
• Man page completions
• Auto-suggestions
• Support for term256 terminal automation
• Completely scripted with clean scripts

essential Linux commands


Linux directory commands

1. pwd Command
The pwd command is used to display the location of the current working directory.
syntax:
pwd
2. mkdir command
The mkdir command is used to create a new directory under any directory.
syntax:
mkdir <directory name>
3. rmdir command
The rmdir command is used to delete a directory.
syntax:
rmdir <directory name>
4. ls command
The ls command is used to display a list of content of a directory.
syntax:
ls
5. cd command
The cd command is used to change the current directory.
syntax:
cd <directory name>
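An indicative session combining the directory commands above (the directory name 'projects' is only an example, and the output shown depends on the system):
pwd
/home/user
mkdir projects
cd projects
pwd
/home/user/projects
cd ..
rmdir projects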


Linux file commands


6. touch command
The touch command is used to create empty files. We can create multiple empty files by executing it
once.
syntax:
touch <file name>
touch <file1> <file2> ....
7. cat command
The cat command is a multi-purpose utility in the Linux system. It can be used to create a file, display
content of the file, copy the content of one file to another file, and more.
syntax:
cat [OPTION]... [FILE]..
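A few indicative uses of cat (the file names are only examples):
cat > notes.txt                      # create a file; type the text, then press Ctrl+D to save
cat notes.txt                        # display the content of the file
cat notes.txt todo.txt > all.txt     # copy the content of two files into a third file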
8. rm command
The rm command is used to remove a file.
syntax:
rm <file name>
9. cp command
The cp command is used to copy a file or directory.
syntax:
To copy in the same directory:
cp <existing file name> <new file name>
10. mv command
The mv command is used to move a file or a directory from one location to another location.
syntax:
mv <file name> <directory path>
11. rename command
The rename command is used to rename files. It is useful for renaming a large group of files.
syntax:
rename 's/old-name/new-name/' files
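For example, the following invocation renames every .txt file in the current directory to .bak (a hypothetical batch rename; note that some distributions ship a different rename utility whose syntax differs slightly):
rename 's/\.txt$/.bak/' *.txt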


Linux file content commands

12. head command


The head command is used to display the content of a file. It displays the first 10 lines of a file.
syntax:
head <file name>
13. tail command
The tail command is similar to the head command. The difference between the two commands is that tail
displays the last ten lines of the file content. It is useful for reading error messages.
syntax:
tail <file name>
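For instance, the most recent entries of a system log can be viewed as shown below (the log file path is only an example and varies between distributions):
tail /var/log/syslog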
14. tac command
The tac command is the reverse of cat command, as its name specified. It displays the file content in
reverse order (from the last line).
syntax:
tac <file name>
15. more command
The more command is quite similar to the cat command, as it is used to display file content in the
same way that the cat command does. The only difference between the two commands is that, in the case of
larger files, the more command displays one screenful of output at a time.
In more command, the following keys are used to scroll the page:
enter key: To scroll down page by line.
space bar: To move to the next page.
b key: To move to the previous page.
/ key: To search the string.
syntax:
more <file name>
16. less command
The less command is similar to the more command, but it also includes some extra features, such as
adjusting to the width and height of the terminal. In comparison, the more command cuts the output
to the width of the terminal.
syntax:
less <file name>


Linux user commands

17. su command
The su command switches the current shell session to another user account. In other words, it gives access
to the Linux shell as another user (commonly the superuser).
syntax:
su <user name>
18. id command
The id command is used to display the user ID (UID) and group ID (GID).
syntax:
id
19. useradd command
The useradd command is used to add a new user on a Linux server.
syntax:
useradd username
20. passwd Command
The passwd command is used to create and change the password for a user.
syntax:
passwd <username>
21. groupadd command
The groupadd command is used to create a user group.
syntax:
groupadd <group name>
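A typical sequence for creating a group and a user is sketched below (the names 'developers' and 'alice' are only examples; on most systems these commands must be run as the root user or through sudo):
groupadd developers      # create a new group
useradd alice            # create a new user account
passwd alice             # set a password for the new user
id alice                 # display the new user's UID and GID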

Linux filter commands


22. cat command
The cat command is also used as a filter. To filter a file, it is used inside pipes.
syntax:
cat <fileName> | cat or tac | cat or tac |. . .
23. cut command
The cut command is used to select a specific column of a file. The '-d' option is used as a delimiter,
and it can be a space (' '), a slash (/), a hyphen (-), or anything else. And, the '-f' option is used to
specify a column number.
syntax:
cut -d(delimiter) -f(columnNumber) <filename>
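For example, /etc/passwd uses a colon (:) as its delimiter, so its first column (the user names) can be extracted as follows:
cut -d':' -f1 /etc/passwd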

24. grep command


The grep is the most powerful and used filter in a Linux system. The 'grep' stands for "global regular
expression print." It is useful for searching the content from a file. Generally, it is used with the pipe.
syntax:
command | grep <searchWord>
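For example, the lines of /etc/passwd that contain the word "bash" can be filtered as shown below:
cat /etc/passwd | grep bash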
25. comm command
The 'comm' command is used to compare two files or streams. By default, it displays three columns,
first displays non-matching items of the first file, second indicates the non-matching item of the
second file, and the third column displays the matching items of both files.
syntax:
comm <file1> <file2>
26. sed command
The sed command is also known as stream editor. It is used to edit files using a regular expression. It
does not permanently edit files; instead, the edited content remains only on display. It does not affect
the actual file.
syntax:
command | sed 's/<oldWord>/<newWord>/'
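A minimal illustration, replacing one word in the output of echo:
echo "hello world" | sed 's/world/Linux/'     # prints: hello Linux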
27. tee command
The tee command is quite similar to the cat command. The only difference between the two filters is
that tee passes its standard input to standard output and also writes it into a file.
syntax:
cat <fileName> | tee <newFile> | cat or tac |.....
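For example, the following displays a file on the screen and, at the same time, writes the same content into a new file (the file names are only examples):
cat notes.txt | tee copy.txt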

Linux Networking Commands


28. ip command
The Linux ip command is an updated version of the ifconfig command. It is used to assign an IP address,
initialize an interface, or disable an interface.
syntax:
ip a or ip addr
29. ssh command
Linux ssh command is used to create a remote connection through the ssh protocol.
syntax:
ssh user_name@host(IP/Domain_name)</p>


30. mail command


The mail command is used to send emails from the command line.
syntax:
mail -s "Subject" <recipient address>
31. ping command
The ping command is used to check the connectivity between two nodes, that is whether the server
is connected. It is a short form of "Packet Internet Groper."
syntax:
ping <destination>
32. host command
The host command is used to display the IP address for a given domain name and vice versa. It
performs the DNS lookups for the DNS Query.
syntax:
host <domain name> or <ip address>
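Indicative uses of these networking commands are shown below (the user name, host names, and addresses are only examples; the -c option of ping, where supported, limits the number of packets sent):
ip addr                        # show the IP addresses assigned to the interfaces
ping -c 4 www.example.com      # send four echo requests to check connectivity
host www.example.com           # look up the IP address of a domain name
ssh alice@192.168.1.10         # open a remote shell on another machine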

role of system administrator in Linux

A Linux System Administrator is a person who has 'root' access, that is, the 'superuser'. This means he has
the privilege to access everything, including all user accounts, all system configurations, home
directories with all the files therein, and all other files in the system.

A Linux System Administrator has the following duties (write any five):

Installing and configuring server


•	A server is basically a computer program that serves the same computer or other computers
by providing services to them.
•	It is the most important element of modern OS and network design.
•	It is the system administrator's duty to configure the server so that non-essential or insecure services
remain inaccessible. He must be aware of the types of attacks and security bugs.

Installing and configuring application software


•	In order to ensure a correct execution environment, the administrator must provide software
that is well configured and validated.
•	He should ensure adequate memory allotment and resolve software failures and dependency
issues.


• He must provide a set of activities to control hardware and software configuration and maintain
policies for users.

creating and maintaining user accounts


•	A user can access only his own account, but the administrator has access to every user account.
•	He can add, modify, delete, or copy user accounts.
•	He is responsible for maintaining security by assigning roles to user accounts that define the
level of access.

Backing up and restoring files


•	To minimize the loss of data, the administrator must maintain backups of files and should restore
them whenever required.
•	The administrator can take backups on removable media such as external hard drives or tapes as
protection against loss.
•	Before creating a backup, the administrator must decide what to back up, how often, and on which
medium; a simple example is sketched below.
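As an illustrative sketch (the tar utility is assumed to be available, and the directory and file names are only examples), a home directory can be archived into a compressed backup and restored later:
tar -czf home_backup.tar.gz /home/alice      # create a compressed backup archive
tar -xzf home_backup.tar.gz                  # restore (extract) the archive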

Configuring a secure system


•	It is the duty of the administrator to carry out the tasks and decisions required to run a secure
Linux system and maintain data integrity.
•	This provides strong protection to individuals and corporate bodies and protects parts of the system
even if it is under attack.
•	The administrator should ensure that:
•	the system has a firewall,
•	connections from unknown networks are not allowed, and
•	software is not installed unless it is needed.

file permissions
All three owners (user owner, group, others) in the Linux system have three types of permissions
defined. Nine characters denote these three types of permissions.
•	read (r): The read permission allows you to open and read the content of a file, but you can't
do any editing or modification in the file.
•	write (w): The write permission allows you to edit, remove, or rename a file. For instance, if
a file is present in a directory and write permission is set on the file but not on the directory,
then you can edit the content of the file but can't remove or rename it.
•	execute (x): In a Unix-type system, you can't run or execute a program unless the execute permission
is set. In Windows, there is no such permission available.


Permissions are listed below:

permission    | on a file                  | on a directory
r (read)      | read file content (cat)    | read directory content (ls)
w (write)     | change file content (vi)   | create file in directory (touch)
x (execute)   | execute the file           | enter the directory (cd)

permission set
File permissions for (-rw-rw-r--)

position   | characters | ownership
1          | -          | denotes file type
2-4        | rw-        | permission for user
5-7        | rw-        | permission for group
8-10       | r--        | permission for other

When you are the User owner, then the user owner permission applies to you. Other permissions are
not relevant to you.
When you are the group then the group permission applies to you. Other permissions are not relevant
to you.
When you are the Other, then the other permission applies to you. User and group permissions are
not relevant to you.
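For instance, the permission string of a file can be inspected with the ls -l command; the output below is only indicative (the file name, owner, and group are examples):
ls -l report.txt
-rw-rw-r-- 1 alice developers 1024 Jan 10 09:30 report.txt
Here the user owner alice can read and write the file, members of the group developers can also read and write it, and all other users can only read it.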

Wine on Linux

Basically, Wine is a piece of open-source software. It is available to everyone, which means anyone can
download and use it without spending a penny. It can be considered one of the most important pieces of
software a Linux user may need, as it allows us to run Windows applications on several Linux and UNIX-based
operating systems (or other POSIX-compliant operating systems).

In simple words, if we are using a Linux or Unix-based operating system and need to run some Windows
application for a certain task or just for entertainment, we can do so by installing Wine.
Wine allows its users to run various Windows applications


on the Linux-based operating system without requiring any additional software such as a virtual
machine. All we need to do is install Wine; with Wine's help, we never need to install Windows on our
system, which is a time-consuming process (nor do we need a virtual machine).

Many users may think that Wine is also an emulator, like several other emulators available on the internet,
but that is not correct; it cannot be considered an emulator. Instead, it translates Windows API calls
into POSIX calls on the fly, eliminating the performance and memory penalties of other methods and
allowing us to cleanly integrate Windows applications into our desktop.
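As an indicative sketch on a Debian/Ubuntu-based system (the package name and the application file are only examples and may differ on other distributions):
sudo apt install wine      # install Wine from the distribution's repository
wine setup.exe             # run a Windows program through Wine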

understanding of -

user management

User management is a system to handle activities related to individuals’ access to devices, software,
and services. It focuses on managing permissions for access and actions as well as monitoring usage.
Functions of user management include:
• Providing users with authenticated access
• Supporting set up, reissuing, and decommissioning of users’ access credentials
• Establishing access privileges based on permissions
User management also can keep track of accounts related to software licenses throughout their
lifecycle. This ensures that all users have licenses for the software that they are using and that these
can be reclaimed and reissued when they are no longer in use.

device management

Device management enables organizations to administer and maintain devices, including virtual
machines, physical computers, mobile devices, and IoT devices. Device management is a critical
component of any organization's security strategy. It helps ensure that devices are secure, up-to-date,
and compliant with organizational policies, with the goal of protecting the corporate network and data
from unauthorized access.
As organizations support remote and hybrid workforces, it's more important than ever to have a solid
device management strategy. Organizations must protect and secure their resources and data on any
device.


data management

Data management is the process of ingesting, storing, organizing and maintaining the data created
and collected by an organization. Effective data management is a crucial piece of deploying the IT
systems that run business applications and provide analytical information to help drive operational
decision-making and strategic planning by corporate executives, business managers and other end
users.

The data management process includes a wide range of tasks and procedures, such as:
• Collecting, processing, validating, and storing data
• Integrating different types of data from disparate sources, including structured and unstructured
data
• Ensuring high data availability and disaster recovery
• Governing how data is used and accessed by people and apps
• Protecting and securing data and ensuring data privacy

references

1. https://www.scaler.com
2. https://www.javatpoint.com
