Unit - 4 COA

UNIT - 4

Input Output Organization

Input/Output Subsystem

The I/O subsystem of a computer provides an efficient mode of communication between the
central system and the outside environment. It handles all the input-output operations of the
computer system.

Peripheral Devices

Input or output devices that are connected to a computer are called peripheral devices. These
devices are designed to read information into or out of the memory unit upon command from the
CPU and are considered part of the computer system. These devices are also called
peripherals.

For example, keyboards, display units and printers are common peripheral devices.

There are three types of peripherals:

1. Input peripherals: Allow user input, from the outside world to the computer.
Example: keyboard, mouse etc.
2. Output peripherals: Allow information output, from the computer to the outside
world. Example: printer, monitor etc.
3. Input-output peripherals: Allow both input (from the outside world to the computer) as
well as output (from the computer to the outside world). Example: touch screen etc.

Input-Output Interface

Input-output interface provides a method for transferring information between internal storage
and external I/O devices. Peripherals connected to a computer need special communication
links for interfacing them with the central processing unit. The purpose of the communication
link is to resolve the differences that exist between the central computer and each peripheral.

The major differences are:

1. Peripherals are electromechanical and electromagnetic devices and their manner of operation
is different from the operation of the CPU and memory, which are electronic devices.
Therefore, a conversion of signal values may be required.

2. The data transfer rate of peripherals is usually slower than the transfer rate of the CPU, and
consequently, a synchronization mechanism may be needed.

3. Data codes and formats in peripherals differ from the word format in the CPU and memory.

4. The operating modes of peripherals are different from each other and each must be
controlled so as not to disturb the operation of other peripherals connected to the CPU.

To resolve these differences, computer systems include special hardware components between the
CPU and peripherals to supervise and synchronize all input and output transfers. These
components are called interface units because they interface between the processor bus and the
peripheral device.

I/O bus

The I/O bus from the processor is attached to all peripheral interfaces. To communicate with a
particular device, the processor places a device address on the address lines. Each interface
attached to the I/O bus contains an address decoder that monitors the address lines. When the
interface detects its own address, it activates the path between the bus lines and the device
that it controls. All peripherals whose address does not correspond to the address on the bus
are disabled by their interface.

At the same time that the address is made available on the address lines, the processor provides
a function code on the control lines. The selected interface responds to the function code and
proceeds to execute it. The function code is referred to as an I/O command and is in essence an
instruction that is executed in the interface and its attached peripheral unit. The interpretation of
the command depends on the peripheral that the processor is addressing. There are four types of
commands that an interface may receive. They are classified as control, status, data output, and
data input.
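The four command classes can be made concrete with a small sketch. The class and method names below are my own illustration of the idea, not part of any real interface specification:

```python
# Hypothetical sketch of an interface unit dispatching the four
# I/O command classes (control, status, data output, data input).
class Interface:
    def __init__(self):
        self.status_register = 0b0001   # assume bit 0 = device ready
        self.data_register = 0

    def execute(self, command, payload=None):
        if command == "control":        # configure or start the peripheral
            return "device started"
        if command == "status":         # report interface/device state
            return self.status_register
        if command == "data_output":    # CPU -> peripheral
            self.data_register = payload
            return "byte latched for output"
        if command == "data_input":     # peripheral -> CPU
            return self.data_register
        raise ValueError(f"unknown I/O command: {command}")
```

For example, `Interface().execute("status")` plays the role of the CPU issuing a status command over the control lines and reading back the status word.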
The I/O processor is sometimes called a data channel.
Modes of Transfer

Binary information received from an external device is usually stored in memory for later
processing. Information transferred from the central computer into an external device originates
in the memory unit. The CPU merely executes the I/O instructions and may accept the data
temporarily, but the ultimate source or destination is the memory unit.
Data transfer between the central computer and I/O devices may be handled in a variety of
modes. Some modes use the CPU as an intermediate path; others transfer the data directly to and
from the memory unit. Data transfer to and from peripherals may be handled in one of three
possible modes:

1. Programmed I/O

2. Interrupt-initiated I/O

3. Direct memory access (DMA)

Programmed I/O

Programmed I/O operations are the result of I/O instructions written in the computer
program. Each data item transfer is initiated by an instruction in the program. Usually, the transfer
is to and from a CPU register and peripheral. Other instructions are needed to transfer the
data to and from the CPU and memory. Transferring data under program control requires constant
monitoring of the peripheral by the CPU. Once a data transfer is initiated, the CPU is required to
monitor the interface to see when a transfer can again be made. It is up to the programmed
instructions executed in the CPU to keep close tabs on everything that is taking place in the
interface unit and the I/O device.

In the programmed I/O method, the CPU stays in a program loop until the I/O unit
indicates that it is ready for data transfer. This is a time-consuming process since it keeps the
processor busy needlessly.

Example of Programmed I/O

In the programmed I/O method, the I/O device does not have direct access to memory. A transfer
from an I/O device to memory requires the execution of several instructions by the CPU,
including an input instruction to transfer the data from the device to the CPU and a store
instruction to transfer the data from the CPU to memory. Other instructions may be needed to
verify that the data are available from the device and to count the number of words
transferred.

An example of data transfer from an I/O device through an interface into the CPU is shown in
Fig. A. The device transfers bytes of data one at a time as they are available. When a byte of data
is available, the device places it on the I/O bus and enables its data-valid line. The interface
accepts the byte into its data register and enables the data-accepted line. The interface sets a bit
in the status register that we will refer to as an F or "flag" bit. The device can now disable the
data-valid line, but it will not transfer another byte until the data-accepted line is disabled by the
interface. A program is written for the computer to check the flag in the status register to
determine if a byte has been placed in the data register by the I/O device. This is done by
reading the status register into a CPU register and checking the value of the flag bit. If the
flag is equal to 1, the CPU reads the data from the data register. The flag bit is then cleared to 0
by either the CPU or the interface, depending on how the interface circuits are designed. Once
the flag is cleared, the interface disables the data-accepted line and the device can then transfer the
next data byte.
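The flag-bit handshake just described can be sketched in a few lines. This is a toy model written for illustration, not hardware; the class and method names are mine:

```python
# Toy model of the data-valid / data-accepted handshake and the F flag.
class IOInterface:
    def __init__(self):
        self.flag = 0            # F bit: 1 = byte waiting in data register
        self.data_register = 0

    def device_transfer(self, byte):
        """Device side: honoured only after the previous byte is accepted."""
        if self.flag:
            return False         # data-accepted still pending; device waits
        self.data_register = byte   # byte latched from the I/O bus
        self.flag = 1               # interface sets F (data-valid seen)
        return True

    def cpu_poll_and_read(self):
        """CPU side: read status, test F, then read data and clear F."""
        if self.flag == 0:
            return None          # flag clear: the program loops back and polls
        byte = self.data_register
        self.flag = 0            # clearing F re-enables the device
        return byte
```

Note how `device_transfer` refuses a second byte while the flag is still set, mirroring the rule that the device waits until the data-accepted line is disabled.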

A flowchart of the program that must be written for the CPU is shown in Fig. B. It is assumed
that the device is sending a sequence of bytes that must be stored in memory. The transfer of
each byte requires three instructions:

1. Read the status register.
2. Check the status of the flag bit; if it is not set, branch back to step 1; if it is set, proceed to step 3.
3. Read the data register.

[Fig. A: Data transfer from an I/O device to the CPU through an interface]

The programmed I/O method is particularly useful in small low-speed computers or in systems
that are dedicated to monitor a device continuously. The difference in information transfer rate
between the CPU and the I/O device makes this type of transfer inefficient. To see why this is
inefficient, consider a typical computer that can execute the two instructions that read the status
register and check the flag in 1 µs.
Assume that the input device transfers its data at an average rate of 100 bytes per second. This is
equivalent to one byte every 10,000 µs. This means that the CPU will check the flag 10,000
times between each transfer. The CPU is wasting time while checking the flag instead of doing
some other useful processing task.
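The arithmetic in the paragraph above can be checked directly:

```python
# Check the polling arithmetic: one poll (read status + check flag)
# takes 1 microsecond, and the device delivers 100 bytes per second.
poll_time_us = 1
bytes_per_second = 100
interval_us = 1_000_000 // bytes_per_second    # microseconds between bytes
checks_between_bytes = interval_us // poll_time_us
print(interval_us, checks_between_bytes)       # 10000 10000
```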
[Fig. B: Flowchart of the CPU program for programmed I/O transfer]

Freeing up the CPU to do other work

The CPU cannot access memory during a DMA transfer. However, there are several factors
which in combination allow apparent parallel memory access by the CPU and the device
performing the DMA transfer:

– Memory might have multiple access channels; the CPU will use one and DMA another.

– The CPU takes multiple clock cycles to execute an instruction. Once it has fetched the
instruction, which takes maybe one or two cycles, it can often execute the entire instruction
without further memory access (unless it is an instruction which itself accesses memory, such as a
mov instruction with an indirect operand).

– The device performing the DMA transfer is significantly slower than the CPU, so the
CPU will not need to halt on every instruction but just occasionally when the DMA device is
accessing the memory (cycle stealing).

– The CPU might also use the cache.

In combination, these factors mean that the device performing the DMA transfer will have little
impact on the CPU speed.
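A back-of-the-envelope calculation makes the point. All the numbers below are illustrative assumptions, not figures from the text:

```python
# Rough estimate of the fraction of memory cycles a DMA device steals.
memory_cycle_ns = 100                    # assumed memory cycle time
dma_bytes_per_second = 1_000_000         # assumed 1 MB/s DMA device
cycles_available_per_second = 1_000_000_000 // memory_cycle_ns
cycles_stolen_per_second = dma_bytes_per_second   # one cycle stolen per byte
slowdown = cycles_stolen_per_second / cycles_available_per_second
print(f"{slowdown:.1%} of memory cycles stolen")  # 10.0% of memory cycles stolen
```

Even under these deliberately generous assumptions, the CPU loses only a modest fraction of its memory cycles, which is why cycle stealing has little visible impact.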
DMA Controller

The term DMA stands for direct memory access. The hardware device used for direct memory
access is called the DMA controller. The DMA controller is a control unit, part of the I/O device's
interface circuit, which can transfer blocks of data between I/O devices and main memory with
minimal intervention from the processor.

DMA Controller Diagram in Computer Architecture

The DMA controller provides an interface between the bus and the input-output devices. Although it
transfers data without intervention of the processor, it is controlled by the processor. The processor
initiates the DMA controller by sending the starting address, the number of words in the data block
and the direction of transfer of data, i.e. from I/O devices to the memory or from main memory to
I/O devices. More than one external device can be connected to the DMA controller.

The DMA controller contains an address unit, for generating addresses and selecting the I/O device for
transfer. It also contains a control unit and a data count for keeping count of the number of blocks
transferred and indicating the direction of transfer of data. When the transfer is completed, DMA
informs the processor by raising an interrupt. The typical block diagram of the DMA controller is
shown in the figure below.
Working of DMA Controller

The DMA controller has to share the bus with the processor to make the data transfer. The device that
holds the bus at a given time is called the bus master. When a transfer from an I/O device to the memory
or vice versa has to be made, the processor stops the execution of the current program, increments
the program counter, saves its state on the stack and then sends a DMA select signal to the DMA controller
over the address bus.

If the DMA controller is free, it requests control of the bus from the processor by raising the bus
request signal. The processor grants the bus to the controller by raising the bus grant signal; now the
DMA controller is the bus master. The processor initiates the DMA controller by sending the
memory addresses, the number of blocks of data to be transferred and the direction of data transfer. After
assigning the data transfer task to the DMA controller, instead of waiting idly till completion
of the data transfer, the processor resumes the execution of the program after retrieving its instructions
from the stack.
The DMA controller now has full control of the buses and can interact directly with memory and I/O
devices independent of the CPU. It makes the data transfer according to the control instructions
received from the processor. After completion of the data transfer, it disables the bus request signal and
the CPU disables the bus grant signal, thereby moving control of the buses back to the CPU.

When an I/O device wants to initiate a transfer, it sends a DMA request signal to the DMA
controller, which acknowledges it if it is free. The controller then requests the
processor for the bus by raising the bus request signal. After receiving the bus grant signal, it
transfers the data from the device. For an n-channel DMA controller, n external devices
can be connected.
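The request/grant sequence above can be condensed into a sketch. The class names, and the idea of modelling bus mastership as a single field, are my own simplification:

```python
# Sketch of DMA bus arbitration: bus request raised -> bus grant received
# -> DMA is bus master for the block transfer -> bus handed back to CPU.
class Bus:
    def __init__(self):
        self.master = "CPU"          # the current bus master

class DMAController:
    def __init__(self, bus):
        self.bus = bus
        self.busy = False

    def dma_request(self, device, block):
        if self.busy:
            return None              # controller already serving a channel
        self.busy = True
        self.bus.master = "DMA"      # BR raised, BG received: DMA is master
        transferred = list(block)    # block moved between device and memory
        self.bus.master = "CPU"      # BR dropped; CPU drops BG, regains bus
        self.busy = False
        return transferred
```

A call such as `DMAController(Bus()).dma_request("disk", [0x10, 0x20])` completes the whole transfer and leaves the CPU as bus master again, mirroring the handover described in the text.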

The DMA transfers the data in three modes, which include the following.

a) Burst Mode: In this mode, the DMA hands over the buses to the CPU only after completion of the whole
data transfer. Meanwhile, if the CPU requires the bus, it has to stay idle and wait for the data
transfer to complete.

b) Cycle Stealing Mode: In this mode, the DMA gives control of the buses to the CPU after the transfer of
every byte. It continuously issues a request for bus control, makes the transfer of one byte and
returns the bus. By this, the CPU doesn't have to wait for a long time if it needs the bus for a higher
priority task.

c) Transparent Mode: Here, the DMA transfers data only when the CPU is executing an
instruction which does not require the use of the buses.
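The three modes differ mainly in how long the CPU can be kept off the bus. The toy function below encodes that difference for an assumed 8-byte block; the slot counts are illustrative, not measurements:

```python
# Worst-case bus wait (in byte-transfer slots) seen by the CPU under
# each DMA mode, for a block of `block` bytes.
def cpu_max_wait_slots(mode, block=8):
    if mode == "burst":
        return block       # CPU idles until the whole block is done
    if mode == "cycle_stealing":
        return 1           # bus returned after every byte
    if mode == "transparent":
        return 0           # DMA only uses cycles the CPU does not need
    raise ValueError(mode)

for mode in ("burst", "cycle_stealing", "transparent"):
    print(mode, cpu_max_wait_slots(mode))
```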
Types of Instructions

Instructions are divided into two categories: non-privileged instructions and privileged
instructions. A non-privileged instruction is an instruction that any application or user can execute.
A privileged instruction, on the other hand, is an instruction that can only be executed in kernel
mode. Instructions are divided in this manner because privileged instructions could harm the
kernel.

In any operating system, it is necessary to have a dual mode of operation to ensure the protection
and security of the system from unauthorized or errant users. This dual mode separates the User
Mode from the System Mode or Kernel Mode.

Privileged Instructions

The instructions that can run only in Kernel Mode are called Privileged Instructions. Privileged
instructions possess the following characteristics:

(i) If any attempt is made to execute a privileged instruction in User Mode, then it will not be
executed; it is treated as an illegal instruction and the hardware traps it to the operating system.

(ii) Before transferring control to any user program, it is the responsibility of the
operating system to ensure that the timer is set to interrupt. Thus, if the timer interrupts,
the operating system regains control.
Thus, any instruction which can modify the contents of the timer is a privileged instruction.

(iii) Privileged instructions are used by the operating system in order to achieve correct
operation.

(iv) Various examples of Privileged Instructions include:

 I/O instructions and halt instructions
 Turn off all interrupts
 Set the timer
 Context switching
 Clear the memory or remove a process from the memory
 Modify entries in the device-status table
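The dual-mode rule in point (i) can be sketched as a check performed before every instruction. The instruction names below are illustrative, not a real ISA:

```python
# Minimal sketch of dual-mode protection: privileged instructions
# attempted in User Mode trap to the operating system.
PRIVILEGED = {"halt", "set_timer", "disable_interrupts", "clear_memory"}

def execute(instruction, mode):
    if instruction in PRIVILEGED and mode != "kernel":
        return "trap: illegal instruction"   # hardware traps to the OS
    return f"executed {instruction}"

print(execute("set_timer", "user"))      # trap: illegal instruction
print(execute("set_timer", "kernel"))    # executed set_timer
```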
Non-Privileged Instructions

The instructions that can run in User Mode are called Non-Privileged Instructions.

Various examples of Non-Privileged Instructions include:

 Reading the status of the processor
 Reading the system time
 Generating any trap instruction
 Sending the final printout of a printer

Also, it is important to note that changing the mode from privileged back to non-privileged is
done with an instruction that does not itself generate any interrupt.

Traps

A trap, also known as a software interrupt, is an instruction that explicitly generates an exception
condition. The most common use of a trap is to enter supervisor mode. The entry into supervisor
mode must be controlled to maintain security—if the interface between user and supervisor mode
is improperly designed, a user program may be able to sneak code into the supervisor mode that
could be executed to perform harmful operations.

ASSIGNING INTERRUPTS

A system designer can decide which hardware peripheral can produce which interrupt request. This
decision can be implemented in hardware or software (or both) and depends upon the embedded
system being used.

An interrupt controller connects multiple external interrupts to one of the two ARM interrupt
requests. Sophisticated controllers can be programmed to allow an external interrupt source to
cause either an IRQ or FIQ exception.

When it comes to assigning interrupts, system designers have adopted a standard design practice:

▪ Software interrupts are normally reserved to call privileged operating system routines. For
example, an SWI instruction can be used to change a program running in user mode to a
privileged mode. For an SWI handler example, take a look at Chapter 11.
▪ Interrupt requests are normally assigned for general-purpose interrupts. For example, a periodic
timer interrupt to force a context switch tends to be an IRQ exception. The IRQ exception has a
lower priority and higher interrupt latency (to be discussed in the next section) than the FIQ
exception.
▪ Fast interrupt requests are normally reserved for a single interrupt source that requires a fast
response time, for example, direct memory access specifically used to move blocks of memory.
Thus, in an embedded operating system design, the FIQ exception is used for a specific
application, leaving the IRQ exception for more general operating system activities.
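This convention can be summarised as a routing table. The interrupt-source names below are hypothetical examples, not taken from any particular board:

```python
# Sketch of the standard assignment: one latency-critical source on FIQ,
# everything else on the lower-priority, general-purpose IRQ line.
ROUTING = {
    "dma_block_done": "FIQ",   # the single fast-response source
    "timer_tick": "IRQ",       # periodic context-switch timer
    "uart_rx": "IRQ",          # general-purpose peripheral
}

def raise_interrupt(source):
    line = ROUTING.get(source, "IRQ")    # default to the general line
    return f"{source} -> {line} exception"

print(raise_interrupt("dma_block_done"))  # dma_block_done -> FIQ exception
print(raise_interrupt("timer_tick"))      # timer_tick -> IRQ exception
```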

Software Interrupt Definition

A software interrupt, also called an exception, is an interrupt that is caused by software,
usually by a program in user mode.

An interrupt is a signal to the kernel (i.e., the core of the operating system) that an
event has occurred, and this results in changes in the sequence of instructions that is
executed by the CPU (central processing unit). One of the two main types of interrupts,
a hardware interrupt, is a signal to the system from an event that has originated in
hardware, such as the pressing of a key on the keyboard, a movement of the mouse or
a progression in the system clock.

Software interrupts are caused by events such as requests by an application program for
certain services from the operating system, or the termination of such programs. When
it receives a software interrupt signal, the CPU may temporarily switch control to an
interrupt handler routine, and the process (i.e., a running instance of a program) in the
kernel that was suspended by the interrupt will be resumed after the interrupt has been
accommodated. Each type of software interrupt is associated with an interrupt handler,
which is a software routine that takes control when the interrupt occurs.

User mode is the non-privileged mode in which each process begins. Non-privileged means
that processes in this mode are prohibited from accessing those portions of
memory that have been allocated to other programs or to the kernel.

Another way in which software interrupts differ from hardware interrupts is that they
are not started immediately, but, rather, only at certain times; that is, they begin only
after a hardware interrupt or a system call has occurred. As is the case with hardware
interrupts, the number of types of software interrupts is limited.
Difference between Hardware Interrupt and Software Interrupt

1. Hardware Interrupt:
A hardware interrupt is caused by some hardware device, such as a request to start an I/O, a hardware
failure or something similar. Hardware interrupts were introduced as a way to avoid wasting the
processor's valuable time in polling loops, waiting for external events.

For example, when an I/O operation is completed, such as reading some data into the computer
from a tape drive.

2. Software Interrupt:
A software interrupt is invoked by the use of the INT instruction. This event immediately stops
execution of the program and passes execution over to the INT handler. The INT handler is
usually a part of the operating system and determines the action to be taken. It occurs when an
application program terminates or requests certain services from the operating system.

For example, output to the screen, execute file etc.

Difference between Hardware Interrupt and Software Interrupt:

No. | Hardware Interrupt | Software Interrupt
1 | Generated from an external device or hardware. | Generated by an internal system (software) of the computer.
2 | It does not increment the program counter. | It increments the program counter.
3 | Can be invoked by some external device, such as a request to start an I/O or the occurrence of a hardware failure. | Can be invoked with the help of the INT instruction.
4 | It has lower priority than software interrupts. | It has the highest priority among all interrupts.
5 | Triggered by external hardware and considered one of the ways to communicate with the outside peripherals or hardware. | Triggered by software and considered one of the ways to communicate with the kernel or to trigger system calls, especially during error or exception handling.
6 | It is an asynchronous event. | It is a synchronous event.
7 | Classified into two types: 1. Maskable interrupts, 2. Non-maskable interrupts. | Classified into two types: 1. Normal interrupts, 2. Exceptions.
8 | Keystroke depressions and mouse movements are examples of hardware interrupts. | All system calls are examples of software interrupts.
