Chapter 4: Device Management


CoSc-2042: Operating System


4.1 Introduction
The two main jobs of a computer are I/O and processing. The role of the operating system in
computer I/O is to manage and control I/O operations and I/O devices. Computers operate a
great many kinds of devices. Most fit into the general categories of storage devices (disks,
tapes), transmission devices (network cards, modems), and human-interface devices (screen,
keyboard, mouse).

4.2 Serial versus Parallel Information Transfer


Information flows through the computer in many ways. The CPU is the central point for most
information. When you start a program, the CPU instructs the storage device to load the
program into RAM. When you create data and print it, the CPU instructs the printer to output
the data.

Because of the different types of devices that send and receive information, two major types
of data transfers take place within a computer: parallel and serial (Figure 4.1).

Figure 4.1: Parallel data transfers move data 8 bits at a time, whereas serial data transfers
move 1 bit at a time.


4.2.1 Parallel Transfer


Parallel transfers use multiple “lanes” for data and programs, and in keeping with the 8 bits =
1 byte nature of computer information, most parallel transfers use multiples of 8. Parallel
transfers take place between the following devices:

 CPU and RAM
 CPU and interface cards
 LPT (printer) port and parallel printer
 SCSI port and SCSI devices
 ATA (Advanced Technology Attachment)/IDE (Integrated Drive Electronics)
host adapter and ATA/IDE drives
 RAM and interface cards (either via the CPU or directly with DMA)

Why are parallel transfers so popular?


 Multiple bits of information are sent at the same time.
 At identical clock speeds, parallel transfers are faster than serial transfers because
more data is being transferred.

However, parallel transfers also have problems:

By Dr. Manish Kumar Mishra Page 4- 1



 Many wires or traces (wire-like connections on the motherboard or expansion
cards) are needed, leading to interference concerns and thick, expensive cables.
 Excessively long parallel cables or traces can cause data to arrive at different times.
This is referred to as signal skew (Figure 4.2).
 Differences in voltage between wires or traces can cause jitter.

Figure 4.2: Parallel cables that are too long can cause signal skew, allowing the parallel
signals to become “out of step” with each other.

4.2.2 Serial Transfers


A serial transfer uses a single “lane” in the computer for information transfers. This sounds
like a recipe for slowdowns, but it all depends on how fast the speed limit is on the “data
highway”.

The following ports and devices in the computer use serial transfers:
 Serial (also called RS-232 or COM) ports and devices
 USB (Universal Serial Bus) 1.1 and 2.0 ports and devices
 Modems (which can be internal devices or can connect to serial or USB ports)
 IEEE-1394 (FireWire, i.Link) ports and devices
 Serial ATA (SATA) host adapters and drives

Serial transfers have the following characteristics:


 One bit at a time is transferred to the device.
 Transmission speeds can vary greatly, depending on the sender and receiver.
 Very few connections are needed in the cable and ports (one transmit, one receive,
and a few control and ground wires).
 Cable lengths can be longer with serial devices. For example, an UltraDMA/66
ATA/IDE cable can be only 18 inches long for reliable data transmission, whereas a
Serial ATA cable can be almost twice as long.

Although RS-232 serial ports are slow, newer types of serial devices are as fast as or faster
than parallel devices. The extra speed is possible because serial transfers don’t have to worry
about interference or other problems caused by running so many data lines together.
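The difference between the two transfer styles can be sketched in a few lines. The Python snippet below is purely illustrative (the function names are our own, not from any real driver): it moves a byte one bit per clock "tick" the way a serial link does, and reassembles it at the other end; a parallel link would present all eight bits in a single tick.

```python
# Illustrative sketch: a serial link moves one bit per clock tick,
# while a parallel link moves all eight bits of a byte in one tick.

def to_serial_bits(byte):
    """Split a byte into 8 bits, least-significant bit first."""
    return [(byte >> i) & 1 for i in range(8)]

def from_serial_bits(bits):
    """Reassemble 8 LSB-first bits back into a byte."""
    value = 0
    for i, bit in enumerate(bits):
        value |= bit << i
    return value

# Serial: one lane, 8 ticks per byte.
bits = to_serial_bits(0xA5)
assert len(bits) == 8                       # eight ticks on one lane
assert from_serial_bits(bits) == 0xA5       # the byte survives the trip
# Parallel: eight lanes, 1 tick per byte (all bits presented at once).
```

At the same clock rate the parallel link therefore moves eight times the data, which is the speed argument made above; the cost is eight lanes that must stay in step, which is exactly what signal skew disrupts.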

4.3 Abstracting Device Differences


I/O devices can be categorized into the following categories:
 Human readable: Human-readable devices are suitable for communicating with the
computer user. Examples are printers, video display terminals, and keyboards.
 Machine readable: Machine-readable devices are suitable for communicating with
electronic equipment. Examples are disk and tape drives, sensors, controllers, and
actuators.
 Communication: Communication devices are suitable for communicating with
remote devices. Examples are digital line drivers and modems.


The following are the main differences among I/O devices:


 Data rate: There may be differences of several orders of magnitude between the data
transfer rates.
 Application: Different devices have different uses in the system.
 Complexity of control: A disk requires a much more complex control interface,
whereas a printer requires only a simple one.
 Unit of transfer: Data may be transferred as a stream of bytes or characters or in
larger blocks.
 Data representation: Different data encoding schemes are used for different devices.
 Error conditions: The nature of errors differs widely from one device to another.

4.4 I/O Hardware


A device communicates with a computer system by sending signals over a cable or even
through the air. The device communicates with the machine via a connection point (or port),
for example, a serial port. If devices use a common set of wires, the connection is called a
bus. A bus is a set of wires and a rigidly defined protocol that specifies a set of messages that
can be sent on the wires. Buses are used widely in computer architecture.

When device A has a cable that plugs into device B, and device B has a cable that plugs into
device C, and device C plugs into a port on the computer, this arrangement is called a daisy
chain. A daisy chain usually operates as a bus.

A controller is a collection of electronics that can operate a port, a bus, or a device. Some
devices have their own built-in controllers. If you look at a disk drive, you will see a circuit
board attached to one side. This board is the disk controller.

Each device controller is in charge of a specific type of device. Depending on the controller,
there may be more than one attached device. For instance, seven or more devices can be
attached to the small computer-systems interface (SCSI) controller. A device controller
maintains some local buffer storage and a set of special-purpose registers. The device
controller is responsible for moving the data between the peripheral devices that it controls
and its local buffer storage. Typically, operating systems have a device driver for each device
controller. This device driver understands the device controller and presents a uniform
interface to the device for the rest of the operating system.

To start an I/O operation, the device driver loads the appropriate registers within the device
controller. The device controller, in turn, examines the contents of these registers to
determine what action to take (such as “read a character from the keyboard”). The controller
starts the transfer of data from the device to its local buffer. Once the transfer of data is
complete, the device controller informs the device driver via an interrupt that it has finished
its operation. The device driver then returns control to the operating system.

Each controller has a few registers that are used for communicating with the CPU. By writing
into these registers, the operating system can command the device to deliver data, accept
data, switch itself on or off, or otherwise perform some action. By reading from these
registers, the operating system can learn what the device’s state is, whether it is prepared to
accept a new command, and so on. In addition to the control registers, many devices have a
data buffer that the operating system can read and write.

By Dr. Manish Kumar Mishra Page 4- 3


CoSc-2042: Operating System

An I/O port typically consists of four registers:


1. The data-in register is read by the host to get input.
2. The data-out register is written by the host to send output.
3. The status register contains bits that can be read by the host. These bits indicate
states, such as whether the current command has completed, whether a byte is
available to be read from the data-in register, and whether a device error has occurred.
4. The control register can be written by the host to start a command.

The data registers are typically 1 to 4 bytes in size.
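These four registers can be modeled as a small structure. The sketch below is purely illustrative; the field names, widths, and bit positions are assumptions for teaching purposes, not any real device's register layout.

```python
# Illustrative model of an I/O port's four registers; field names
# and bit positions are assumptions, not a real device's layout.
from dataclasses import dataclass

@dataclass
class IOPortRegisters:
    data_in: int = 0    # read by the host to get input
    data_out: int = 0   # written by the host to send output
    status: int = 0     # command-completed / data-available / error bits
    control: int = 0    # written by the host to start a command

port = IOPortRegisters()
port.data_out = 0x41    # host places a byte to send
port.control |= 0b1     # host starts a command (assumed bit position)
```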

4.4.1 Polling
The complete protocol for interaction between the host and a controller may be complicated,
but the basic handshaking notion is simple. We explain handshaking with an example. We
assume that 2 bits are used to coordinate the producer-consumer relationship between the
controller and the host. The controller indicates its state through the busy bit in the status
register. To set a bit means to write a 1 into the bit; to clear a bit means to write a 0 into it.
The controller sets the busy bit when it is busy working and clears the busy bit when it is
ready to accept the next command. The host signals its wishes via the command-ready bit in
the command register. The host sets the command-ready bit when a command is available for
the controller to execute. For this example, the host writes output through a port, coordinating
with the controller by handshaking as follows:

1. The host repeatedly reads the busy bit until that bit becomes clear.
2. The host sets the write bit in the command register and writes a byte into the data-out
register.
3. The host sets the command-ready bit.
4. When the controller notices that the command-ready bit is set, it sets the busy bit.
5. The controller reads the command register and sees the write command. It reads the
data-out register to get the byte and does the I/O to the device.
6. The controller clears the command-ready bit, clears the error bit in the status register
to indicate that the device I/O succeeded, and clears the busy bit to indicate that it is
finished.

This loop is repeated for each byte. In step 1, the host is busy-waiting, or polling: it sits in a
loop, reading the status register over and over until the busy bit becomes clear.
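The six handshaking steps above can be simulated in a few lines. The class and bit positions below are our own assumptions for illustration; a real controller's register layout and timing differ.

```python
# Illustrative simulation of the six polling/handshaking steps;
# bit positions and names are assumptions, not a real register layout.
BUSY, COMMAND_READY, ERROR, WRITE = 0x1, 0x2, 0x4, 0x8

class Controller:
    def __init__(self):
        self.status = 0      # busy and error bits live here
        self.command = 0     # command-ready and write bits live here
        self.data_out = 0
        self.device = []     # bytes the "device" has received

    def service(self):
        # Steps 4-6: see command-ready, set busy, do the I/O, clear bits.
        if self.command & COMMAND_READY:
            self.status |= BUSY
            if self.command & WRITE:
                self.device.append(self.data_out)
            self.command &= ~COMMAND_READY
            self.status &= ~ERROR      # success: clear the error bit
            self.status &= ~BUSY       # finished

def host_write_byte(ctrl, byte):
    while ctrl.status & BUSY:          # step 1: poll the busy bit
        pass
    ctrl.command |= WRITE              # step 2: set the write bit...
    ctrl.data_out = byte               # ...and load the data-out register
    ctrl.command |= COMMAND_READY      # step 3: set command-ready
    ctrl.service()                     # controller performs steps 4-6

ctrl = Controller()
for b in b"hi":
    host_write_byte(ctrl, b)
assert bytes(ctrl.device) == b"hi"
```

In this single-threaded sketch the controller is serviced synchronously, so the polling loop never actually spins; on real hardware step 1 is where the CPU burns cycles waiting.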

4.4.2 Direct Memory Access


For a device that does large transfers, such as a disk drive, it seems wasteful to use an
expensive general-purpose processor to watch status bits and to feed data into a controller
register one byte at a time, a process termed programmed I/O (PIO). Many computers avoid
burdening the main CPU with PIO by offloading some of this work to a special-purpose
processor called a Direct-Memory-Access (DMA) controller. To initiate a DMA transfer, the
host writes a DMA command block into memory. This block contains a pointer to the source
of a transfer, a pointer to the destination of the transfer, and a count of the number of bytes to
be transferred. The CPU writes the address of this command block to the DMA controller,
then goes on with other work. The DMA controller proceeds to operate the memory bus
directly, placing addresses on the bus to perform transfers without the help of the main CPU.
A simple DMA controller is a standard component in PCs.


Handshaking between the DMA controller and the device controller is performed via a pair of
wires called DMA-request and DMA-acknowledge. The device controller places a signal on
the DMA-request wire when a word of data is available for transfer. This signal causes the
DMA controller to seize the memory bus, to place the desired address on the memory-address
wires, and to place a signal on the DMA-acknowledge wire. When the device controller
receives the DMA-acknowledge signal, it transfers the word of data to memory and removes
the DMA-request signal.
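The command block the host writes to memory, and the transfer it describes, can be sketched as follows. The field names are illustrative, and the byte-wise copy loop only simulates the effect; real DMA hardware moves the data in words or bursts without any CPU loop at all.

```python
# Illustrative DMA command block and transfer; the loop merely
# simulates what the DMA controller does without CPU involvement.
from dataclasses import dataclass

@dataclass
class DMACommandBlock:
    source: int       # pointer to the source of the transfer
    destination: int  # pointer to the destination of the transfer
    count: int        # number of bytes to be transferred

def dma_transfer(memory, block):
    """Copy count bytes from source to destination (simulated)."""
    for i in range(block.count):
        memory[block.destination + i] = memory[block.source + i]

memory = bytearray(16)
memory[0:4] = b"data"
dma_transfer(memory, DMACommandBlock(source=0, destination=8, count=4))
assert memory[8:12] == b"data"
```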

4.5 Buffering
A buffer is a memory area that stores data while they are transferred between two devices or
between a device and an application. The buffers are usually set up in the main memory.
Device drivers and the kernel both may access device buffers. Basically, the buffers absorb
the mismatch between the data transfer rates of the processor or memory on one side and the
device on the other.

A key concern, and a major programming issue from the OS point of view, is the
management of buffers. One key issue in buffer management is buffer size. How buffer sizes
should be chosen can be explained simply: ideally, buffer sizes should allow a free flow of
data, with neither the producer process nor the consumer process required to wait on the
other to make data available. Next we shall look at various buffering strategies:

 Single buffer: The device first fills the buffer. The device driver then hands the
buffer over to the kernel to be emptied. Once the buffer has been emptied, the device
fills it up again for input.
 Double buffer: In this case there are two buffers. The device fills up one of the two
buffers, say buffer-0. The device driver hands buffer-0 to the kernel to be emptied,
and the device starts filling up buffer-1 while the kernel is using up buffer-0. The
roles are switched when buffer-1 is filled up.
 Circular buffer: One can say that the double buffer is a circular queue of size two.
We can extend this notion to have several buffers in the circular queue. These buffers
are filled up in sequence. The kernel accesses the filled up buffers in the same
sequence as these are filled up. The buffers are organized as a circular queue data
structure.

Figure 4.3: Buffering schemes.


Note that buffer management essentially requires managing a queue data structure. The most
general of these is the circular buffer. One has to manage the pointers for the queue head and
queue tail to determine whether the buffer is full or empty. When not full, the queue can
accept a data item from the producer; when not empty, it can supply a data item to the
consumer. This is achieved by carefully monitoring the head and tail pointers. A double
buffer is a queue of length two, and a single buffer is a queue of length one. Before moving
on, we would also like to remark that the buffer status of full or empty may be communicated
amongst the processes as an event.
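The head/tail bookkeeping described above can be sketched as a small class (an illustrative sketch, not kernel code):

```python
class CircularBuffer:
    """Fixed-capacity circular queue using head/tail indices."""
    def __init__(self, capacity):
        self.slots = [None] * capacity
        self.head = 0    # next slot the consumer reads
        self.tail = 0    # next slot the producer writes
        self.count = 0   # items currently held

    def is_full(self):
        return self.count == len(self.slots)

    def is_empty(self):
        return self.count == 0

    def put(self, item):              # producer side
        if self.is_full():
            raise BufferError("buffer full")
        self.slots[self.tail] = item
        self.tail = (self.tail + 1) % len(self.slots)
        self.count += 1

    def get(self):                    # consumer side
        if self.is_empty():
            raise BufferError("buffer empty")
        item = self.slots[self.head]
        self.head = (self.head + 1) % len(self.slots)
        self.count -= 1
        return item

# A double buffer is just the capacity-2 special case.
buf = CircularBuffer(2)
buf.put("block-0")
buf.put("block-1")
assert buf.get() == "block-0"
```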

4.6 Spooling
Acronym for Simultaneous Peripheral Operations On-line, spooling refers to putting jobs in a
buffer, a special area in memory or on a disk where a device can access them when it is
ready. Spooling is useful because devices access data at different rates. The buffer provides a
waiting station where data can rest while the slower device catches up.

The most common spooling application is print spooling. In print spooling, documents are
loaded into a buffer (usually an area on a disk), and then the printer pulls them off the buffer
at its own rate. Because the documents are in a buffer where they can be accessed by the
printer, you can perform other operations on the computer while the printing takes place in
the background. Spooling also lets you place a number of print jobs on a queue instead of
waiting for each one to finish before specifying the next one.
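Print spooling can be sketched as a simple queue that the submitter appends to and the printer drains at its own pace (class and method names below are our own, purely illustrative):

```python
# Minimal print-spool sketch: submit() returns immediately, and the
# (slower) printer pulls jobs off the queue whenever it is ready.
from collections import deque

class PrintSpooler:
    def __init__(self):
        self.queue = deque()   # the spool: jobs waiting to print
        self.printed = []      # jobs the printer has completed

    def submit(self, document):
        self.queue.append(document)    # caller carries on with other work

    def printer_tick(self):
        # Called each time the printer is ready for its next job.
        if self.queue:
            self.printed.append(self.queue.popleft())

spool = PrintSpooler()
spool.submit("report.txt")
spool.submit("photo.png")
spool.printer_tick()
assert spool.printed == ["report.txt"]
assert list(spool.queue) == ["photo.png"]
```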

4.7 Device driver


A device driver is a program or routine developed for an I/O device. A device driver
implements I/O operations or behaviours on a specific class of devices. For example, a
system that supports one or several brands of terminals, all slightly different, may have a
single terminal driver for all of them. In the layered structure of the I/O system, the device
driver lies between the interrupt handler and the device-independent I/O software. The jobs
of a device driver are the following:
 To accept request from the device independent software above it.
 To see to it that the request is executed.

How a device driver handles a request is as follows: suppose a request comes in to read
block N. If the driver is idle when the request arrives, it starts carrying out the request
immediately. Otherwise, if the driver is already busy with some other request, it places the
new request in the queue of pending requests.
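This idle-or-queue behaviour can be sketched as follows (an illustrative single-threaded model with invented names; real drivers are interrupt-driven kernel code):

```python
# Sketch of a driver that serves a request at once when idle and
# queues it otherwise; on_complete() plays the interrupt handler.
from collections import deque

class BlockDriver:
    def __init__(self):
        self.busy = False
        self.pending = deque()   # queue of pending block numbers
        self.log = []            # record of requests started

    def read_block(self, n):
        if self.busy:
            self.pending.append(n)   # driver busy: queue the request
        else:
            self._start(n)           # driver idle: serve immediately

    def _start(self, n):
        self.busy = True
        self.log.append(f"read block {n}")

    def on_complete(self):
        # "Interrupt": current request finished; start the next, if any.
        self.busy = False
        if self.pending:
            self._start(self.pending.popleft())

drv = BlockDriver()
drv.read_block(5)    # served immediately
drv.read_block(9)    # driver busy, so this one is queued
drv.on_complete()    # block 5 done; block 9 now starts
assert drv.log == ["read block 5", "read block 9"]
```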

