
UNIT 1

1.1 Introduction to Operating System


1.1.1 Definition of Operating System:
 An operating system is a program that manages all the
computer hardware.
 It provides a base for application programs and acts as an
intermediary between a user and the computer hardware.

The operating system has two objectives:

 First, an operating system controls the computer's
hardware.
 Second, it provides an interactive interface to the user
and interprets commands so that the user can
communicate with the hardware.

The operating system is a very important part of almost every
computer system.
Managing Hardware

 The prime objective of an operating system is to manage and
control the various hardware resources of a computer
system.
 These hardware resources include the processor, memory,
disk space and so on.
 The output is displayed on the monitor. In addition to
communicating with the hardware, the operating system
provides error-handling procedures and displays error
notifications.
 If a device is not functioning properly, the operating system
cannot communicate with it.
Providing an Interface

 The operating system organizes applications so that users
can easily access, use and store them.
 It provides a stable and consistent way for applications to
deal with the hardware without the user having to know the
details of the hardware.
 If a program is not functioning properly, the operating
system stops the application and displays an appropriate
error message.
Computer system components are divided into five parts:
 Computer hardware
 Operating system
 Utilities
 Application programs
 End user

The roles of the operating system are:

 The operating system controls and coordinates programs
for various users.
 It is a program that directly interacts with the hardware.
 The operating system is the first program loaded into the
computer, and it remains in memory at all times
thereafter.
System goals
The purpose of an operating system is to execute programs.
 Its primary goal is to make the computer system
convenient for the user.
 Its secondary goal is to use the computer hardware in
an efficient manner.

View of operating system


 User view: The user view of the computer varies with the
interface being used. Examples are Windows XP, Vista,
Windows 7 etc. Most computer users sit in front of a
personal computer (PC); in this case the operating system
is designed mostly for ease of use, with some attention paid
to resource utilization. Some users sit at a terminal
connected to a mainframe or minicomputer; in this case
other users are accessing the same computer through other
terminals. These users share resources and may exchange
information. The operating system in this case is designed
to maximize resource utilization, ensuring that all available
CPU time, memory and I/O are used efficiently and that no
individual user takes more than his or her fair share. Other
users sit at workstations connected to a network of other
workstations and servers. These users have dedicated
resources, but they also share resources such as networking
and servers like file, compute and print servers. Here the
operating system is designed to compromise between
individual usability and resource utilization.
 System view: From the computer's point of view, the
operating system is the program most intimately involved
with the hardware. An operating system manages resources,
both hardware and software, that may be required to solve
a problem: CPU time, memory space, file storage space,
I/O devices and so on. That is why the operating system
acts as the manager of these resources. Another view of the
operating system is as a control program. A control
program manages the execution of user programs to
prevent errors and improper use of the computer. It is
especially concerned with the operation and control of I/O
devices.

1.1.2 Types of Operating System


1. Mainframe System: Mainframes were the first computers
used to handle many commercial and scientific
applications. The growth of mainframe systems can be
traced from simple batch systems, where the computer runs
one and only one application, to time-shared systems,
which allow user interaction with the computer system.
a) Batch/Early System: Early computers were
physically large machines. The common input devices
were card readers and tape drives; the common output
devices were line printers, tape drives and card
punches. In these systems the user did not interact
directly with the computer system. Instead, the user
prepared a job, consisting of the program, the data and
some control information, and submitted it to the
computer operator; after some time the output
appeared. The operating system in these early
computers was fairly simple: its main task was to
transfer control automatically from one job to the next.
The operating system always resides in memory. To
speed up processing, operators batched jobs with
similar needs and ran them together as a group. The
disadvantage of a batch system is that in this execution
environment the CPU is often idle, because the I/O
devices are much slower than the CPU.

b) Multiprogrammed System: Multiprogramming
increases CPU utilization by organizing jobs so that the
CPU always has one job to execute; this is the idea
behind the multiprogramming concept. The operating
system keeps several jobs in memory simultaneously.
This set of jobs is a subset of the jobs kept in the job
pool. The operating system picks one of the jobs in
memory and begins to execute it. When that job needs
to wait, the CPU is simply switched to another job, and
so on. A multiprogramming operating system is
sophisticated because the operating system makes
decisions for the user; this is known as scheduling. If
several jobs are ready to run at the same time, the
system chooses one among them; this is known as
CPU scheduling. The disadvantage of a
multiprogrammed system is:
 It does not provide user interaction with the
computer system during program execution.
 The introduction of disk technology addressed
this: rather than being read directly from the
card reader, cards are first read onto disk. This
form of processing is known as spooling.
SPOOL stands for Simultaneous Peripheral Operations
On-Line. It uses the disk as a huge buffer, for reading
ahead from input devices and for storing output data until
the output devices can accept them. It is also used for
processing data at remote sites: the remote processing is
done at its own speed, with no CPU intervention. Spooling
overlaps the I/O of one job with the computation of other
jobs, and has a beneficial effect on the performance of the
system by keeping both the CPU and the I/O devices
working at much higher rates.

c) Time-Sharing System: The time-sharing system is also
known as a multi-user system. The CPU executes
multiple jobs by switching among them, but the
switches occur so frequently that the user can interact
with each program while it is running. An interactive
computer system provides direct communication
between the user and the system. The user gives
instructions to the operating system or to a program
directly, using a keyboard or mouse, and waits for
immediate results, so the response time must be short.
The time-sharing system allows many users to share
the computer simultaneously. Since each action in this
system is short, only a little CPU time is needed for
each user. The system switches rapidly from one user
to the next, so each user feels as if the entire computer
system is dedicated to his use, even though it is being
shared by many users. The disadvantages of a
time-sharing system are:
 It is more complex than a multiprogrammed
operating system.
 The system must have memory management and
protection, since several jobs are kept in memory
at the same time.
 A time-sharing system must also provide a file
system, so disk management is required.
 It provides a mechanism for concurrent execution,
which requires complex CPU-scheduling schemes.

2. Personal Computer System/Desktop System: Personal
computers appeared in the 1970s. They are
microcomputers that are smaller and less expensive than
mainframe systems. Instead of maximizing CPU and
peripheral utilization, these systems opt for maximizing
user convenience and responsiveness. At first, file
protection was not necessary on a personal machine. But
when other computers and other users can access the files
on a PC, file protection becomes necessary. The lack of
protection made it easy for malicious programs to destroy
data on such systems. These programs may be
self-replicating, and they spread via worm or virus
mechanisms. They can disrupt entire companies or even
worldwide networks. E.g.: Windows 98, Windows 2000,
Linux.
3. Multiprocessor Systems/Parallel Systems/Tightly
Coupled Systems: These systems have more than one
processor in close communication, sharing the computer
bus, clock, memory and peripheral devices. Ex: UNIX,
LINUX. Multiprocessor systems have three main
advantages:
 Increased throughput: the number of processes
computed per unit time. By increasing the
number of processors, more work can be done
in less time. The speed-up ratio with N
processors is not N; it is less than N, because a
certain amount of overhead is incurred in
keeping all the parts working correctly.
 Increased reliability: If functions can be
properly distributed among several processors,
then the failure of one processor will not halt
the system, but only slow it down. This ability
to continue operating in spite of failure makes
the system fault-tolerant.
 Economy of scale: Multiprocessor systems can
save money because they can share peripherals,
storage and power supplies.

The various types of multiprocessing systems are:


 Symmetric Multiprocessing (SMP): Each
processor runs an identical copy of the
operating system, and these copies
communicate with one another as required. Ex:
Encore's version of UNIX for the Multimax
computer. Virtually all modern operating
systems, including Windows NT, Solaris,
Digital UNIX, OS/2 and LINUX, now provide
support for SMP.
 Asymmetric Multiprocessing (Master-Slave
Processors): Each processor is assigned a
specific task. A master processor controls the
system and schedules and allocates the work to
the slave processors. Ex: Sun's operating
system SunOS Version 4 provides asymmetric
multiprocessing.

4. Distributed System/Loosely Coupled Systems: In
contrast to tightly coupled systems, the processors do not
share memory or a clock. Instead, each processor has its
own local memory. The processors communicate with
each other through various communication lines, such as
high-speed buses or telephone lines. Distributed systems
depend on networking for their functionality. By being
able to communicate, distributed systems can share
computational tasks and provide a rich set of features to
users. Networks vary by the protocols used, the distances
between the nodes and the transport media; TCP/IP is the
most common network protocol. The processors in a
distributed system vary in size and function: they may be
microprocessors, workstations, minicomputers or large
general-purpose computers. Network types are based on
the distance between the nodes, such as LAN (within a
room, floor or building) and WAN (between buildings,
cities or countries). The advantages of distributed systems
are resource sharing, computation speed-up, reliability and
communication.
5. Real-Time Systems: A real-time system is used when
there are rigid time requirements on the operation of a
processor or the flow of data. Sensors bring data to the
computer; the computer analyses the data and adjusts
controls to modify the sensor inputs. Systems that control
scientific experiments, medical imaging systems and some
display systems are real-time systems. The limitations of
real-time systems are:

 A real-time system is considered to function
correctly only if it returns the correct result
within its time constraints.
 Secondary storage is limited or missing; instead,
data is usually stored in short-term memory or
ROM.
 Advanced OS features are absent.
Real-time systems are of two types:
 Hard real-time systems: These guarantee that
critical tasks are completed on time; every task
must finish within its deadline.
 Soft real-time systems: A less restrictive type
of real-time system, where a critical task gets
priority over other tasks and retains that
priority until it completes. These have more
limited utility than hard real-time systems;
missing an occasional deadline is acceptable.
E.g. QNX, VxWorks. Digital audio and
multimedia are included in this category.
A real-time OS is a special-purpose OS in which
there are rigid time requirements on the operation
of a processor. It has well-defined, fixed time
constraints: processing must be done within the
time constraint or the system will fail. A real-time
system is said to function correctly only if it returns
the correct result within the time constraint. These
systems are characterized by having time as a key
parameter.
1.1.3 Structure of Operating System

1. Simple structure: Several commercial systems do not have
a well-defined structure. Such operating systems began as
small, simple and limited systems and then grew beyond
their original scope. MS-DOS is an example of such a
system; it was not carefully divided into modules. Another
example of limited structuring is the UNIX operating
system.

2. Layered approach: In the layered approach, the OS is
broken into a number of layers (levels), each built on top
of lower layers. The bottom layer (layer 0) is the
hardware; the topmost layer (layer N) is the user interface.
The main advantage of the layered approach is
modularity.
o The layers are selected such that each layer uses the
functions (or operations) and services of only lower
layers.
o This approach simplifies debugging and system
verification: the first layer can be debugged without
concern for the rest of the system. Once the first
layer is debugged, its correct functioning is
assumed while the second layer is debugged, and so
on.
o If an error is found during the debugging of a
particular layer, the error must be in that layer,
because the layers below it are already debugged.
Thus the design and implementation of the system
are simplified when the system is broken down into
layers.
o Each layer is implemented using only operations
provided by lower layers. A layer does not need to
know how these operations are implemented; it
only needs to know what these operations do.
The disadvantages of the layered approach are:
o The main difficulty with this approach involves the
careful definition of the layers, because a layer can
use only those layers below it. For example, the
device driver for the disk space used by the
virtual-memory algorithm must be at a level lower
than that of the memory-management routines,
because memory management requires the ability
to use the disk space.
o It is less efficient than a non-layered system (each
layer adds overhead to a system call, and the net
result is a system call that takes longer than on a
non-layered system).
3. Microkernel Architecture: As operating systems evolved,
their code grew in size. The idea of the microkernel is to
keep in the kernel only what is essential to the functioning
of the operating system, such as process management,
device management and memory management. Other parts
of the operating system, such as file-system management,
interprocess communication and others, are built on top of
the microkernel. This way, when the user loads the
operating system, only its essential parts are loaded, in
limited space and time; further modules can be loaded later
based on requirements.
Process and Threads in OS: Process
Management
A process is an instance of program execution. This means, for
example, that if you open up two browser windows then you have
two processes, even though they are running the same program.

1.1 Process: A process can be defined in any of the following
ways:
* A process is a program in execution.
* It is an asynchronous activity.
* It is the entity to which processors are assigned.
* It is the dispatchable unit.
* It is the unit of work in a system.

A process is more than the program code. It also includes the
current activity, as represented by the following:
* Current value of the Program Counter (PC)
* Contents of the processor's registers
* Values of the variables
* The process stack, which contains temporary data such as
subroutine parameters, return addresses, and temporary variables.
* A data section that contains global variables.

Process in Memory: Each process is represented in the operating
system by a Process Control Block (PCB), also called a task
control block.
Process Control Block (PCB): A process in an operating system
is represented by a data structure known as a process control block
(PCB) or process descriptor.

The PCB contains important information about the specific
process, including:
* The current state of the process, i.e., whether it is ready,
running, waiting, or whatever.
* Unique identification of the process in order to track "which is
which" information.
* A pointer to parent process.
* Similarly, a pointer to child process (if it exists).
* The priority of process (a part of CPU scheduling information).
* Pointers to locate memory of processes.
* A register save area.
* The processor it is running on.
The life-cycle of a process can be described by a state diagram,
which has states representing the execution status of the process
at various times and transitions that represent changes in
execution status. Each active process has its own execution status,
so there is a state diagram for each process. There are
relationships between the states of various processes that are
maintained by the operating system.
*Process state: The process state consists of everything necessary
to resume the process's execution if it is somehow put aside
temporarily. The process state consists of at least the following:
* Code for the program.
* Program's static data.
* Program's dynamic data.
* Program's procedure call stack.
* Contents of general-purpose registers.
* Contents of the program counter (PC).
* Contents of the program status word (PSW).
* Operating-system resources in use.
A process goes through a series of discrete process states.
New State: The process being created.
Running State: A process is said to be running if it has the CPU,
that is, process actually using the CPU at that particular instant.
Blocked (or waiting) State: A process is said to be blocked if it
is waiting for some event to happen, such as an I/O completion,
before it can proceed. Note that a blocked process is unable to run
until some external event happens.
Ready State: A process is said to be ready if it could use a CPU
if one were available. A ready process is runnable but temporarily
stopped to let another process run.
Terminated state: The process has finished execution.
CPU Switch from Process to Process

Process Scheduling Queues

Job queue – set of all processes in the system


Ready queue – set of all processes residing in main memory,
ready and waiting to execute
Device queues – set of processes waiting for an I/O device
Processes migrate among the various queues
Ready Queue and Various I/O Device Queues

Representation of Process Scheduling


1.2 Schedulers: A process migrates among various scheduling
queues throughout its lifetime. The OS must select, for scheduling
purposes, processes from these queues in some fashion. The
selection is carried out by the appropriate scheduler.
Long Term Scheduler: A long term scheduler or job scheduler
selects processes from job pool (mass storage device, where
processes are kept for later execution) and loads them into
memory for execution. The long term scheduler controls the
degree of multiprogramming (the number of processes in
memory).
I/O-bound process – spends more time doing I/O than
computations, many short CPU bursts
CPU-bound process – spends more time doing computations; few
very long CPU bursts

Short Term Scheduler: A short term scheduler or CPU scheduler


selects from the main memory among the processes that are ready
to execute and allocates the CPU to one of them.

Medium Term Scheduler: The medium-term scheduler, present
in some systems, is responsible for the swapping-in and
swapping-out operations: loading a process into main memory
from secondary memory (swap in) and removing a process from
main memory and storing it in secondary memory (swap out).

Medium Term Scheduler


Dispatcher:

* It is the module that gives control of the CPU to the process


selected by the short-term scheduler.
* Functions of Dispatcher: Switching context, Switching to user
mode, and Jumping to the proper location in the user program to
restart that program.

1.3 The fork():


System call fork() is used to create processes. It takes no
arguments and returns a process ID. The purpose of fork() is to
create a new process, which becomes the child process of the
caller. After a new child process is created, both processes will
execute the next instruction following the fork() system call.
Therefore, we have to distinguish the parent from the child. This
can be done by testing the returned value of fork():
* If fork() returns a negative value, the creation of a child process
was unsuccessful.
* fork() returns a zero to the newly created child process.
* fork() returns a positive value, the process ID of the child
process, to the parent. The returned process ID is of type pid_t
defined in <sys/types.h>. Normally, the process ID is an integer.
Moreover, a process can use function getpid() to retrieve the
process ID assigned to this process.
Therefore, after the system call to fork(), a simple test can tell
which process is the child.

Note : Please note that Unix will make an exact copy of the
parent's address space and give it to the child. Therefore, the
parent and child processes have separate address spaces.

Example:
*Calculate a number of times hello is printed.*
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>
int main()
{
fork();
fork();
fork();
printf("hello\n");
return 0;
}
*Solutions:*
Number of times hello printed is equal to number of processes
created. Total Number of Processes = 2^n where n is the number
of fork system calls. So here n = 3, 2^3 = 8.

C Program Forking Separate Process


#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
int main()
{
pid_t pid;
/* fork another process */
pid = fork();
if (pid < 0) { /* error occurred */
fprintf(stderr, "Fork Failed");
exit(-1);
}
else if (pid == 0) { /* child process */
execlp("/bin/ls", "ls", NULL);
}
else { /* parent process */
/* parent will wait for the child to complete */
wait(NULL);
printf("Child Complete");
exit(0);
}
}

Interprocess Communication
 Processes within a system may be independent or
cooperating
 Cooperating process can affect or be affected by other
processes, including sharing data
 Reasons for cooperating processes:
 Information sharing
 Computation speedup
 Modularity
 Convenience
 Cooperating processes need interprocess communication
(IPC)
 Two models of IPC
 Shared memory
 Message passing

Communications Models

Cooperating Processes
An independent process cannot affect or be affected by the
execution of another process.
A cooperating process can affect or be affected by the execution
of another process.

1.4 What are threads?

Thread: A thread is a single sequential stream of execution within
a process. Because threads have some of the properties of
processes, they are sometimes called "lightweight processes".
Threads allow multiple streams of execution within a process. In
an operating system that has a thread facility, the basic unit of
CPU utilization is a thread.
 A thread is a basic unit of CPU utilization, consisting of a
program counter, a stack, and a set of registers, ( and a
thread ID. )
 Traditional ( heavyweight ) processes have a single thread
of control - There is one program counter, and one
sequence of instructions that can be carried out at any
given time.
 As shown in Figure 1 below, multi-threaded applications
have multiple threads within a single process, each having
their own program counter, stack and set of registers, but
sharing common code, data, and certain structures such as
open files.

Figure 1: Single and Multi-Threaded Process

Motivation

Threads are very useful in modern programming whenever a


process has multiple tasks to perform independently of the others.

This is particularly true when one of the tasks may block, and it is
desired to allow the other tasks to proceed without blocking.
For example, in a word processor, a background thread may check
spelling and grammar while a foreground thread processes user
input (keystrokes), while yet a third thread loads images from the
hard drive, and a fourth does periodic automatic backups of the
file being edited.

Another example is a web server - Multiple threads allow for


multiple requests to be satisfied simultaneously, without having to
service requests sequentially or to fork off separate processes for
every incoming request. ( The latter is how this sort of thing was
done before the concept of threads was developed. A daemon
would listen at a port, fork off a child for every incoming request
to be processed, and then go back to listening to the port. )

Figure 2: Multi-threaded Server Architecture.

1.5 Benefits

There are four major categories of benefits to multi-threading:


Responsiveness - One thread may provide rapid response while
other threads are blocked or slowed down doing intensive
calculations.
Resource sharing - By default threads share common code, data,
and other resources, which allows multiple tasks to be performed
simultaneously in a single address space.
Economy - Creating and managing threads ( and context switches
between them ) is much faster than performing the same tasks for
processes.
Scalability, i.e. Utilization of multiprocessor architectures - A
single threaded process can only run on one CPU, no matter how
many may be available, whereas the execution of a multi-threaded
application may be split amongst available processors. ( Note that
single threaded processes can still benefit from multi-processor
architectures when there are multiple processes contending for the
CPU, i.e. when the load average is above some certain threshold.
)

Multicore Programming
A recent trend in computer architecture is to produce chips with
multiple cores, or CPUs on a single chip.
A multi-threaded application running on a traditional single-core
chip would have to interleave the threads, as shown in Figure 3.
On a multi-core chip, however, the threads could be spread across
the available cores, allowing true parallel processing, as shown in
Figure 4.

Figure 3: Concurrent Execution of single core

Figure 4: Parallel Execution on multi-core

For operating systems, multi-core chips require new scheduling


algorithms to make better use of the multiple cores available.
As multi-threading becomes more pervasive and more important (
thousands instead of tens of threads), CPUs have been developed
to support more simultaneous threads per core in hardware.

For application programmers, there are five areas where multi-


core chips present new challenges:
Identifying tasks - Examining applications to find activities that
can be performed concurrently.
Balance - Finding tasks to run concurrently that provide equal
value, i.e. don't waste a thread on trivial tasks.
Data splitting - To prevent the threads from interfering with one
another.
Data dependency - If one task is dependent upon the results of
another, then the tasks need to be synchronized to assure access in
the proper order.
Testing and debugging - Inherently more difficult in parallel
processing situations, as the race conditions become much more
complex and difficult to identify.

Types of Parallelism
In theory there are two different ways to parallelize the workload:
 Data parallelism divides the data up amongst multiple
cores ( threads ), and performs the same task on each
subset of the data. For example dividing a large image up
into pieces and performing the same digital image
processing on each piece on different cores.
 Task parallelism divides the different tasks to be
performed among the different cores and performs them
simultaneously.
In practice no program is ever divided up solely by one or the other
of these, but instead by some sort of hybrid combination.
1.6 Multithreading Models
There are two types of threads to be managed in a modern system:
User threads and kernel threads.
 User threads are supported above the kernel, without
kernel support. These are the threads that application
programmers would put into their programs.
 Kernel threads are supported within the kernel of the OS
itself. All modern OSes support kernel level threads,
allowing the kernel to perform multiple simultaneous tasks
and/or to service multiple kernel system calls
simultaneously.
In a specific implementation, the user threads must be mapped to
kernel threads, using one of the following strategies.

1) Many-To-One Model
In the many-to-one model, many user-level threads are all mapped
onto a single kernel thread.
Thread management is handled by the thread library in user space,
which is very efficient.
However, if a blocking system call is made, then the entire process
blocks, even if the other user threads would otherwise be able to
continue.
Because a single kernel thread can operate only on a single CPU,
the many-to-one model does not allow individual processes to be
split across multiple CPUs.
Green threads (for Solaris) and GNU Portable Threads
implemented the many-to-one model in the past, but few systems
continue to do so today.
Many-to-One Model

2) One-To-One Model
The one-to-one model creates a separate kernel thread to handle
each user thread.
One-to-one model overcomes the problems listed above involving
blocking system calls and the splitting of processes across
multiple CPUs.
However, the overhead of creating a kernel thread for each user
thread is significant, slowing down the system.
Most implementations of this model place a limit on how many
threads can be created.
Linux and Windows from 95 to XP implement the one-to-one
model for threads.
One-to-One Model

3) Many-To-Many Model
The many-to-many model multiplexes any number of user threads
onto an equal or smaller number of kernel threads, combining the
best features of the one-to-one and many-to-one models.
Users have no restrictions on the number of threads created.
Blocking kernel system calls do not block the entire process.
Processes can be split across multiple processors.
Individual processes may be allocated variable numbers of kernel
threads, depending on the number of CPUs present and other
factors.

Many-to-Many Model
One popular variation of the many-to-many model is the two-tier
model, which allows either many-to-many or one-to-one
operation.
IRIX, HP-UX, and Tru64 UNIX use the two-tier model, as did
Solaris prior to Solaris 9.

Two-tier Model

1.7 Thread Libraries


Thread libraries provide programmers with an API for creating
and managing threads.
Thread libraries may be implemented either in user space or in
kernel space. The former involves API functions implemented
solely within user space, with no kernel support. The latter
involves system calls, and requires a kernel with thread library
support.

There are three main thread libraries in use today:


 POSIX Pthreads - may be provided as either a user or
kernel library, as an extension to the POSIX standard.
 Win32 threads - provided as a kernel-level library on
Windows systems.
 Java threads - Since Java generally runs on a Java Virtual
Machine, the implementation of threads is based upon
whatever OS and hardware the JVM is running on, i.e.
either Pthreads or Win32 threads depending on the system.

 The following sections demonstrate the use of threads in all
three systems for calculating the sum of integers from 0 to N
in a separate thread, and storing the result in a variable
"sum".

Pthreads
 The POSIX standard ( IEEE 1003.1c ) defines the
specification for pThreads, not the implementation.
 pThreads are available on Solaris, Linux, Mac OSX,
Tru64, and via public domain shareware for Windows.
 Global variables are shared amongst all threads.
 One thread can wait for the others to rejoin before
continuing.
pThreads begin execution in a specified function, in this example
the runner( ) function:
Implicit Threading ( Optional )
Shifts the burden of addressing the programming challenges
outlined above from the application programmer to the compiler
and run-time libraries.

1.8 Thread Pools


 Creating new threads every time one is needed and then
deleting it when it is done can be inefficient, and can also
lead to a very large ( unlimited ) number of threads being
created.
 An alternative solution is to create a number of threads
when the process first starts, and put those threads into a
thread pool.
 Threads are allocated from the pool as needed, and
returned to the pool when no longer needed.
 When no threads are available in the pool, the process may
have to wait until one becomes available.
 The ( maximum ) number of threads available in a thread
pool may be determined by adjustable parameters,
possibly dynamically in response to changing system
loads.
 Win32 provides thread pools through the QueueUserWorkItem()
function. Java also provides support for thread pools
through the java.util.concurrent package, and Apple
supports thread pools under the Grand Central Dispatch
architecture.
1.9 Threading Issues
The fork( ) and exec( ) System Calls

Q: If one thread forks, is the entire process copied, or is the new
process single-threaded?
A: It is system dependent.
A: If the new process execs right away, there is no need to copy
all the other threads. If it doesn't, then the entire process should be
copied.
A: Many versions of UNIX provide multiple versions of the fork
call for this purpose.

Signal Handling
Q: When a multi-threaded process receives a signal, to what thread
should that signal be delivered?
A: There are four major options:
Deliver the signal to the thread to which the signal applies.
Deliver the signal to every thread in the process.
Deliver the signal to certain threads in the process.
Assign a specific thread to receive all signals in a process.
The best choice may depend on which specific signal is involved.
UNIX allows individual threads to indicate which signals they are
accepting and which they are ignoring. However the signal can
only be delivered to one thread, which is generally the first thread
that is accepting that particular signal.
UNIX provides two separate system calls, kill( pid, signal ) and
pthread_kill( tid, signal ), for delivering signals to processes or
specific threads respectively.
Windows does not support signals, but they can be emulated using
Asynchronous Procedure Calls ( APCs ). APCs are delivered to
specific threads, not processes.
1.10 Thread Cancellation
Threads that are no longer needed may be cancelled by another
thread in one of two ways:
Asynchronous Cancellation cancels the thread immediately.
Deferred Cancellation sets a flag indicating the thread should
cancel itself when it is convenient. It is then up to the cancelled
thread to check this flag periodically and exit nicely when it sees
the flag set.
( Shared ) resource allocation and inter-thread data transfers can
be problematic with asynchronous cancellation.

1.11 Thread-Local Storage


Most data is shared among threads, and this is one of the major
benefits of using threads in the first place.
However sometimes threads need thread-specific data also.
Most major thread libraries ( pThreads, Win32, Java ) provide
support for thread-specific data, known as thread-local storage or
TLS. Note that this is more like static data than local
variables, because it does not cease to exist when the function ends.

1.12 Scheduler Activations


Many implementations of threads provide a virtual processor as
an interface between the user thread and the kernel thread,
particularly for the many-to-many or two-tier models.
This virtual processor is known as a "Lightweight Process", LWP.
There is a one-to-one correspondence between LWPs and kernel
threads.
The number of kernel threads available, ( and hence the number
of LWPs ) may change dynamically.
The application ( user level thread library ) maps user threads onto
available LWPs.
Kernel threads are scheduled onto the real processor(s) by the OS.
The kernel communicates to the user-level thread library when
certain events occur ( such as a thread about to block ) via an
upcall, which is handled in the thread library by an upcall handler.
The upcall also provides a new LWP for the upcall handler to run
on, which it can then use to reschedule the user thread that is about
to become blocked. The OS will also issue upcalls when a thread
becomes unblocked, so the thread library can make appropriate
adjustments.
If the kernel thread blocks, then the LWP blocks, which blocks the
user thread.
Ideally there should be at least as many LWPs available as there
could be concurrently blocked kernel threads. Otherwise if all
LWPs are blocked, then user threads will have to wait for one to
become available.

Light Weight Process(LWP)


Unix System Calls

1. Open System Call


The function establishes a connection between a process and a file. The
open() system call opens the file specified by pathname. If the specified
file does not exist, it may optionally (if O_CREAT is specified in flags)
be created by open(). The return value of open() is a file descriptor, a
small, nonnegative integer that is used in subsequent system calls
(read(2), write(2), lseek(2), fcntl(2), etc.) to refer to the open file.
The prototype of the function
#include <sys/types.h>
#include <fcntl.h>
int open (const char *pathname, int access_mode , mode_t
permission);
Return Value : integer value file Descriptor on success & -1 on failure
Arguments:
Pathname : It can be absolute path name or a relative path name
Access_mode : An integer which specifies how file is to be accessed
by calling process
Permission: The mode argument specifies the file mode bits be applied
when a new file is created. This argument must be supplied when
O_CREAT is specified in flags

Access mode flag Use


O_RDONLY Opens file for read-only
O_WRONLY Opens file for write-only
O_RDWR Opens file for read & write
Access modifier flags: Access modifier flags can be used along with
access mode flags for additional functionality
O_APPEND : appends data to end of file
O_TRUNC : if the file already exists, discards its contents and sets
file size to zero
O_CREAT : creates the file if it does not exist
O_EXCL : used with O_CREAT only. This flag causes open to
fail if the file exists
O_NONBLOCK : specifies that any subsequent read or write on the
file should be non- blocking
O_NOCTTY : specifies not to use the named terminal device file as
the calling process control terminal

Example program:

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
int main()
{
int fd;
if((fd=open("file.dat",O_RDONLY))==-1)
{
perror("cannot open file.dat");
exit(0);
}
else
printf("\n FILE OPENED SUCCESSFULLY");
return 0;
}
2. Read
This function fetches a fixed size block of data from a file referenced
by a given file descriptor
#include <sys/types.h>
#include <unistd.h>
ssize_t read (int fdesc ,void* buf, size_t size);
Return Value : Number of bytes read on success, 0 on reaching
End-of-file & -1 on failure
Arguments:
fdesc : The file descriptor of a file from where the contents are to be
read.
Buf: The buffer where the read contents are to be stored
Size: The number of bytes to be read.

Example program:
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
int main()
{
char b[21]; /* one extra byte for the terminator */
int fd,xr;
if((fd=open("write",O_RDONLY))==-1)
{
printf("cannot open file");
exit(1);
}
do
{
xr=read(fd,b,20);
if(xr<0)
break;
b[xr]='\0';
printf("%s",b);
}
while(xr==20);
close(fd);
return 0;
}
3. Write
The write function puts a fixed size block of data to a file referenced
by a file descriptor
#include <sys/types.h>
#include <unistd.h>
ssize_t write (int fdesc ,const void* buf, size_t size);
Return Value : Number of bytes written successfully & -1 on failure
Arguments:
fdesc : The file descriptor of a file to which the contents are to be
written.
Buf: The buffer containing the contents to be written
Size: The number of bytes to be written.

Example program:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
int main(int ac,char *av[])
{
int fd;
int i=1;
char *sep=" ";
if(ac<2)
{
printf("\n INSUFFICIENT ARGUMENTS");
exit(1);
}
if((fd=open("file.dat",O_WRONLY|O_CREAT|O_TRUNC,0660))==-1)
{
printf("\n CANNOT CREATE THE FILE");
exit(1);
}
while(i<ac)
{
write(fd,av[i],(unsigned)strlen(av[i]));
write(fd,sep,(unsigned)strlen(sep));
i++;
}
close(fd);
return 0;
}

4. Close
Disconnects a file from a process. Close function will deallocate
system resources
#include <unistd.h>
int close (int fdesc);
Return Value : 0 on success & -1 on failure
Arguments:
fdesc : The file descriptor which is to be closed

5. Link
The link function creates a new link for existing file i.e make a new
name for a file.
Prototype :
#include <unistd.h>
int link (const char* cur_link ,const char* new_link)
Return Value : 0 on success & -1 on failure
Arguments:
cur_link : Path name of the existing file
new_link: New path name
6. unlink
Delete a name and possibly the file it refers to
#include <unistd.h>
int unlink (const char* cur_link );
Return Value : 0 on success & -1 on failure
Arguments:
cur_link : Path name of the file to delete
Example program: link & unlink
#include <stdio.h>
#include <unistd.h>
int main(int argc, char* argv[]) {
int status;
/* Are src and dest file name arguments missing? */
if(argc != 3){
printf("Usage: mv file1 file2");
return 1;
}
/* Create the new link (the new name for the file) */
status = link(argv[1], argv[2]);
if (status == -1) {
perror("link error");
return 2;
}
/* Remove the old name */
status = unlink(argv[1]);
if(status == -1){
perror("unlink");
return 3;
}
return 0;
}
7. Stat, fstat
stat() and fstat() retrieve information about a file, storing it in the
buffer pointed to by statv. stat() identifies the file by path_name,
while fstat() uses an open file descriptor.
#include <sys/types.h>
#include <unistd.h>
int stat (const char* path_name,struct stat* statv)
int fstat (const int fdesc,struct stat* statv)
Return Value : 0 on success & -1 on failure
Arguments:
path_name: Path name of the file
fdesc: file descriptor of the file
statv: buffer to store the statistics of the file
The definition of structure struct stat is:
struct stat
{
dev_t st_dev;
ino_t st_ino;
mode_t st_mode;
nlink_t st_nlink;
uid_t st_uid;
gid_t st_gid;
dev_t st_rdev;
off_t st_size;
time_t st_atime;
time_t st_mtime;
time_t st_ctime;
};
Example program:

#include <stdio.h>
#include <unistd.h>
#include <sys/stat.h>
#include <time.h>
void printFileProperties(struct stat stats);
int main()
{
char path[100];
struct stat stats;
printf("Enter source file path: ");
scanf("%s", path);
if (stat(path, &stats) == 0)
{
printFileProperties(stats);
}
else
{
printf("Unable to get file properties.\n");
printf("Please check whether '%s' file exists.\n", path);
}
return 0;
}
void printFileProperties(struct stat stats)
{
struct tm dt;
printf("\nFile access: ");
if (stats.st_mode & S_IRUSR)
printf("read ");
if (stats.st_mode & S_IWUSR)
printf("write ");
if (stats.st_mode & S_IXUSR)
printf("execute");
printf("\nFile size: %ld", (long)stats.st_size);
dt = *(gmtime(&stats.st_ctime));
printf("\nCreated on: %d-%d-%d %d:%d:%d", dt.tm_mday,
dt.tm_mon + 1, dt.tm_year + 1900,
dt.tm_hour, dt.tm_min, dt.tm_sec);
dt = *(gmtime(&stats.st_mtime));
printf("\nModified on: %d-%d-%d %d:%d:%d", dt.tm_mday,
dt.tm_mon + 1, dt.tm_year + 1900,
dt.tm_hour, dt.tm_min, dt.tm_sec);
}
8. fcntl file locking
The function helps to query or set access control flags and the close-
on-exec flag of any file descriptor. It can also be used for locking and
unlocking of files as follows:
Syntax:
#include<fcntl.h>
int fcntl (int fdesc, int cmd_flag, …);
Return Value : 0 on success & -1 on failure
Arguments:
fdesc: file descriptor of the file
Cmd_flag : command or action to be performed

Cmd_flag
F_SETLK - Sets a file lock. Do not block if this cannot succeed
immediately
F_SETLKW - Sets a file lock and blocks the calling process until
the lock is acquired
F_GETLK - Queries as to which process locked a specified region
of a file

For file locking the third argument is struct flock-typed variable.


struct flock
{
short l_type;
short l_whence;
off_t l_start;
off_t l_len;
pid_t l_pid;
};
l_type and l_whence fields of flock

l_type value Use


F_RDLCK Sets as a read (shared) lock on a specified region
F_WRLCK Sets a write (exclusive) lock on a specified region
F_UNLCK Unlocks a specified region

l_whence value Use


SEEK_CUR The l_start value is added to the current file pointer
address
SEEK_SET The l_start value is added to byte 0 of file
SEEK_END The l_start value is added to the end (current size)
of the file

The l_len field specifies the size of a locked region beginning from the
start address defined by l_whence and l_start. If l_len is 0, the lock
extends from the start address to the end of the file, including any data
appended later. l_len cannot have a negative value.
When fcntl is called, the variable contains the region of the file locked
and the ID of the process that owns the locked region. This is returned
via the l_pid field of the variable.
Example program:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
int main ( ) {
int fd;
struct flock lock;
fd=open("foo.txt",O_RDONLY);
lock.l_type=F_RDLCK;
lock.l_whence=SEEK_SET;
lock.l_start=10;
lock.l_len=15;
fcntl(fd,F_SETLK,&lock);
lock.l_type = F_UNLCK;
lock.l_whence = SEEK_SET;
lock.l_start = 0;
lock.l_len = 0;
if((fcntl(fd,F_SETLK,&lock))==-1)
{
perror("fcntl");
exit(0);
}
printf("Unlocked\n");
close(fd);
return 0;
}
9. Opendir
The opendir() function opens a directory stream corresponding
to the directory name path_name and returns a pointer to the
directory stream. The stream is positioned at the first entry in
the directory.
DIR* opendir (const char* path_name);
Return value: a pointer to the directory stream on success; on
error, NULL is returned.
Argument:
Path_name: path name of directory to be opened

10.Readdir
The readdir() function returns a pointer to a dirent structure
representing the next directory entry in the directory stream pointed to
by dirp.
struct dirent* readdir(DIR* dir_fdesc);
returns: NULL on reaching the end of the directory stream or if an
error occurred.
Arguments:
dir_fdesc: value is the DIR* return value from an opendir call.
In the glibc implementation, the dirent structure is defined as
follows:
struct dirent {
ino_t d_ino; /* Inode number */
off_t d_off; /* Not an offset; see below */
unsigned short d_reclen; /* Length of this record */
unsigned char d_type; /* Type of file; not supported by all filesystem types */
char d_name[256]; /* Null-terminated filename */
};

11.closedir
It terminates the connection between the dir_fdesc handler and a
directory file.
int closedir (DIR* dir_fdesc);
returns: 0 on Success and -1 on Failure.
Arguments:
dir_fdesc: value is the DIR* return value from an opendir call.

Example program: opendir, readdir, closedir

#include <stdio.h>
#include <dirent.h>
int main(int argc, char **argv)
{
DIR *dp;
struct dirent *entry;
dp=opendir(argv[1]);
printf("\n contents of the directory %s are \n", argv[1]);
while((entry=readdir(dp))!=NULL)
printf("%s\n",entry->d_name);
closedir(dp);
return 0;
}
12.Fork
fork() creates a new process by duplicating the calling process. The
new process is referred to as the child process. The calling process is
referred to as the parent process.
#include <sys/types.h>
#include <unistd.h>
pid_t fork(void);

On success, the PID of the child process is returned in the parent, and 0
is returned in the child. On failure, -1 is returned in the parent, no
child process is created

13.exec
The exec() family of functions replaces the current process image with
a new process image.
#include <unistd.h>
extern char **environ;
int execl(const char *pathname, const char *arg, ... /* (char *)
NULL */);
int execlp(const char *file, const char *arg, ... /* (char *)
NULL */);
int execle(const char *pathname, const char *arg, ... /*, (char
*) NULL, char * const envp[] */);
int execv(const char *pathname, char *const argv[]);
int execvp(const char *file, char *const argv[]);
int execvpe(const char *file, char *const argv[], char *const
envp[]);
Return Value: The exec() functions return only if an error has
occurred. The return value is -1
The functions can be grouped based on the letters following the "exec"
prefix.
l - execl(), execlp(), execle()
The const char *arg and subsequent ellipses can be thought of as arg0,
arg1, ..., argn. Together they describe a list of one or more pointers to
null-terminated strings that represent the argument list available to the
executed program. The first argument, by convention, should point to
the filename associated with the file being executed. The list of
arguments must be terminated by a null pointer, and, since these are
variadic functions, this pointer must be cast (char *) NULL.
v - execv(), execvp(), execvpe()
The char *const argv[] argument is an array of pointers to null-
terminated strings that represent the argument list available to the new
program. The first argument, by convention, should point to the
filename associated with the file being executed. The array of pointers
must be terminated by a null pointer.
e - execle(), execvpe()
The environment of the caller is specified via the argument envp. The
envp argument is an array of pointers to null-terminated strings and
must be terminated by a null pointer. All other exec() functions (which
do not include 'e' in the suffix) take the environment for the new
process image from the external variable environ in the calling process.
p - execlp(), execvp(), execvpe()
These functions duplicate the actions of the shell in searching for an
executable file if the specified filename does not contain a slash (/)
character.
14.wait
Parent wait for state changes in a child of the calling process, and
obtain information about the child whose state has changed. A
state change is considered to be: the child terminated; the child was
stopped by a signal; or the child was resumed by a signal. In the case
of a terminated child, performing a wait allows the system to release
the resources associated with the child; if a wait is not performed, then
the terminated child remains in a "zombie" state
#include <sys/types.h>
#include <sys/wait.h>
pid_t wait(int *wstatus);
Return value: on success, returns the process ID of the terminated
child; on error, -1 is returned
Argument
wstatus: state of terminated child
WIFEXITED(wstatus)
returns true if the child terminated normally, that is, by
calling exit(3) or _exit(2), or by returning from main().
WEXITSTATUS(wstatus)
returns the exit status of the child. This consists of the
least significant 8 bits of the status argument that the child
specified in a call to exit(3) or _exit(2) or as the argument
for a return statement in main(). This macro should be
employed only if WIFEXITED returned true.
WIFSIGNALED(wstatus)
returns true if the child process was terminated by a signal.
WTERMSIG(wstatus)
returns the number of the signal that caused the child process
to terminate. This macro should be employed only if
WIFSIGNALED returned true.
WCOREDUMP(wstatus)
returns true if the child produced a core dump (see core(5)).
This macro should be employed only if WIFSIGNALED returned
true. This macro is not specified in POSIX.1-2001 and is not
available on some UNIX implementations (e.g., AIX, SunOS).
Therefore, enclose its use inside #ifdef WCOREDUMP ... #endif.
WIFSTOPPED(wstatus)
returns true if the child process was stopped by delivery of a
signal; this is possible only if the call was done using
WUNTRACED or when the child is being traced (see ptrace(2)).
WSTOPSIG(wstatus)
returns the number of the signal which caused the child to
stop. This macro should be employed only if WIFSTOPPED
returned true.
WIFCONTINUED(wstatus)
(since Linux 2.6.10) returns true if the child process was
resumed by delivery of SIGCONT.

15.Process ID
Every process has a unique process ID, a nonnegative integer.
Following functions can be used to print various IDs of process
#include <unistd.h>
#include <sys/types.h>
pid_t getpid (void);
pid_t getppid (void);
uid_t getuid (void);
uid_t geteuid (void);
gid_t getgid (void);
gid_t getegid (void);

16.exit
exit performs certain cleanup processing and then returns control to the kernel

#include <stdlib.h>
void _exit (int status)
void exit (int status)

Example program: fork, exec, wait, exit, getpid


#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(int argc, char *argv[])
{
printf("Inside main\n");
int res=1;
pid_t pid=fork();
/* parent and child start execution from the next instruction */
if(pid<0)
{
printf("Error generated\n");
}
if(pid==0)
{
printf("Inside child process,PID=%d\n",getpid());
execl("./encrypt","encry",argv[1],argv[2],NULL); /* second arg is just a reference name */
}
else{
printf("Inside parent process ID =%d\n",getpid());
wait(&res);
if(WIFEXITED(res)==1)
{
printf("Terminates normally\n");
}
else{
printf("Abnormal termination");
exit(0);
}
}
return 0;
}
Linux Case Study

Design Principles

Linux resembles other traditional, non-microkernel UNIX


implementations. It is a multiuser, preemptively multitasking
system with a full set of UNIX-compatible tools. Linux’s file
system adheres to traditional UNIX semantics, and the standard
UNIX networking model is fully implemented. The internal
details of Linux’s design have been influenced heavily by the
history of this operating system’s development.

Components of a Linux System


The Linux system is composed of three main bodies of code, in
line with most traditional UNIX implementations:
1. Kernel. The kernel is responsible for maintaining all the
important abstractions of the operating system, including
such things as virtual memory and processes.
2. System libraries. The system libraries define a standard set
of functions through which applications can interact with
the kernel. These functions implement much of the
operating-system functionality that does not need the full
privileges of kernel code.
3. System utilities. The system utilities are programs that
perform individual, specialized management tasks
Figure below illustrates the various components that make up a
full Linux system. The most important distinction here is between
the kernel and everything else. All the kernel code executes in the
processor’s privileged mode with full access to all the physical
resources of the computer. Linux refers to this privileged mode as
kernel mode. Under Linux, no user code is built into the kernel.
Any operating-system-support code that does not need to run in
kernel mode is placed into the system libraries and runs in user
mode. Unlike kernel mode, user mode has access only to a
controlled subset of the system’s resources.
Fig : Components of a Linux System

While some operating systems have adopted a message-passing
architecture for their kernel internals, Linux retains UNIX’s
historical model: the kernel is created as a single, monolithic
binary. The main reason is performance. Because all kernel code
and data structures are kept in a single address space, no context
switches are necessary when a process calls an operating-system
function or when a hardware interrupt is delivered. Moreover, the
kernel can pass data and make requests between various
subsystems using relatively cheap C function invocation and not
more complicated interprocess communication (IPC). This single
address space contains not only the core scheduling and virtual
memory code but all kernel code, including all device drivers, file
systems, and networking code.

The system libraries provide many types of functionality. At the


simplest level, they allow applications to make system calls to the
Linux kernel. Making a system call involves transferring control
from unprivileged user mode to privileged kernel mode; the
details of this transfer vary from architecture to architecture. The
libraries take care of collecting the system-call arguments and, if
necessary, arranging those arguments in the special form
necessary to make the system call.

The Linux system includes a wide variety of user-mode


programs—both system utilities and user utilities. The system
utilities include all the programs necessary to initialize and then
administer the system, such as those to set up networking
interfaces and to add and remove users from the system. User
utilities are also necessary to the basic operation of the system but
do not require elevated privileges to run.
Kernel Modules
Kernel modules allow a Linux system to be set up with a standard
minimal kernel, without any extra device drivers built in. Any
device drivers that the user needs can be either loaded explicitly
by the system at startup or loaded automatically by the system on
demand and unloaded when not in use.

The module support under Linux has four components:


1. The module-management system allows modules to be
loaded into memory and to communicate with the rest of
the kernel.
2. The module loader and unloader, which are user-mode
utilities, work with the module-management system to
load a module into memory.
3. The driver-registration system allows modules to tell the
rest of the kernel that a new driver has become available.
4. A conflict-resolution mechanism allows different device
drivers to reserve hardware resources and to protect those
resources from accidental use by another driver.

Module-management
Loading a module requires more than just loading its binary
contents into kernel memory. The system must also make sure that
any references the module makes to kernel symbols or entry points
are updated to point to the correct locations in the kernel’s address
space. Linux deals with this reference updating by splitting the job
of module loading into two separate sections: the management of
sections of module code in kernel memory and the handling of
symbols that modules are allowed to reference.
The loading of the module is performed in two stages. First, the
module-loader utility asks the kernel to reserve a contiguous area
of virtual kernel memory for the module. The kernel returns the
address of the memory allocated, and the loader utility can use this
address to relocate the module’s machine code to the correct
loading address. A second system call then passes the module,
plus any symbol table that the new module wants to export, to the
kernel. The final module-management component is the module
requester. The kernel defines a communication interface to which
a module-management program can connect.

Driver Registration
Once a module is loaded, it remains no more than an isolated
region of memory until it lets the rest of the kernel know what new
functionality it provides. The kernel maintains dynamic tables of
all known drivers and provides a set of routines to allow drivers to
be added to or removed from these tables at any time.
A device driver might want to register two separate mechanisms
for accessing the device. Registration tables include, among
others, the following items:
 Device drivers. These drivers include character devices
(such as printers, terminals, and mice), block devices
(including all disk drives), and network interface devices.
 File systems. The file system may be anything that
implements Linux’s virtual file system calling routines.
 Network protocols. A module may implement an entire
networking protocol, such as TCP or simply a new set of
packet-filtering rules for a network firewall.
 Binary format. This format specifies a way of recognizing,
loading, and executing a new type of executable file.

Conflict Resolution
Linux provides a central conflict-resolution mechanism to help
arbitrate access to certain hardware resources. Its aims are as
follows:
 To prevent modules from clashing over access to hardware
resources.
 To prevent auto probes—device-driver probes that auto-
detect device configuration—from interfering with
existing device drivers.
 To resolve conflicts among multiple drivers trying to
access the same hardware—as, for example, when both the
parallel printer driver and the parallel line IP (PLIP)
network driver try to talk to the parallel port
To these ends, the kernel maintains lists of allocated hardware
resources. The PC has a limited number of possible I/O ports
(addresses in its hardware I/O address space), interrupt lines, and
DMA channels. When any device driver wants to access such a
resource, it is expected to reserve the resource with the kernel
database first. This requirement incidentally allows the system
administrator to determine exactly which resources have been
allocated by which driver at any given point.
