
BOWEN UNIVERSITY, IWO

COLLEGE OF COMPUTING AND COMMUNICATION STUDIES


DEPARTMENT OF COMPUTER SCIENCE
CSC 322- OPERATING SYSTEM II
LECTURE NOTE 2

Concurrency: Processes and Threads


Preamble
The most central concept in any operating system is the process: an abstraction of
a running program. Everything else hinges on this concept, and the operating
system designer (and student) should have a thorough understanding of what a
process is as early as possible.
Processes are one of the oldest and most important abstractions that operating
systems provide. They support the ability to have (pseudo) concurrent operation
even when there is only one CPU available. They turn a single CPU into multiple
virtual CPUs. Without the process abstraction, modern computing could not exist.

Processes
All modern computers often do several things at the same time. People used to
working with computers may not be fully aware of this fact, so a few examples
may make the point clearer. First, consider a Web server. Requests come in from
all over asking for Web pages. When a request comes in, the server checks to see if
the page needed is in the cache. If it is, it is sent back; if it is not, a disk request is
started to fetch it. However, from the CPU's perspective, disk requests take an
eternity.
While waiting for a disk request to complete, many more requests may come in. If
there are multiple disks present, some or all of the newer ones may be fired off to
other disks long before the first request is satisfied. Clearly, some way is needed to
model and control this concurrency. Processes (and especially threads) can help
here.
Now consider a user PC. When the system is booted, many processes are secretly
started, often unknown to the user. For example, a process may be started up to
wait for incoming email. Another process may run on behalf of the antivirus
program to check periodically if any new virus definitions are available. In
addition, explicit user processes may be running, printing files and backing up the
user’s photos on a USB stick, all while the user is surfing the Web. All this activity
has to be managed, and a multiprogramming system supporting multiple processes
comes in very handy here.
In any multiprogramming system, the CPU switches from process to process
quickly, running each for tens or hundreds of milliseconds. While, strictly
speaking, at any one instant the CPU is running only one process, in the course of
1 second it may work on several of them, giving the illusion of parallelism.
Sometimes people speak of pseudoparallelism in this context, to contrast it with
the true hardware parallelism of multiprocessor systems (which have two or more
CPUs sharing the same physical memory). Keeping track of multiple, parallel
activities is hard for people to do. Therefore, operating system designers over the
years have evolved a conceptual model (sequential processes) that makes
parallelism easier to deal with.

Definitions
The term process was first used by the designers of the MULTICS system in the
1960s. A process has been described as a program in execution, the animated
spirit of a procedure, the entity to which a processor is assigned, and a
dispatchable unit.

PROCESSES
There is no universally agreed-upon definition; however, some of the definitions of
a process include:
 A program in execution.
 An asynchronous activity.
 The 'animated spirit' of a procedure in execution.
 The entity to which processors are assigned.
 The 'dispatchable' unit.
The definition "a program in execution" seems to be the most frequently used.

Operations that can be performed on a process include:

 Create a process
 Suspend a process
 Resume a process
 Block a process
 Send a signal/message
 Communicate with one another

The Process Model


In this model, all the runnable software on the computer, sometimes including the
operating system, is organized into a number of sequential processes, or just
processes for short. A process is just an instance of an executing program,
including the current values of the program counter, registers, and variables.
Conceptually, each process has its own virtual CPU. In reality, of course, the real
CPU switches back and forth from process to process, but to understand the
system, it is much easier to think about a collection of processes running in
(pseudo) parallel than to try to keep track of how the CPU switches from program
to program. This rapid switching back and forth is called multiprogramming. In
Fig. 1(a) we see a computer multiprogramming four programs in memory. In Fig.
1(b) we see four processes, each with its own flow of control (i.e., its own logical
program counter), and each one running independently of the other ones. Of
course, there is only one physical program counter, so when each process runs, its
logical program counter is loaded into the real program counter. When it is
finished (for the time being), the physical program counter is saved in the process’
stored logical program counter in memory. In Fig. 1(c) we see that, viewed over a
long enough time interval, all the processes have made progress, but at any given
instant only one process is actually running.

Figure 1. (a) Multiprogramming four programs. (b) Conceptual model of four
independent, sequential processes. (c) Only one program is active at once.
With the CPU switching back and forth among the processes, the rate at which a
process performs its computation will not be uniform and probably not even
reproducible if the same processes are run again. Thus, processes must not be
programmed with built-in assumptions about timing.
The difference between a process and a program is subtle, but absolutely crucial.
An analogy may help you here. Consider a culinary-minded computer scientist
who is baking a birthday cake for his young daughter. He has a birthday cake
recipe and a kitchen well stocked with all the input: flour, eggs, sugar, extract of
vanilla, and so on. In this analogy, the recipe is the program, that is, an algorithm
expressed in some suitable notation, the computer scientist is the processor (CPU),
and the cake ingredients are the input data. The process is the activity consisting of
our baker reading the recipe, fetching the ingredients, and baking the cake.

A process is the unit of work in a system. In the process model, all software on the
computer is organized into a number of sequential processes. A process includes the
PC, registers, and variables.
Conceptually, each process has its own virtual CPU. In reality, the CPU switches
back and forth among processes. (The rapid switching back and forth is called
multiprogramming).

Process Creation
Operating systems need some way to create processes. In very simple systems, or
in systems designed for running only a single application (e.g., the controller in a
microwave oven), it may be possible to have all the processes that will ever be
needed be present when the system comes up. In general-purpose systems,
however, some way is needed to create and terminate processes as needed during
operation. We will now look at some of the issues.
Four principal events cause processes to be created:
1. System initialization.
2. Execution of a process-creation system call by a running process.
3. A user request to create a new process.
4. Initiation of a batch job.

A process may create several new processes via a system call during the course of
execution; the creating process is called the parent process and the created ones are
called child processes. Only one parent is needed to create a child process. Note that,
unlike plants and animals that use sexual reproduction, a process has only one parent.
This creation of processes yields a hierarchical structure of processes like the one in the
figure. Notice that each child has exactly one parent, but each parent may have many
children. Immediately after creation, the parent and the child have the same memory
image, the same environment strings, and the same open files. Nevertheless, each has its
own distinct address space: if either process changes a word in its address space, the
change is not visible to the other process.

In process creation, parent processes create child processes, which in turn can create
other processes, forming a tree of processes. Resources may be shared between parents
and children, and parents and children may execute concurrently.
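The parent/child relationship described above can be sketched with the POSIX fork primitive, driven here from Python's os module. This is a Unix-only sketch; the function name fork_demo is my own illustration, not part of any standard API. The child modifies its private copy of a variable, and the parent observes that its own copy is unchanged, demonstrating the distinct address spaces.

```python
import os

def fork_demo():
    """Fork a child; parent and child then have distinct address spaces."""
    x = 10
    pid = os.fork()          # returns 0 in the child, the child's pid in the parent
    if pid == 0:
        x = 99               # child: changes only its own private copy of x
        os._exit(0)
    os.waitpid(pid, 0)       # parent: wait for the child to finish
    return x                 # parent's x is untouched by the child's assignment
```

Calling fork_demo() returns 10 in the parent, even though the child set its copy of x to 99.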

Reasons for process creation

• User logs on
• User starts a program
• OS creates process to provide a service (e.g., printer daemon to manage printer)
• Program starts another process (e.g., netscape calls xv to display a picture)

Relationship between process and program


A program and a process are the same beast under different names: when the beast is
sleeping (not executing) it is called a program, and when it is executing it becomes a
process.

A computer program is a passive collection of instructions, while a process is the actual
execution of those instructions.

A process is not the same as a program. A process is more than the program code: it is
an 'active' entity, as opposed to a program, which is considered a 'passive' entity. As we
know, a program is an algorithm expressed in some suitable notation (e.g., a
programming language). Being passive, a program is only a part of a process.

Process Termination

After a process has been created, it starts running and does whatever its job is.
However, nothing lasts forever, not even processes. Sooner or later the new
process will terminate, usually due to one of the following conditions:
1. Normal exit (voluntary).
2. Error exit (voluntary).
3. Fatal error (involuntary).
4. Killed by another process (involuntary).
Most processes terminate because they have done their work. When a compiler has
compiled the program given to it, the compiler executes a system call to tell the
operating system that it is finished.
A process terminates when it finishes executing its last statement. Its resources are
returned to the system, it is purged from any system lists or tables, and its process
control block (PCB) is erased, i.e., the PCB's memory space is returned to a free
memory pool.
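The normal-exit case above can be sketched the same way as process creation: the child terminates voluntarily with a status code, and the parent reaps it so the operating system can release the child's bookkeeping. This is a minimal Unix-only sketch using Python's os module; reap_child is an illustrative name.

```python
import os

def reap_child():
    """Create a child that exits normally with status 3, then reap it.

    Reaping via waitpid is what lets the OS discard the terminated
    child's PCB and other per-process bookkeeping.
    """
    pid = os.fork()
    if pid == 0:
        os._exit(3)                  # normal (voluntary) exit, status code 3
    _, status = os.waitpid(pid, 0)   # parent collects the exit status
    return os.WEXITSTATUS(status)    # decode the child's status code
```

The parent recovers the child's exit status (3 here) from the value returned by waitpid.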

Process Hierarchies
In some systems, when a process creates another process, the parent process and
child process continue to be associated in certain ways. The child process can itself
create more processes, forming a process hierarchy. Note that unlike plants and
animals that use sexual reproduction, a process has only one parent (but zero, one,
two, or more children). So a process is more like a hydra than like, say, a cow.
In UNIX, a process and all of its children and further descendants together form a
process group. When a user sends a signal from the keyboard, the signal is
delivered to all members of the process group currently associated with the
keyboard (usually all active processes that were created in the current window).
Individually, each process can catch the signal, ignore the signal, or take the
default action, which is to be killed by the signal.

Process State Model


As a process executes, it changes state. The state of a process is defined in part by the
current activity of that process. Each process may be in one of the following states:
• New State: The process is being created.
• Running State: A process is said to be running if it has the CPU, that is, process is
actually using the CPU at that particular instant.
• Blocked (or waiting) State: A process is said to be blocked if it is waiting for some
event to happen, such as an I/O completion, before it can proceed. Note that a blocked
process is unable to run until some external event happens.
• Ready State: A process is said to be ready if it is waiting to be assigned to a processor.
• Terminated state: The process has finished execution.

Logically, the 'Running' and 'Ready' states are similar: in both cases the process is willing
to run, but in the 'Ready' state there is temporarily no CPU available for it. The 'Blocked'
state is different from the 'Running' and 'Ready' states in that the process cannot run even
if the CPU is available.
Figure 3.1: Diagram of process states.

Process State Transitions


The following are the six (6) possible transitions among the above-mentioned five (5) states:
• Transition 1 occurs when a process discovers that it cannot continue. If a running
process initiates an I/O operation before its allotted time expires, it voluntarily
relinquishes the CPU.

This state transition is:

Block (process-name): Running → Blocked.


• Transition 2 occurs when the scheduler decides that the running process has
run long enough and it is time to let another process have some CPU time; the
running process has used up its current time slice.

This state transition is:

Time-Run-Out (process-name): Running → Ready.

• Transition 3 occurs when all other processes have had their share and it is
time for the first process to run again. The CPU scheduler chooses that process to
execute next, according to some scheduling algorithm.

This state transition is:

Dispatch (process-name): Ready → Running.


• Transition 4 occurs when the external event for which a process was waiting
(such as the arrival of input) happens.

This state transition is:

Wakeup (process-name): Blocked → Ready.

• Transition 5 occurs when the process is created.

This state transition is:

Admitted (process-name): New → Ready.

• Transition 6 occurs when the process has finished execution.

This state transition is:

Exit (process-name): Running → Terminated.
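The five states and six transitions above can be encoded as a small table-driven state machine. This is purely an illustrative sketch, not how any particular kernel represents process states; the event names mirror the transition labels in the notes.

```python
from enum import Enum

class State(Enum):
    NEW = "new"
    READY = "ready"
    RUNNING = "running"
    BLOCKED = "blocked"
    TERMINATED = "terminated"

# The six legal transitions: (event, current state) -> next state.
TRANSITIONS = {
    ("block",        State.RUNNING): State.BLOCKED,     # Transition 1
    ("time_run_out", State.RUNNING): State.READY,       # Transition 2
    ("dispatch",     State.READY):   State.RUNNING,     # Transition 3
    ("wakeup",       State.BLOCKED): State.READY,       # Transition 4
    ("admitted",     State.NEW):     State.READY,       # Transition 5
    ("exit",         State.RUNNING): State.TERMINATED,  # Transition 6
}

def transition(state, event):
    """Return the next state, or raise ValueError if the move is illegal."""
    try:
        return TRANSITIONS[(event, state)]
    except KeyError:
        raise ValueError(f"illegal transition {event!r} from {state}")
```

Note that the table enforces, for example, that a blocked process cannot be dispatched directly: it must first be woken up into the ready state.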

Process State
The process state consists of everything necessary to resume the process's execution if it
is temporarily put aside:
• Code for the program
• Program’s static and dynamic data
• Program’s procedure call stack
• Contents of general purpose registers
• Contents of Program Counter (PC)
—address of next instruction to be executed
• Contents of Stack Pointer (SP)
• Contents of Program Status Word (PSW)
- interrupt status, condition codes, etc.
• OS resources in use (e.g., memory, open files, connections to other programs)
• Accounting information

Process Control Block


For every process, the OS maintains a Process Control Block (PCB), a data structure that
represents the process and its state:
• Process id number
• User id of owner
• Memory space (static, dynamic)
• Program counter (the address of the next instruction to be executed for this
process)
• CPU registers (general-purpose registers, stack pointers, index registers, and
accumulators)
• Process state (running, not-running, etc.)
• CPU scheduling information (e.g., priority)
• List of open files
• Accounting information (the amount of CPU and real time used, time limits, job
or process numbers, account numbers)
• Memory management information (the values of the base and limit registers, the
page tables, or the segment tables, depending on the memory system used by the
operating system)
• I/O status, I/O in progress
• Pointers into the CPU scheduler's state queues (e.g., the waiting queue)
Figure 3.2: Process Control Block (PCB)
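As a rough sketch, a PCB can be modelled as a plain record holding the fields listed above. Real kernels use far richer structures for this (Linux, for instance, uses struct task_struct); the field names below are illustrative, not taken from any real operating system.

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Minimal process control block: one record per process."""
    pid: int                    # process id number
    owner_uid: int              # user id of owner
    state: str = "new"          # running, ready, blocked, ...
    program_counter: int = 0    # address of the next instruction
    registers: dict = field(default_factory=dict)   # saved CPU registers
    priority: int = 0           # CPU scheduling information
    open_files: list = field(default_factory=list)  # list of open files
    cpu_time_used: float = 0.0  # accounting information
```

The default_factory arguments matter: they give each PCB its own registers dict and open-files list instead of one shared between all instances.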

PROCESSING MODES
There are various processing modes; two of them are closely connected with the
study of operating systems:
Batch Processing: The earliest computers were extremely expensive devices, and
very slow. Machines were typically dedicated to a particular set of tasks and
operated from a control panel, the operator manually entering small programs via
switches in order to load and run other programs.
These programs might take hours, even weeks, to run. As computers grew in
speed, run times dropped, and suddenly the time taken to start up the next program
became a concern. Batch processing methodologies evolved to decrease these
dead times, queuing up programs so that as soon as one completed the next would start.
To support a batch processing operation, a number of card punch or paper tape
writers would be used by programmers, who would use these inexpensive
machines to write their programs "offline". When they completed typing them,
they were submitted to the operations team, who would schedule them for running.
Important programs would be run quickly; turnaround for less important ones was unpredictable.
When the program was finally run, the output, generally printed, would be returned
to the programmer. The complete process might take days, during which the
programmer might never see the computer.

The alternative, allowing the user to operate the computer directly, was generally
far too expensive to consider, because the machine sat idle during the long periods
in which the user was simply typing in code. This limited developments in direct
interactivity to organizations that could afford to waste computing cycles, large
universities for the most part. Programmers at the universities decried the
inhuman working style that batch processing imposed, to the point that Stanford
students made a short film humorously critiquing it. They experimented with new
ways to interact directly with the computer, a field today known as human-computer
interaction.

Benefits of Batch Processing


 It allows sharing of computer resources among many users and programs,
 It shifts the time of job processing to when the computing resources are less
busy,
 It avoids idling the computing resources with minute-by-minute human
interaction and supervision,
 By keeping a high overall rate of utilization, it better amortizes the cost of a
computer, especially an expensive one.

Time Sharing: Time-sharing developed out of the realization that while any single
user was inefficient, a large group of users together was not. This was due to the
pattern of interaction: in most cases users entered bursts of information followed
by long pauses, but in a group of users working at the same time, the pauses of one
user would be filled by the activity of the others. Given an optimal group size, the
overall process could be very efficient.

Similarly, small slices of time spent waiting for disk, tape, or network input could
be granted to other users. Implementing a system able to take advantage of this
would be difficult. Batch processing was really a methodological development on
top of the earliest systems: computers still ran a single program for a single user at
any one time; all that batch processing changed was the delay between one
program and the next. Developing a system that supported multiple users at the
same time was a completely different concept: the 'state' of each user and of their
programs would have to be kept in the machine, which would then switch between
them quickly. This would consume computer cycles, and on the slow machines of
the era this was a concern. However, as computers rapidly improved in speed, and
especially in the size of the core memory available to hold state, the overhead of
time-sharing steadily fell in overall terms.

THREADS
In traditional operating systems, each process has an address space and a single
thread of control. In fact, that is almost the definition of a process. Nevertheless, in
many situations, it is desirable to have multiple threads of control in the same
address space running in quasi-parallel, as though they were (almost) separate
processes (except for the shared address space). In the following sections we will
discuss these situations and their implications.
Thread Usage
Why would anyone want to have a kind of process within a process? It turns out
there are several reasons for having these miniprocesses, called threads. Let us
now examine some of them.
The main reason for having threads is that in many applications, multiple activities
are going on at once. Some of these may be blocked from time to time. By
decomposing such an application into multiple sequential threads that run quasi-
parallel, the programming model becomes simpler.
A second argument for having threads is that since they are lighter weight than
processes, they are easier (i.e., faster) to create and destroy than processes. In many
systems, creating a thread goes 10–100 times faster than creating a process. When
the number of threads needed changes dynamically and rapidly, this property is
useful to have.
A third reason for having threads is also a performance argument. Threads yield no
performance gain when all of them are CPU bound, but when there is substantial
computing and also substantial I/O, having threads allows these activities to
overlap, thus speeding up the application.
Finally, threads are useful on systems with multiple CPUs, where real parallelism
is possible.

THREADS
A thread is a single sequential stream of execution within a process. Because threads
have some of the properties of processes, they are sometimes called lightweight
processes. Threads allow multiple streams of execution within one process. In many
respects, threads are a popular way to improve application performance through
parallelism. The CPU switches rapidly back and forth among the threads, giving the
illusion that the threads are running in parallel. Like a traditional process (i.e., a process
with one thread), a thread can be in any of several states (Running, Blocked, Ready, or
Terminated).
The following are some reasons why threads are used in the design of operating
systems.
1. A process with multiple threads makes a great server, for example a print server.
2. Because threads can share common data, they do not need to use interprocess
communication.
3. By their very nature, threads can take advantage of multiprocessors.
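Reason 2 above, threads sharing common data without interprocess communication, can be sketched with Python's threading module. The counter lives in memory visible to all threads of the process, and a lock guards the update so increments are not lost; parallel_count is an illustrative name of my own.

```python
import threading

def parallel_count(num_threads=4, increments=10_000):
    """Threads share the process's address space, so they can update a
    common counter directly; the lock prevents lost updates."""
    shared = {"counter": 0}
    lock = threading.Lock()

    def worker():
        for _ in range(increments):
            with lock:                   # serialize the read-modify-write
                shared["counter"] += 1

    threads = [threading.Thread(target=worker) for _ in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()                         # wait for every thread to finish
    return shared["counter"]
```

No pipes, sockets, or message queues are needed: the threads communicate simply by reading and writing the same memory, which is exactly what distinguishes them from separate processes.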
PROCESSES VS. THREADS
In many respects, threads operate in the same way as processes. Some of the
similarities and differences are:
Similarities
 Like processes, threads share the CPU, and only one thread is active (running) at a time.
 Like processes, threads within a process execute sequentially.
 Like processes, a thread can create children.
 And like a process, if one thread is blocked, another thread can run.
Differences
 Unlike processes, threads are not independent of one another.
 Unlike processes, all threads can access every address in the task.
 Unlike processes, threads are designed to assist one another. Processes might or
might not assist one another, because processes may originate from different users.
