Instructed By
Engr. M. Asif Shaikh
Lecture 4
Multiprocessing
Threads
Multiprocessing
Multiprocessing, in computing, is a mode of
operation in which two or more processors in
a computer simultaneously process two or
more different portions of the
same program (set of instructions).
A multiprocessing operating system (OS) is
one in which two or more central processing
units (CPUs) control the functions of the
computer.
Each CPU contains a copy of the OS, and
these copies communicate with one another
to coordinate operations.
The primary advantage of a multiprocessor
computer is speed, and thus the ability to
manage larger amounts of information.
MULTIPROCESSING VERSUS SINGLE-PROCESSOR OPERATING SYSTEMS
Multiprocessing operating systems (OSs) perform much the same work as single-processor OSs.
The difference is that multiprocessing OSs
divide the work up into various subtasks and
then assign these subtasks to different
central processing units (CPUs).
Multiprocessing uses a distinct
communication architecture to accomplish
this.
Because multiprocessing OSs rely on parallel
processing, each processor involved in a task
must be able to inform the others about how
its task is progressing.
MULTITASKING
The advent of multiprocessing OSs has had a
major influence on how people perform their
work.
Multitasking enables computers to use more user-friendly interfaces based on graphical representations of input and output.
Multiprocessing OSs, though once a major
innovation, have become the norm rather
than the exception.
Without the advantages offered by multiple
processors and OSs tailored to take
advantage of them, computers would not be
able to keep up.
Thread
Modern multiprocessing operating systems allow many processes to be active at once, and each process may contain one or more “threads” of computation used to execute its program.
A thread is a basic unit of CPU utilization; it
comprises a thread ID, a program counter, a
register set, and a stack.
Three different models relate user threads to kernel threads: many-to-one, one-to-one, and many-to-many.
For example, in a word processor, a background thread may check spelling and grammar while a foreground thread processes user input (keystrokes), while yet a third thread loads images from the hard drive, and a fourth performs periodic automatic backups of the file being edited.
Benefits
There are four major categories of benefits to
multi-threading:
1. Responsiveness - One thread may provide
rapid response while other threads are
blocked or slowed down doing intensive
calculations.
2. Resource sharing - By default threads share
common code, data, and other resources,
which allows multiple tasks to be performed
simultaneously in a single address space.
3. Economy - Creating and managing threads (and context switching between them) is much faster than performing the same tasks for processes.
4. Scalability - In a multiprocessor architecture, the threads of a single process can run in parallel on different processors.
Process Synchronization
Process Synchronization is the task of coordinating the execution of processes so that no two processes can simultaneously access the same shared data and resources.
It occurs in an operating system among
cooperating processes.
It is especially needed in a multi-process system when multiple processes run together and more than one process tries to access the same shared resource or data at the same time.
Why Process Synchronization is needed
Without synchronization, concurrent access to shared data can leave it in an inconsistent state, a situation known as a race condition.
Sections of a Program
A program that accesses shared resources can be divided into four sections:
1. Entry Section: The part of the program in which a process requests permission to enter its critical section.
2. Critical Section: The part of the program in which the shared data or resource is accessed.
3. Exit Section: The exit section allows the other processes waiting in the entry section to enter their critical sections. It also ensures that a process that has finished its execution leaves through this section.
4. Remainder Section: The remaining code, which does not access shared resources.
What is the Critical Section Problem
A critical section is a segment of code in which a process accesses shared data or resources. In the critical section, only a single process can execute at a time; the problem is to design a protocol that ensures this.
Rules for Critical Section
Any solution to the critical section problem must satisfy three rules:
1. Mutual Exclusion: Only one process may execute in its critical section at a time.
2. Progress: When no process is in the critical section and some processes want to enter, only those processes not in their remainder sections may take part in deciding which enters next, and this decision must be made in finite time.
3. Bounded Waiting: After a process requests entry to its critical section, there is a bound on the number of times other processes may enter their critical sections before that request is granted.
Peterson’s Solution
Peterson’s solution is a classic software-based solution to the critical section problem.
Initially both flags are false. When a process wants to enter its critical section, it sets its own flag to true and gives the turn to the other process; it then waits as long as the other process’s flag is true and it is the other process’s turn.
Mutex Locks
Software-based solutions such as Peterson’s
are not guaranteed to work on modern
computer architectures.
Instead, operating-system designers build software tools to solve the critical-section problem.
A mutex lock is used to protect critical regions and thus prevent race conditions.
Semaphores
Mutex locks, as mentioned earlier, are generally considered the simplest of synchronization tools.
A semaphore S is an integer variable that,
apart from initialization, is accessed only
through two standard atomic operations:
wait() and signal().