
Operating Systems

Instructed By
Engr. M. Asif Shaikh

Lecture 4
Multiprocessing
Threads

Multiprocessing
 Multiprocessing, in computing, is a mode of operation in which two or more processors in a computer simultaneously process two or more different portions of the same program (set of instructions).

 Multiprocessing is typically carried out by two or more microprocessors or cores.

 A multiprocessing operating system (OS) is one in which two or more central processing units (CPUs) control the functions of the computer.

 Each CPU contains a copy of the OS, and these copies communicate with one another to coordinate operations.

 The use of multiple processors allows the computer to perform calculations faster, since tasks can be divided up between processors.

 The primary advantage of a multiprocessor computer is speed, and thus the ability to manage larger amounts of information.

 Because each processor in such a system is assigned to perform a specific function, it can perform its task, pass the instruction set on to the next processor, and begin working on a new set of instructions.

MULTIPROCESSING VERSUS SINGLE-PROCESSOR OPERATING SYSTEMS
 Multiprocessing operating systems (OSs) perform the same functions as single-processor OSs.

 They schedule and monitor operations and calculations in order to complete user-initiated tasks.

 The difference is that multiprocessing OSs divide the work up into various subtasks and then assign these subtasks to different central processing units (CPUs).

 Multiprocessing uses a distinct communication architecture to accomplish this.

 A multiprocessing OS needs a mechanism for the processors to interact with one another as they schedule tasks and coordinate their completion.

 Because multiprocessing OSs rely on parallel processing, each processor involved in a task must be able to inform the others about how its task is progressing.

 This allows the work of the processors to be integrated when the calculations are done such that delays and other inefficiencies are minimized.

MULTITASKING
 The advent of multiprocessing OSs has had a major influence on how people perform their work.

 Multiprocessing OSs can execute more than one program at a time.

 This enables computers to use more user-friendly interfaces based on graphical representations of input and output.

 It allows users with relatively little training to perform computing tasks that once were highly complex.

 They can even perform many such tasks at once.

 Multiprocessing OSs, though once a major innovation, have become the norm rather than the exception.

 As each generation of computers must run more and more complex applications, the processing workload becomes greater and greater.

 Without the advantages offered by multiple processors and OSs tailored to take advantage of them, computers would not be able to keep up.

Thread
 Modern multiprocessing operating systems allow many processes to be active, where each process is a “thread” of computation being used to execute a program.

 A thread of execution is the smallest sequence of programmed instructions that can be managed independently by a scheduler, which is typically a part of the operating system.

 A thread is a basic unit of CPU utilization; it comprises a thread ID, a program counter, a register set, and a stack.

 It shares with other threads belonging to the same process its code section, data section, and other operating-system resources, such as open files and signals.

 Three different models relate user threads and kernel threads.

 The many-to-one model maps many user threads to a single kernel thread.

 The one-to-one model maps each user thread to a corresponding kernel thread.

 The many-to-many model multiplexes many user threads to a smaller or equal number of kernel threads.

 Most modern operating systems provide kernel support for threads.

 These include Windows, Mac OS X, Linux, and Solaris.

 Thread libraries provide the application programmer with an API for creating and managing threads.

 Three primary thread libraries are in common use: POSIX Pthreads, Windows threads, and Java threads.
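
The sketch below is a minimal example of the kind of API these libraries provide, using POSIX Pthreads: it creates one thread that runs a worker function and then waits for it to finish. The function and variable names (worker, value) are illustrative, not from the lecture.

    /* Minimal Pthreads sketch: create one thread and wait for it to finish.
       Build with: gcc demo.c -pthread                                       */
    #include <pthread.h>
    #include <stdio.h>

    /* Function executed by the new thread. */
    void *worker(void *arg)
    {
        int *value = (int *)arg;
        printf("Worker thread received %d\n", *value);
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;       /* thread ID of the new thread      */
        int value = 42;      /* data passed to the worker thread */

        pthread_create(&tid, NULL, worker, &value);  /* start the thread    */
        pthread_join(tid, NULL);                     /* wait until it exits */
        return 0;
    }
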
Motivation

 Threads are very useful in modern programming whenever a process has multiple tasks to perform independently of the others.

 This is particularly true when one of the tasks may block, and it is desired to allow the other tasks to proceed without blocking.

 For example, in a word processor, a background thread may check spelling and grammar while a foreground thread processes user input (keystrokes), a third thread loads images from the hard drive, and a fourth does periodic automatic backups of the file being edited.

Benefits
 There are four major categories of benefits to multi-threading:

1. Responsiveness - One thread may provide rapid response while other threads are blocked or slowed down doing intensive calculations.

2. Resource sharing - By default threads share common code, data, and other resources, which allows multiple tasks to be performed simultaneously in a single address space.

3. Economy - Creating and managing threads (and context switches between them) is much faster than performing the same tasks for processes.

4. Scalability (i.e. utilization of multiprocessor architectures) - A single-threaded process can only run on one CPU, no matter how many may be available, whereas the execution of a multi-threaded application may be split amongst the available processors.

Process Synchronization
 Process synchronization is the task of coordinating the execution of processes so that no two processes can access the same shared data and resources at the same time.

 It occurs in an operating system among cooperating processes.

 While executing many concurrent processes, process synchronization helps to maintain the consistency of shared data and the orderly execution of cooperating processes.

 It is especially needed in a multi-process system when multiple processes are running together and more than one process tries to gain access to the same shared resource or data at the same time.

Why Process Synchronization is needed

 For example, consider process A changing the data in a memory location while another process B is trying to read the data from the same memory location.

 There is a high probability that the data read by the second process will be erroneous.
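
As a rough sketch of this situation, the C program below has two threads update the same shared variable with no synchronization; because the read-modify-write is not atomic, updates can be lost and the final value is unpredictable. The names (counter, increment) are illustrative.

    /* Race-condition sketch: two threads increment a shared counter
       with no synchronization, so some updates are lost.            */
    #include <pthread.h>
    #include <stdio.h>

    long counter = 0;                    /* shared data */

    void *increment(void *arg)
    {
        for (int i = 0; i < 1000000; i++)
            counter++;                   /* read-modify-write: not atomic */
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, increment, NULL);
        pthread_create(&t2, NULL, increment, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);  /* often less than 2000000 */
        return 0;
    }
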
Sections of a Program

 There are four essential sections:

1. Entry Section: The part of the process that decides whether a particular process may enter its critical section.

2. Critical Section: This part allows only one process at a time to enter and modify the shared variable.

3. Exit Section: The exit section allows the other processes that are waiting in the entry section to enter their critical sections. It also ensures that a process that has finished its execution leaves through this section.

4. Remainder Section: All other parts of the code, which are not in the critical, entry, or exit sections, are known as the remainder section. (A skeleton of this layout is sketched below.)
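
A minimal sketch of this layout, assuming the classic textbook loop structure; the entry and exit sections are left as labelled placeholders that later parts of the lecture (Peterson's solution, mutex locks, semaphores) fill in.

    /* Skeleton of the four sections of a program; the loop runs
       forever, mirroring the usual do-while(true) presentation.  */
    #include <stdbool.h>

    int shared_variable = 0;  /* data touched only inside the critical section */

    void process(void)
    {
        while (true) {
            /* Entry section: request permission to enter the critical section */

            /* Critical section: access and modify the shared variable */
            shared_variable++;

            /* Exit section: allow a waiting process to enter */

            /* Remainder section: all other work of the process */
        }
    }

    int main(void) { process(); return 0; }
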
What is the Critical Section Problem?

 A critical section is a segment of code which can be accessed by only a single process at a specific point in time.

 The section consists of shared data and resources that also need to be accessed by other processes.

 In the critical section, only a single process can execute at a time.

 Other processes, waiting to execute their critical sections, need to wait until the current process completes its execution.

Rules for Critical Section

 Any solution to the critical section problem must enforce all three rules:

1. Mutual Exclusion: Not more than one process can execute in its critical section at one time. (In practice this is often enforced with a mutex, a special type of binary semaphore used for controlling access to the shared resource, which may include a priority inheritance mechanism to avoid extended priority inversion problems.)

2. Progress: When no process is in the critical section and some process wants to enter, only those processes that are not in their remainder sections may take part in deciding which one enters next, and this decision must be made in a finite time.

3. Bounded Waiting: Once a process has made a request to enter its critical section, there is a bound on the number of times other processes may enter their critical sections before that request is granted; when the bound is reached, the waiting process must be allowed into its critical section.

END OF LECTURE

Peterson’s Solution
 Peterson’s solution is a classic software-based solution to the critical section problem.

 Peterson’s algorithm is used to synchronize two processes.

 It uses two variables, a boolean array flag of size 2 and an int variable turn, to accomplish this.

 Initially the flags are false.

 When a process wants to execute its critical section, it sets its flag to true and sets turn to the index of the other process.

 This means that the process wants to execute, but it will allow the other process to run first.

 The process performs busy waiting until the other process has finished its own critical section.

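
A hedged sketch of Peterson's solution in C, with the two processes shown as two Pthreads; the surrounding thread setup and the counter variable are illustrative additions, not part of the algorithm itself.

    /* Peterson's solution for two threads (indices 0 and 1).
       As the next slides note, plain loads and stores like these are not
       guaranteed to work on modern architectures without memory barriers. */
    #include <pthread.h>
    #include <stdio.h>

    int flag[2] = {0, 0};   /* flag[i] is true when process i wants to enter */
    int turn = 0;           /* index of the process being given priority     */
    long counter = 0;       /* shared data protected by the critical section */

    void enter_region(int self)
    {
        int other = 1 - self;
        flag[self] = 1;                      /* announce interest            */
        turn = other;                        /* let the other side go first  */
        while (flag[other] && turn == other)
            ;                                /* busy wait                    */
    }

    void leave_region(int self)
    {
        flag[self] = 0;                      /* done with the critical section */
    }

    void *worker(void *arg)
    {
        int self = *(int *)arg;
        for (int i = 0; i < 100000; i++) {
            enter_region(self);
            counter++;                       /* critical section */
            leave_region(self);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t[2];
        int id[2] = {0, 1};
        pthread_create(&t[0], NULL, worker, &id[0]);
        pthread_create(&t[1], NULL, worker, &id[1]);
        pthread_join(t[0], NULL);
        pthread_join(t[1], NULL);
        printf("counter = %ld\n", counter);
        return 0;
    }
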
Mutex Locks
 Software-based solutions such as Peterson’s are not guaranteed to work on modern computer architectures.

 The hardware-based solutions to the critical section problem presented in the book are complicated, as well as generally inaccessible to application programmers.

 Instead, operating-system designers build software tools to solve the critical-section problem.

 The simplest of these tools is the mutex lock.

 In fact, the term mutex is short for mutual exclusion.

 A mutex lock is used to protect critical regions and thus prevent race conditions.

 That is, a process must acquire the lock before entering a critical section; it releases the lock when it exits the critical section.

 The acquire() function acquires the lock, and the release() function releases the lock.
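
A short sketch of the same idea using the Pthreads mutex API, where pthread_mutex_lock() plays the role of acquire() and pthread_mutex_unlock() plays the role of release(); the counter example is illustrative.

    /* Mutex lock sketch: the lock is acquired before the critical
       section and released after it, preventing the race condition. */
    #include <pthread.h>
    #include <stdio.h>

    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    long counter = 0;                        /* shared data */

    void *increment(void *arg)
    {
        for (int i = 0; i < 1000000; i++) {
            pthread_mutex_lock(&lock);       /* acquire(): enter critical section */
            counter++;
            pthread_mutex_unlock(&lock);     /* release(): exit critical section  */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, increment, NULL);
        pthread_create(&t2, NULL, increment, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);  /* always 2000000 with the lock */
        return 0;
    }
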
Semaphores
 Mutex locks, as mentioned earlier, are generally considered the simplest of synchronization tools.

 Now we examine a more robust tool that can behave similarly to a mutex lock but can also provide more sophisticated ways for processes to synchronize their activities.

 A semaphore S is an integer variable that, apart from initialization, is accessed only through two standard atomic operations: wait() and signal().
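
As a rough illustration, the sketch below uses POSIX semaphores, where sem_wait() corresponds to wait() and sem_post() corresponds to signal(); initializing the semaphore to 1 makes it behave like a binary semaphore. The counter example is illustrative.

    /* Semaphore sketch: wait()/signal() realized as sem_wait()/sem_post(). */
    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    sem_t S;                                 /* the semaphore            */
    long counter = 0;                        /* shared data              */

    void *increment(void *arg)
    {
        for (int i = 0; i < 1000000; i++) {
            sem_wait(&S);                    /* wait(S): may block       */
            counter++;                       /* critical section         */
            sem_post(&S);                    /* signal(S): wake a waiter */
        }
        return NULL;
    }

    int main(void)
    {
        sem_init(&S, 0, 1);                  /* initial value 1: binary semaphore */

        pthread_t t1, t2;
        pthread_create(&t1, NULL, increment, NULL);
        pthread_create(&t2, NULL, increment, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);

        printf("counter = %ld\n", counter);
        sem_destroy(&S);
        return 0;
    }
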
