
Operating Systems 1

CS 241
Spring 2021

By
Marwa M. A. Elfattah

Main Reference
Operating System Concepts, Abraham Silberschatz,
10th Edition
Threads &
Concurrency
Concurrency vs. Parallelism
 Parallelism implies a system can perform more
than one task simultaneously
 Concurrency supports more than one task making
progress
• On a single processor, the CPU scheduler provides
the illusion of parallelism by rapidly switching
between processes, allowing each process to make
progress. Such processes run concurrently, but not
in parallel.
 Thus, it is possible to have concurrency without
parallelism.
Concurrency vs. Parallelism
 Concurrent execution on single-core system:

 Parallelism on a multi-core system:


Multicore Programming
 Multicore and multiprocessor systems put
pressure on programmers; challenges include:
• Dividing activities  finding independent tasks that
can run in parallel
• Data dependency  when one task depends on data
from another, programmers must ensure that the
execution of the tasks is synchronized
• Balance  tasks should perform nearly equal work
• Data splitting  data must be divided so that different
tasks can run on different cores
• Testing and debugging  when a program runs in
parallel on multiple cores, many different execution
paths are possible. Testing and debugging such concurrent
programs is inherently more difficult.
Multicore Programming - Type of parallelism
 Data parallelism – distributes subsets of the same
data across multiple cores, same operation on each
 Task parallelism – distributes threads across
cores, each thread performing a unique operation
Amdahl’s Law
 Identifies performance gains from adding additional
cores to an application that has both serial and
parallel components
 IF: S is serial portion, and N processing cores

Speedup ≤ 1 / (S + (1 − S) / N)
Amdahl’s Law
 IF: S is serial portion, and N processing cores

Speedup ≤ 1 / (S + (1 − S) / N)

 That is, if an application is 75% parallel and 25% serial:
• Having 2 cores results in a speedup of
1 / (0.25 + 0.75 / 2) = 1.6 times
• Having 3 cores results in a speedup of
1 / (0.25 + 0.75 / 3) = 2 times
Amdahl’s Law
 IF: S is serial portion, and N processing cores

Speedup ≤ 1 / (S + (1 − S) / N)

 As N approaches infinity, speedup approaches 1 / S
 The serial portion of an application has a
disproportionate effect on the performance gained
by adding additional cores
Threads - Overview
 Previously, it was assumed that a process was an
executing program with a single thread.
However, modern OSs provide features enabling a
process to contain multiple threads of control.
• Most modern applications are multithreaded
Threads run within application
Threads - Overview
 A process can have multiple threads of control,
 To perform more than one task at a time.
 Each thread belongs to exactly one process and no
thread can exist outside a process.
• A thread in process A cannot reference a thread
in process B.
 Threads are lightweight processes:
• Each has its own thread ID, program
counter (PC), register set, and stack
• But they share with the other threads of the same
process the code, data, and heap sections, and other OS
resources, such as open files, permissions…
Threads - Overview
 Because threads share the same address space,
• The operational cost of communication between
the threads is low, which is an advantage.
• The disadvantage is that a problem with one
thread in a process will certainly affect other
threads and the viability of the process itself.
Threads switching
 Thread switching is a type of context switching from
one thread to another thread in the same process.
 It is very efficient and much cheaper because
• It involves switching only per-thread state,
such as the program counter,
registers, and stack pointer.
• Process switching, in contrast, involves switching
all the process resources,
such as memory addresses, page tables,
kernel resources, and caches in the processor.
Motivation
 Applications can be designed to utilize processing
capabilities on multicore systems.
• can perform several CPU-intensive tasks in
parallel across the multiple computing cores.
 A single application may be required to perform
several tasks.
• a web server accepts thousands of clients
concurrently. If the web server ran as a single-
threaded process, it would be able to service only
one client at a time.
 Creating a new process is comparatively heavyweight.
Multithreaded Server Architecture
Examples Of Multithreaded Applications
 An application that creates photo thumbnails from a
collection of images may use a separate thread to
generate a thumbnail from each separate image.
 A web browser might have one thread display
images or text while another thread retrieves data
from the network.
 A word processor may have a thread for displaying
graphics, another thread for responding to
keystrokes from the user, and a third thread for
performing spelling and grammar checking in the
background.
In General
 The program starts out as a text file of code,
 The program is compiled or interpreted into binary,
 The program is loaded into memory,
 The program becomes one or more running
processes.
 Processes are typically independent of each other,
 While threads exist as the subset of a process.
 Threads can communicate with each other, and can
be switched more easily than processes can,
 But threads are more vulnerable to problems caused
by other threads in the same process.
Benefits
 Responsiveness – may allow continued execution
if part of process is blocked, especially important for
user interfaces
 Resource Sharing – threads share resources of
process, easier than shared memory or message
passing
 Economy – cheaper than process creation; thread
switching has lower overhead than process context switching
 Scalability – process can take advantage of
multicore architectures
Types of Threads
 User Level Threads − User managed threads.
 Kernel Level Threads − Operating System
managed threads acting on kernel.
Types of Threads
 User Level Threads − management done by user.
 Kernel Level Threads − Operating System
managed threads acting on kernel.
• Supported by almost all general - purpose
operating systems, including: Windows, Linux,
Mac OS X, iOS, Android
User Threads
 User-level threads are small and much faster than
kernel level threads.
• The application starts with a single thread.
• The thread library contains APIs for creating and
destroying threads, for passing message and
data between threads, for scheduling thread
execution and for saving and restoring thread
contexts.
User Threads
 Kernel is not aware of the existence of these
threads. It handles them as if they were single-
threaded processes.
• There is no kernel involvement in
synchronization for user-level threads.
• There are no kernel mode privileges required for
thread switching.
• User-level threads cannot use multiprocessing to their advantage.
• The entire process is blocked if one user-level
thread performs blocking operation.
Kernel Threads
 The application has no direct control over these
threads.
• The Kernel performs thread creation, scheduling,
switching and management in Kernel space,
which requires a mode switch to the Kernel.
 Kernel threads are generally slower to create and
manage than the user threads.
 Kernel threads are strongly implementation-
dependent. To facilitate the writing of portable
programs, libraries provide user threads.
 Kernel routines themselves can be multithreaded.
Kernel Threads
 A kernel thread is the schedulable entity, which
means scheduling by the Kernel is done on a
thread basis.
• Kernel can simultaneously schedule multiple
threads from the same process on multiple
processors.
• If one thread in a process is blocked, the Kernel
can schedule another thread of the same
process.
• The context information is all managed by the
kernel  generally slower.
Multithreading Models
 A relationship must exist between user threads and
kernel threads.
• Operating systems provide a combined user
level thread and Kernel level thread facility.
 Multithreading models are of three types:
• Many-to-One
• One-to-One
• Many-to-Many
Many-to-One
 Many user-level threads mapped to single kernel
thread
 Few systems currently use this model
because of its inability to take advantage of
multiple processing cores, which have now
become standard on most computer systems.
• Ex: GNU Portable Threads
One-to-One
 Each user-level thread maps to kernel thread
 Creating a user-level thread requires creating a
kernel thread
 More concurrency than many-to-one
 Number of threads per process sometimes
restricted due to overhead
• The developer has to be careful not to create too
many threads
 Examples
• Windows
• Linux
Many-to-Many Model
 Allows many user level threads to be mapped to a
smaller or equal number of kernel threads.
 Allows the operating system to create a sufficient
number of kernel threads.
 An application may be allocated more kernel
threads on a system with eight processing cores
than on a system with four cores
Many-to-Many Model
 Two-level model is a variation of the many-to-
many model, except that it allows a user thread to
be bound to a kernel thread
 Although the many-to-many model appears to be
the most flexible, it is difficult to implement.
• one-to-one model is more commonly used
Thread Libraries
 Thread library provides programmer with API for
creating and managing threads
 Two primary ways of implementing
• User-level library in user space
• Kernel-level library supported by the OS
Thread Libraries
 Three main thread libraries are in use today:
• POSIX Pthreads: provided as either a user-level
or a kernel-level library.
• Windows: a kernel-level library
• Java: Java thread API is generally implemented
using a thread library available on the host
system.
 Any Java program – even one consisting of only a
main() – comprises at least a single thread in the
JVM. Java threads are available on any system
that provides a JVM.
Implicit Threading
 Growing in popularity: as the number of threads
increases, ensuring program correctness becomes
more difficult with explicit threads
 Creation and management of threads done by
compilers and run-time libraries rather than
programmers
• The focus of the programmer is on writing the
algorithm rather than the multithreading.
Implicit Threading
 With implicit threading, creation and management of
threads is done by compilers and run-time libraries
rather than by programmers.
• The focus of the programmer is on writing the
algorithm rather than the multithreading.
EX: #pragma omp parallel  Create as many
threads as there are cores
Thank You
