Slide-3-OS Process and Threads


Operating Systems

Processes and Threads


Process
• Process - a program in execution; process execution must
progress in sequential fashion
• A process includes:
– program counter
– stack
– data section

2
Process in Memory
• A process has its own address space consisting of
  – Text region: stores the code that the processor executes
  – Data region: contains global variables
  – Stack region: stores local variables for active procedure calls,
    function parameters, return addresses
  – Heap: contains memory dynamically allocated during run time
(A small C example of these regions follows this slide.)

3
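
To make the four regions concrete, here is a small C sketch (not part of the original slides; the names are illustrative only) showing where typical objects live:

#include <stdio.h>
#include <stdlib.h>

int global_counter = 0;                      /* data region: a global variable */

int main(void)                               /* the compiled code itself sits in the text region */
{
    int local = 5;                           /* stack region: local variable of an active call */
    int *dynamic = malloc(sizeof *dynamic);  /* heap: memory allocated during run time */
    *dynamic = 10;

    printf("data  region: %p\n", (void *)&global_counter);
    printf("stack region: %p\n", (void *)&local);
    printf("heap  region: %p\n", (void *)dynamic);

    free(dynamic);
    return 0;
}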
Process State
• As a process executes, it changes state
  – new: The process is being created
  – running: Instructions are being executed
  – waiting: The process is waiting for some event to occur
  – ready: The process is waiting to be assigned to a processor
  – terminated: The process has finished execution
(The five states are sketched as a C enumeration after this slide.)
4
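
As a rough illustration (not from the slides; the names are hypothetical and real kernels use their own encodings), the five states can be written as a C enumeration:

enum process_state {
    STATE_NEW,          /* the process is being created */
    STATE_READY,        /* waiting to be assigned to a processor */
    STATE_RUNNING,      /* instructions are being executed */
    STATE_WAITING,      /* waiting for some event to occur */
    STATE_TERMINATED    /* the process has finished execution */
};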
Process Control Block (PCB)
Information associated with each
process

• Process state
• Program counter
• CPU registers
• CPU scheduling information
• Memory-management information
• Accounting information
• I/O status information

5
Process Control Block (PCB)
Information associated with each process

• PID: a process identification number assigned to each process
• Process state: the state of the process
• Program counter: holds the address of the next instruction that is to be
executed for the process
• CPU registers: include the accumulator, base registers and
general-purpose registers
• CPU scheduling information
• Memory-management information
• Accounting information
• I/O status information

6
Process Control Block (PCB)
Information associated with each process

• Pointer: a stack pointer that must be saved when the process is switched
from one state to another, to retain the current position of the process
• Memory limits: information about the memory-management system used by the
operating system; this may include page tables, segment tables, etc.
• Open files list: the list of files opened by the process.
(A sketch of a PCB as a C structure follows this slide.)

7
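
Putting the fields of slides 5-7 together, a PCB can be pictured as a C structure along the following lines. This is a minimal sketch; the field names and types are hypothetical and do not match any real kernel's layout:

struct pcb {
    int                 pid;              /* process identification number */
    enum process_state  state;            /* the enumeration sketched earlier */
    void               *program_counter;  /* address of the next instruction to execute */
    unsigned long       registers[16];    /* saved CPU registers (accumulator, base, general purpose) */
    void               *stack_pointer;    /* saved so the process can resume where it left off */
    int                 priority;         /* CPU-scheduling information */
    void               *page_table;       /* memory-management information (memory limits) */
    unsigned long       cpu_time_used;    /* accounting information */
    int                 open_files[16];   /* open files list / I/O status information */
    struct pcb         *next;             /* link used by the scheduling queues */
};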
Context Switch
• When CPU switches to another process, the system must
save the state of the old process and load the saved state
for the new process via a context switch.
• Context of a process represented in the PCB
• Context-switch time is overhead; the system does no
useful work while switching
– The more complex the OS and the PCB → longer the
context switch
• Time dependent on hardware support
– Some hardware provides multiple sets of registers per
CPU → multiple contexts loaded at once
8
CPU Switch From Process to Process

9
Process Scheduling

• Maximize CPU use, quickly switch processes onto CPU for
time sharing
• Process scheduler selects among available processes for next
execution on CPU
• Maintains scheduling queues of processes
– Job queue – set of all processes in the system
– Ready queue – set of all processes residing in main memory,
ready and waiting to execute
– Device queues – set of processes waiting for an I/O device
– Processes migrate among the various queues

10
Ready Queue And Various
I/O Device Queues

11
Representation of Process Scheduling

12
Schedulers
• Long-term scheduler (or job scheduler) – selects which
processes should be brought into the ready queue
• Short-term scheduler (or CPU scheduler) – selects which
process should be executed next and allocates CPU
– Sometimes the only scheduler in a system

13
Schedulers (Cont.)
• Short-term scheduler is invoked very frequently
(milliseconds) → must be fast
• Long-term scheduler is invoked very infrequently (seconds,
minutes) → may be slow
• The long-term scheduler controls the degree of
multiprogramming
• Processes can be described as either:
– I/O-bound process – spends more time doing I/O than
computations, many short CPU bursts
– CPU-bound process – spends more time doing computations;
few very long CPU bursts
14
Addition of Medium Term Scheduling

• A computer has a sufficient amount of physical memory
• But most of the time more memory is required, so some memory is swapped to disk
• Swap space is space on the hard disk that serves as a substitute for physical memory
• It is used as virtual memory and contains the process memory image
• Whenever the computer runs short of physical memory, it uses this virtual
memory and stores the information on disk
• Swap space helps the computer's operating system pretend that it has
more RAM than it actually has
• This interchange of data between virtual memory and real memory is called
swapping, and the space on disk is called "swap space"
15
Interprocess Communication

• Processes within a system may be independent or cooperating
• Cooperating process can affect or be affected by other
processes, including sharing data
• Reasons for cooperating processes:
– Information sharing
– Computation speedup
– Modularity
– Convenience
• Cooperating processes need interprocess communication (IPC)
• Two models of IPC
– Shared memory
– Message passing

16
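
As a hedged illustration of the shared-memory model (my sketch, not the slides'): two related processes map the same POSIX shared-memory object and then read and write it directly. The object name "/demo_shm" and the message are arbitrary; error checking is omitted, and some systems need -lrt when linking:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    const char *name = "/demo_shm";
    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);    /* create the shared object */
    ftruncate(fd, 4096);                                 /* give it a size */
    char *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);                 /* map it into the address space */

    if (fork() == 0) {                                   /* child: the "producer" */
        strcpy(buf, "hello through shared memory");
        return 0;
    }
    wait(NULL);                                          /* parent: the "consumer" */
    printf("parent read: %s\n", buf);

    munmap(buf, 4096);
    shm_unlink(name);                                    /* remove the shared object */
    return 0;
}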
Communications Models

17
Cooperating Processes
• Independent process cannot affect or be affected by the
execution of another process
• Cooperating process can affect or be affected by the execution
of another process
• Advantages of process cooperation
– Information sharing: several processes may need to access the
same data (such as data stored in a file)
– Computation speed-up: a task runs faster if it is broken into
subtasks and distributed among different processors or I/O devices
– Modularity: dividing the system function into separate
processes or threads
– Convenience: concurrent execution, with processes communicating
with each other and synchronizing their actions
18
Producer-Consumer Problem
• Paradigm for cooperating processes, producer process
produces information that is consumed by a consumer
process
– unbounded-buffer places no practical limit on the size of the
buffer
– bounded-buffer assumes that there is a fixed buffer size

19
Bounded-Buffer –
Shared-Memory Solution
• Shared data
#define BUFFER_SIZE 10
typedef struct {
...
} item;

item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;

• Solution is correct, but can only use BUFFER_SIZE-1 elements:
one slot is kept empty so that in == out unambiguously means the
buffer is empty rather than full

20
Bounded-Buffer – Producer

while (true) {
      /* Produce an item */
      while (((in + 1) % BUFFER_SIZE) == out)
            ; /* do nothing -- no free buffers */
      buffer[in] = item;
      in = (in + 1) % BUFFER_SIZE;
}

21
Bounded Buffer – Consumer
while (true) {
      while (in == out)
            ; // do nothing -- nothing to consume

      // remove an item from the buffer
      item = buffer[out];
      out = (out + 1) % BUFFER_SIZE;
      return item;
}

22
Interprocess Communication –
Message Passing
• Mechanism for processes to communicate and to synchronize
their actions
• Message system – processes communicate with each other
without resorting to shared variables
• IPC facility provides two operations:
– send(message) – message size fixed or variable
– receive(message)
• If P and Q wish to communicate, they need to:
– establish a communication link between them
– exchange messages via send/receive
• Implementation of communication link
– physical (e.g., shared memory, hardware bus)
– logical (e.g., logical properties, pipe())
23
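
A minimal sketch (not from the slides) of message passing over a pipe(), with the parent acting as P (receiver) and the child as Q (sender); error checking is omitted:

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    pipe(fd);                                  /* fd[0] is the read end, fd[1] the write end */

    if (fork() == 0) {                         /* child: process Q, the sender */
        close(fd[0]);
        const char *msg = "hello from Q";
        write(fd[1], msg, strlen(msg) + 1);    /* send(message) */
        close(fd[1]);
        return 0;
    }

    close(fd[1]);                              /* parent: process P, the receiver */
    char buf[64];
    read(fd[0], buf, sizeof buf);              /* receive(message) */
    close(fd[0]);
    wait(NULL);
    printf("P received: %s\n", buf);
    return 0;
}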
Threads
• A thread is a path/flow of execution within a process, with its own
– program counter,
– system registers and
– stack.
• Threads have many of the same properties as a process, so they are
called lightweight processes.
• Threads provide a way to improve application performance through
parallelism
• Threads represent a software approach to improve performance of
operating system by reducing the overhead
• For example,
• in a browser, multiple tabs can be different threads.
• MS Word uses multiple threads: one thread to format the text,
another thread to process inputs, etc.
24
Threads (Contd…)
• Each thread belongs to exactly one process
• No thread can exist outside a process
• Each thread represents a separate flow of control
• Threads have been successfully used in implementing network
servers and web servers
• Provide a suitable foundation for parallel execution of
applications on shared memory multiprocessors.

25
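
To illustrate the idea (a sketch only, not from the slides): with POSIX threads, two flows of execution run inside one process, each with its own stack but sharing the data region. Compile with -pthread:

#include <pthread.h>
#include <stdio.h>

int shared_value = 100;                 /* data region: visible to every thread of the process */

void *worker(void *arg)
{
    int id = *(int *)arg;               /* stack: private to this thread */
    printf("thread %d sees shared_value = %d\n", id, shared_value);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    int id1 = 1, id2 = 2;
    pthread_create(&t1, NULL, worker, &id1);
    pthread_create(&t2, NULL, worker, &id2);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}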
Single and Multithreaded Process

26
Difference between Process and Thread

• A process is heavy weight or resource intensive; a thread is light weight,
taking fewer resources than a process.
• Process switching needs interaction with the operating system; thread
switching does not need to interact with the operating system.
• In multiple processing environments each process executes the same code
but has its own memory and file resources; all threads of a process can
share the same set of open files and child processes.

27
Difference between Process and Thread

• If a process gets blocked, the remaining processes can continue execution;
if a user-level thread gets blocked, all of its peer threads also get blocked.
• Multiple processes without using threads use more resources; multithreaded
processes use fewer resources.
• In multiple processes each process operates independently of the others;
one thread can read, write or change another thread's data.

28
Advantages of Thread
• Threads minimize context-switching time
• Use of threads provides concurrency within a process
• Efficient communication
• Economy: it is more economical to create and context-switch
threads
• Utilization of multiprocessor architectures to a greater
scale and efficiency

29
Types of Thread
• Threads are implemented in the following two ways
• User Level Threads -- user-managed threads
• Kernel Level Threads -- operating-system-managed threads
acting on the kernel, the operating system core.

30
User Level Threads
• In this case, the application manages thread management;
the kernel is not aware of the existence of threads.
• The thread library contains code for creating and
destroying threads, for passing messages and data between
threads, for scheduling thread execution and for saving
and restoring thread contexts.
• The application begins with a single thread and begins
running in that thread.

31
User Level Threads
• Advantages
– Thread switching does not require Kernel mode privileges.
– User level thread can run on any operating system.
– Scheduling can be application specific in the user level
thread.
– User level threads are fast to create and manage.
• Disadvantages
– In a typical operating system, most system calls are
blocking.
– Multithreaded application cannot take advantage of
multiprocessing.

32
Kernel Level Threads
• In this case, thread management done by the Kernel. There is
no thread management code in the application area. Kernel
threads are supported directly by the operating system. Any
application can be programmed to be multithreaded. All of the
threads within an application are supported within a single
process.
• The Kernel maintains context information for the process as a
whole and for individual threads within the process.
Scheduling by the Kernel is done on a thread basis. The
Kernel performs thread creation, scheduling and management
in Kernel space. Kernel threads are generally slower to create
and manage than the user threads.

33
Kernel Level Threads
• Advantages
– Kernel can simultaneously schedule multiple threads from the
same process on multiple processors.
– If one thread in a process is blocked, the Kernel can schedule
another thread of the same process.
– Kernel routines themselves can be multithreaded.
• Disadvantages
– Kernel threads are generally slower to create and manage than
the user threads.
– Transfer of control from one thread to another within same
process requires a mode switch to the Kernel.

34
Multithreading Models
• Some operating systems provide a combined user-level thread
and Kernel-level thread facility. Solaris is a good example of
this combined approach.
• In a combined system, multiple threads within the same
application can run in parallel on multiple processors and a
blocking system call need not block the entire process.
• There are three types of multithreading models:
• Many-to-many relationship.
• Many-to-one relationship.
• One-to-one relationship.

35
Multithreading Models
• Many-to-One: map many user-level threads to one kernel
thread

• One-to-One: map each user-level thread to a kernel thread

• Many-to-Many: multiplexes many user-level threads to a
smaller or equal number of kernel threads

36
Many-to-One Model
• Many to one model maps many user level threads to one
Kernel level thread
• Thread management is done in user space
• When a thread makes a blocking system call, the entire process
will be blocked
• Only one thread can access the Kernel at a time, so multiple
threads are unable to run in parallel on multiprocessors.
• If the user-level thread libraries are implemented in the
operating system in such a way that the system does not support
them, then the kernel threads use the many-to-one relationship
mode.
37
Many-to-One Model

• Many user-level threads mapped to a single kernel thread
• Examples:
  – Solaris Green Threads
  – GNU Portable Threads

38
One-to-One
• Each user-level thread maps to kernel thread
• Examples
– Windows NT/XP/2000
– Linux
– Solaris 9 and later

39
One-to-One
• There is a one-to-one relationship of user-level thread to
kernel-level thread
• This model provides more concurrency than the many-to-one
model
• It also allows another thread to run when a thread makes a blocking
system call
• It supports multiple threads executing in parallel on
multiprocessors
• A disadvantage of this model is that creating a user thread requires
creating the corresponding kernel thread
• OS/2, Windows NT and Windows 2000 use the one-to-one
relationship model.

40
Many-to-Many Model
• Allows many user level threads
to be mapped to many kernel
threads
• Allows the operating system to
create a sufficient number of
kernel threads
• Solaris prior to version 9
• Windows NT/2000 with the
ThreadFiber package

41
Many-to-Many Model
• In this model, many user-level threads are multiplexed onto a
smaller or equal number of kernel threads
• The number of kernel threads may be specific to either a
particular application or a particular machine.
• In this model, developers can create as many user threads
as necessary and the corresponding kernel threads can
run in parallel on a multiprocessor.

42
Two-level Model
• Similar to M:M, except that it allows a user thread
to be bound to a kernel thread
• Examples
– IRIX
– HP-UX
– Tru64 UNIX
– Solaris 8 and earlier

43
Difference between User Level &
Kernel Level Thread

• User-level threads are faster to create and manage; kernel-level threads
are slower to create and manage.
• User-level threads are implemented by a thread library at the user level;
the operating system supports the creation of kernel threads.
• A user-level thread is generic and can run on any operating system;
a kernel-level thread is specific to the operating system.
• A multithreaded application using user-level threads cannot take advantage
of multiprocessing; kernel routines themselves can be multithreaded.

44
User-Level Threads
• Thread management done by user-level threads library

• Examples
– POSIX Pthreads
– Mach C-threads
– Solaris threads
– Java threads

45
User-Level Threads
• Thread library entirely executed in user mode
• Cheap to manage threads
– Create: setup a stack
– Destroy: free up memory
• Context switch requires few instructions
– Just save CPU registers
– Done based on program logic
• A blocking system call blocks all peer threads

46
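
A rough sketch of a context switch "done based on program logic": the portable ucontext API lets a program save and restore register contexts entirely in user mode. This only illustrates the idea behind user-level threads, not how any particular thread library is implemented (works on Linux; the API is deprecated on some other systems):

#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, thread_ctx;
static char thread_stack[64 * 1024];            /* each user-level thread owns a stack */

static void thread_fn(void)
{
    printf("user-level thread: running\n");
    swapcontext(&thread_ctx, &main_ctx);        /* yield: save our registers, resume main */
    printf("user-level thread: resumed\n");
}                                               /* returning transfers control to uc_link */

int main(void)
{
    getcontext(&thread_ctx);
    thread_ctx.uc_stack.ss_sp   = thread_stack;
    thread_ctx.uc_stack.ss_size = sizeof thread_stack;
    thread_ctx.uc_link          = &main_ctx;    /* where to go when thread_fn returns */
    makecontext(&thread_ctx, thread_fn, 0);

    swapcontext(&main_ctx, &thread_ctx);        /* switch in user space: just registers */
    printf("main: the thread yielded\n");
    swapcontext(&main_ctx, &thread_ctx);        /* resume it; it finishes and returns here */
    printf("main: the thread finished\n");
    return 0;
}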
Kernel-Level Threads
• Kernel is aware of and schedules threads
• A blocking system call will not block all peer threads
• Expensive to manage threads
• Expensive context switch
• Kernel intervention

47
Kernel Threads
• Supported by the Kernel
• Examples: newer versions of
  – Windows
  – UNIX
  – Linux

48
Linux Threads
• Linux refers to them as tasks rather than threads.
• Thread creation is done through clone() system call.
• Unlike fork(), clone() allows a child task to share the
address space of the parent task (process)

49
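
A hedged, Linux-specific sketch of clone() with CLONE_VM: the child task shares the parent's address space, so a write made by the child is visible to the parent. Flags and stack handling follow the usual clone(2) pattern; error checking is minimal:

#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>

static int shared = 0;                       /* lives in the parent's data region */

static int child_fn(void *arg)
{
    shared = 42;                             /* visible to the parent because of CLONE_VM */
    return 0;
}

int main(void)
{
    const size_t STACK_SIZE = 1024 * 1024;
    char *stack = malloc(STACK_SIZE);        /* the child task needs its own stack */

    /* Unlike fork(), CLONE_VM makes the child share the parent's address space */
    pid_t pid = clone(child_fn, stack + STACK_SIZE, CLONE_VM | SIGCHLD, NULL);
    if (pid == -1) { perror("clone"); return 1; }

    waitpid(pid, NULL, 0);                   /* SIGCHLD makes the task waitable like a child */
    printf("shared = %d\n", shared);         /* prints 42: the write landed in shared memory */
    free(stack);
    return 0;
}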
Pthreads
• A POSIX standard (IEEE 1003.1c) API for thread creation
and synchronization.
• API specifies behavior of the thread library; implementation
is up to the developers of the library.
• POSIX Pthreads - may be provided as either a user or kernel
library, as an extension to the POSIX standard.
• Common in UNIX operating systems.

50
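
A minimal Pthreads sketch (illustration only): create a thread, pass it an argument, and collect its result with pthread_join. Compile with -pthread; the long/void* casts are a common idiom rather than part of the standard API:

#include <pthread.h>
#include <stdio.h>

void *square(void *arg)
{
    long n = (long)arg;
    return (void *)(n * n);                  /* result handed back through pthread_join */
}

int main(void)
{
    pthread_t tid;
    void *result;
    pthread_create(&tid, NULL, square, (void *)7L);
    pthread_join(tid, &result);              /* wait for the thread to finish */
    printf("7 squared is %ld\n", (long)result);
    return 0;
}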
LWP Advantages
• Cheap user-level thread management
• A blocking system call will not suspend the whole process
• LWPs are transparent to the application
• LWPs can be easily mapped to different CPUs

51
