
Operating Systems

By
Prof. Sagar D. Korde
Department of Information Technology
K J Somaiya College of Engineering, Mumbai-77
(Constituent college of Somaiya Vidyavihar University)

9/3/2020 Silberschatz A., Galvin P., Gagne G, “Operating Systems Concepts”, VIIIth Edition, Wiley, 2011. 1
Course Outcomes:
At the end of successful completion of the module a student will be able to
CO2: Demonstrate use of inter process communication.
Module 2: Process Management
2.1 Processes: Process Concept, process creation, suspension and termination, Process States: 2-, 5-, 7-
state models, Process Description, Process Control Block.
2.2 Threads: Multithreading models, Thread implementations – user level and kernel level threads,
Symmetric Multiprocessing.
2.3 Uniprocessor Scheduling: Scheduling Criteria, Types of Scheduling: Preemptive, Non-preemptive,
Long-term, Medium- term, Short-term schedulers. Scheduling Algorithms: FCFS, SJF, SRTF,RR, Priority.
2.4 Multiprocessor Scheduling: Granularity, Design Issues, Process Scheduling. Thread Scheduling,
Real Time Scheduling
2.5 Process Security

2.1 Processes: Outline

• Process Concept
• Process Scheduling
• Operations on Processes
• Interprocess Communication
• Examples of IPC Systems
• Communication in Client-Server Systems

Process Concept
• An operating system executes a variety of programs:
o Batch system – jobs
o Time-shared systems – user programs or tasks

• Textbooks use the terms job and process almost interchangeably

• Process – a program in execution; process execution must progress in
sequential fashion

• A process includes:
o program counter
o stack
o data section

The Process
• Multiple parts
o The program code, also called text section
o Current activity including program counter, processor registers
o Stack containing temporary data
−Function parameters, return addresses, local variables
o Data section containing global variables
o Heap containing memory dynamically allocated during run time
• Program is passive entity, process is active
o Program becomes process when executable file loaded into memory
• Execution of program started via GUI mouse clicks, command line entry of its name, etc
• One program can be several processes
o Consider multiple users executing the same program

Process in Memory

Process State
• As a process executes, it changes state
o new: The process is being created
o running: Instructions are being executed
o waiting: The process is waiting for some event to occur
o ready: The process is waiting to be assigned to a processor
o terminated: The process has finished execution

Diagram of Process State

Process Control Block (PCB)
Information associated with each process
• Process state
• Program counter
• CPU registers
• CPU scheduling information
• Memory-management information
• Accounting information
• I/O status information

Process Control Block (PCB)

CPU Switch From Process to Process

Process Scheduling

• Maximize CPU use, quickly switch processes onto CPU for time sharing
• Process scheduler selects among available processes for next execution on
CPU
• Maintains scheduling queues of processes
o Job queue – set of all processes in the system
o Ready queue – set of all processes residing in main memory, ready
and waiting to execute
o Device queues – set of processes waiting for an I/O device
o Processes migrate among the various queues

Process Representation in Linux

• Represented by the C structure task_struct


pid_t pid; /* process identifier */
long state; /* state of the process */
unsigned int time_slice; /* scheduling information */
struct task_struct *parent; /* this process's parent */
struct list_head children; /* this process's children */
struct files_struct *files; /* list of open files */
struct mm_struct *mm; /* address space of this process */

Ready Queue And Various
I/O Device Queues

Representation of Process Scheduling

Schedulers

• Long-term scheduler (or job scheduler) – selects which processes should be
brought into the ready queue
• Short-term scheduler (or CPU scheduler) – selects which process should be
executed next and allocates CPU
o Sometimes the only scheduler in a system

Schedulers (Cont.)

• Short-term scheduler is invoked very frequently (milliseconds) ⇒ must be fast
• Long-term scheduler is invoked very infrequently (seconds, minutes) ⇒ may be slow
• The long-term scheduler controls the degree of multiprogramming
• Processes can be described as either:
o I/O-bound process – spends more time doing I/O than
computations, many short CPU bursts
o CPU-bound process – spends more time doing computations;
few very long CPU bursts

Addition of Medium Term Scheduling

Context Switch
• When CPU switches to another process, the system must save the state of the old
process and load the saved state for the new process via a context switch.
• Context of a process represented in the PCB
• Context-switch time is overhead; the system does no useful work while
switching
o The more complex the OS and the PCB -> longer the context switch
• Time dependent on hardware support
o Some hardware provides multiple sets of registers per CPU -> multiple
contexts loaded at once

Process Creation
• A parent process creates child processes, which, in turn, create other processes,
forming a tree of processes
• Generally, process identified and managed via a process identifier (pid)
• Resource sharing
o Parent and children share all resources
o Children share subset of parent’s resources
o Parent and child share no resources
• Execution
o Parent and children execute concurrently
o Parent waits until children terminate

Process Creation (Cont.)
• Address space
o Child duplicate of parent
o Child has a program loaded into it
• UNIX examples
o fork system call creates new process
o exec system call used after a fork to replace the process’ memory
space with a new program

Process Creation

C Program Forking Separate Process
#include <stdio.h> #include <stdio.h> #include <stdio.h>
#include <sys/types.h> #include <sys/types.h> #include <ctype.h>
#include <unistd.h> int main() #include <limits.h>
int main() { #include <string.h>
{ fork(); #include <stdlib.h>
fork(); #include <unistd.h>
// make two process which run same fork();
// program after this instruction printf("hello\n"); int main()
fork(); return 0; {
} int pid;
printf("Hello world!\n"); pid=fork();
return 0;
} if (pid==0) {
printf("I am the child\n");
printf("my pid=%d\n", getpid());
}

return 0;
}

A Tree of Processes

Process Termination
• Process executes last statement and asks the operating system to delete it (exit)
o Output data from child to parent (via wait)
o Process’ resources are deallocated by operating system
• Parent may terminate execution of children processes (abort)
o Child has exceeded allocated resources
o Task assigned to child is no longer required
o If parent is exiting
−Some operating systems do not allow a child to continue if its parent
terminates
• All children terminated - cascading termination

Interprocess Communication
• Processes within a system may be independent or cooperating
• Cooperating process can affect or be affected by other processes, including
sharing data
• Reasons for cooperating processes:
o Information sharing
o Computation speedup
o Modularity
o Convenience
• Cooperating processes need interprocess communication (IPC)
• Two models of IPC
o Shared memory
o Message passing

Communications Models

Cooperating Processes
• Independent process cannot affect or be affected by the execution of another
process
• Cooperating process can affect or be affected by the execution of another
process
• Advantages of process cooperation
o Information sharing
o Computation speed-up
o Modularity
o Convenience

Producer-Consumer Problem

• Paradigm for cooperating processes: a producer process produces
information that is consumed by a consumer process
o unbounded-buffer places no practical limit on the size of the
buffer
o bounded-buffer assumes that there is a fixed buffer size

Bounded-Buffer –
Shared-Memory Solution
• Shared data
#define BUFFER_SIZE 10
typedef struct {
...
} item;
item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;

• Solution is correct, but can only use BUFFER_SIZE-1 elements
Bounded-Buffer – Producer

while (true) {
    /* produce an item in next_produced */
    while (((in + 1) % BUFFER_SIZE) == out)
        ; /* do nothing -- no free buffers */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
}

Bounded Buffer – Consumer

while (true) {
    while (in == out)
        ; /* do nothing -- nothing to consume */

    /* remove an item from the buffer */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    /* consume the item in next_consumed */
}

Interprocess Communication –
Message Passing
• Mechanism for processes to communicate and to synchronize their
actions
• Message system – processes communicate with each other without
resorting to shared variables
• IPC facility provides two operations:
o send(message) – message size fixed or variable
o receive(message)
• If P and Q wish to communicate, they need to:
o establish a communication link between them
o exchange messages via send/receive
• Implementation of communication link
o physical (e.g., shared memory, hardware bus)
o logical (e.g., logical properties)

Implementation Questions
• How are links established?
• Can a link be associated with more than two processes?
• How many links can there be between every pair of communicating
processes?
• What is the capacity of a link?
• Is the size of a message that the link can accommodate fixed or
variable?
• Is a link unidirectional or bi-directional?

Direct Communication
• Processes must name each other explicitly:
o send (P, message) – send a message to process P
o receive(Q, message) – receive a message from process Q

• Properties of communication link


o Links are established automatically
o A link is associated with exactly one pair of communicating processes
o Between each pair there exists exactly one link
o The link may be unidirectional, but is usually bi-directional

Indirect Communication
• Messages are directed and received from mailboxes (also referred to as ports)
o Each mailbox has a unique id
o Processes can communicate only if they share a mailbox

• Properties of communication link


o Link established only if processes share a common mailbox
o A link may be associated with many processes
o Each pair of processes may share several communication links
o Link may be unidirectional or bi-directional

Indirect Communication

• Operations
o create a new mailbox
o send and receive messages through mailbox
o destroy a mailbox

• Primitives are defined as:


send(A, message) – send a message to mailbox A
receive(A, message) – receive a message from mailbox A

Indirect Communication
• Mailbox sharing
o P1, P2, and P3 share mailbox A
o P1 sends; P2 and P3 receive
o Who gets the message?

• Solutions
o Allow a link to be associated with at most two processes
o Allow only one process at a time to execute a receive operation
o Allow the system to select arbitrarily the receiver. Sender is notified who
the receiver was.

Synchronization

• Message passing may be either blocking or non-blocking

• Blocking is considered synchronous


o Blocking send has the sender block until the message is received
o Blocking receive has the receiver block until a message is available

• Non-blocking is considered asynchronous


o Non-blocking send has the sender send the message and continue
o Non-blocking receive has the receiver receive a valid message or null

Buffering

• Queue of messages attached to the link; implemented in one of


three ways
1. Zero capacity – 0 messages
Sender must wait for receiver (rendezvous)
2. Bounded capacity – finite length of n messages
Sender must wait if link full
3. Unbounded capacity – infinite length
Sender never waits

Examples of IPC Systems - POSIX
• POSIX Shared Memory
o Process first creates shared memory segment
segment_id = shmget(IPC_PRIVATE, size, S_IRUSR | S_IWUSR);
o Process wanting access to that shared memory must attach to it
shared_memory = (char *) shmat(segment_id, NULL, 0);
o Now the process could write to the shared memory
sprintf(shared_memory, "Writing to shared memory");
o When done a process can detach the shared memory from its address space
shmdt(shared_memory);

Examples of IPC Systems - Mach

• Mach communication is message based


o Even system calls are messages
o Each task gets two mailboxes at creation – Kernel and Notify
o Only three system calls needed for message transfer
msg_send(), msg_receive(), msg_rpc()
o Mailboxes needed for communication, created via
port_allocate()

Examples of IPC Systems – Windows XP

• Message-passing centric via local procedure call (LPC) facility


o Only works between processes on the same system
o Uses ports (like mailboxes) to establish and maintain communication
channels
o Communication works as follows:
−The client opens a handle to the subsystem’s connection port object.
−The client sends a connection request.
−The server creates two private communication ports and returns the handle to one of
them to the client.
−The client and server use the corresponding port handle to send messages or
callbacks and to listen for replies.

Local Procedure Calls in Windows XP

Communications in Client-Server Systems

• Sockets

• Remote Procedure Calls

• Pipes

• Remote Method Invocation

Sockets

• A socket is defined as an endpoint for communication

• Concatenation of IP address and port

• The socket 161.25.19.8:1625 refers to port 1625 on host 161.25.19.8

• Communication takes place between a pair of sockets

Socket Communication

Remote Procedure Calls

• Remote procedure call (RPC) abstracts procedure calls between processes
on networked systems

• Stubs – client-side proxy for the actual procedure on the server

• The client-side stub locates the server and marshalls the parameters

• The server-side stub receives this message, unpacks the marshalled
parameters, and performs the procedure on the server

Execution of RPC

Pipes

• Acts as a conduit allowing two processes to communicate

• Issues
o Is communication unidirectional or bidirectional?
o In the case of two-way communication, is it half or full-duplex?
o Must there exist a relationship (i.e. parent-child) between the
communicating processes?
o Can the pipes be used over a network?

Ordinary Pipes
• Ordinary Pipes allow communication in standard producer-
consumer style

• Producer writes to one end (the write-end of the pipe)

• Consumer reads from the other end (the read-end of the pipe)

• Ordinary pipes are therefore unidirectional

• Require parent-child relationship between communicating processes

Ordinary Pipes

Named Pipes
• Named Pipes are more powerful than ordinary pipes

• Communication is bidirectional

• No parent-child relationship is necessary between the communicating


processes

• Several processes can use the named pipe for communication

• Provided on both UNIX and Windows systems

2.2 Threads : Outline
• Overview
• Multithreading Models
• Thread Libraries
• Threading Issues
• Operating System Examples
• Windows XP Threads
• Linux Threads

Thread Overview
• Threads are mechanisms that permit an application to perform multiple tasks
concurrently.

• Thread is a basic unit of CPU utilization


o Thread ID
o Program counter
o Register set
o Stack

• A single program can contain multiple threads


o Threads share with other threads belonging to the same process
−Code, data, open files…….

Single and Multithreaded Processes

A traditional (heavyweight) process has a single thread of control; a
multithreaded (lightweight) process contains several. (Figure: heavyweight
process vs. lightweight process.)
Threads in Memory
Memory is allocated for a process in segments or parts. From low to high
virtual address: text (program code), initialized data, un-initialized data,
heap, then a separate stack for each thread (thread 3, thread 2, thread 1),
the stack for the main thread, and finally argv and the environment. Each
thread executes at its own point in the shared text segment.
Threads
Threads share….
• Global memory
• Process ID and parent process ID
• Controlling terminal
• Process credentials (user)
• Open file information
• Timers

Thread-specific attributes….
 Thread ID
 Thread-specific data
 CPU affinity
 Stack (local variables and function call linkage information)
Benefits
 Responsiveness
Interactive application can delegate background functions to a thread and keep running

 Resource Sharing
Several different threads can access the same address space

 Economy
Allocating memory and new processes is costly. Threads are much ‘cheaper’ to initiate.

 Scalability
Use threads to take advantage of multiprocessor architecture

Multithreaded Server Architecture

(Figure: a multithreaded server creates a worker thread to service each
incoming client request, then resumes listening for additional requests.)
Multicore Programming

Concurrent Execution on a Single-core System

Parallel Execution on a Multicore System

Multicore Programming
• Multicore systems putting pressure on programmers, challenges include
o Dividing activities
−What tasks can be separated to run on different processors
o Balance
−Balance work on all processors
o Data splitting
−Separate data to run with the tasks
o Data dependency
−Watch for dependences between tasks
o Testing and debugging
−Harder!!!!

Threads Assist Multicore Programming

(Figure: the threads of one process can be scheduled on different CPUs of a
multicore system, executing in parallel.)
Multithreading Models
Support provided at either

 User level -> user threads


Supported above the kernel and managed without kernel support

 Kernel level -> kernel threads


Supported and managed directly by the operating system

What is the relationship between user and kernel threads?

User Threads
• Thread management done by user-level threads library

• Three primary thread libraries:


o POSIX Pthreads
o Win32 threads
o Java threads

Kernel Threads
• Supported by the Kernel

• Examples
o Windows XP/2000
o Solaris
o Linux
o Tru64 UNIX
o Mac OS X

Multithreading Models

User Thread – to - Kernel Thread

• Many-to-One

• One-to-One

• Many-to-Many

Many-to-One

Many user-level threads mapped to a single kernel thread

One-to-One

Each user-level thread maps to a kernel thread

Examples
 Windows NT/XP/2000
 Linux

Many-to-Many Model

Allows many user-level threads to be mapped to many kernel threads

Allows the operating system to create a sufficient number of kernel threads

Example
 Windows NT/2000 with the ThreadFiber package

Thread Libraries

• Thread library provides programmer with API for creating and managing
threads

• Two primary ways of implementing


o Library entirely in user space
o Kernel-level library supported by the OS

• Three main thread libraries in use today:


o POSIX Pthreads
o Win32
o Java

Thread Libraries

• Three main thread libraries in use today:


o POSIX Pthreads
−May be provided either as user-level or kernel-level
−A POSIX standard (IEEE 1003.1c) API for thread creation and synchronization
−API specifies behavior of the thread library; implementation is up to the developers of the
library
o Win32
−Kernel-level library on Windows system
o Java
−Java threads are managed by the JVM
−Typically implemented using the threads model provided by underlying OS

POSIX Compilation on Linux

On Linux, programs that use the Pthreads API must be compiled with
-pthread or -lpthread

POSIX: Thread Creation

#include <pthread.h>

pthread_create (thread, attr, start_routine, arg)

returns : 0 on success, some error code on failure.

POSIX: Thread ID

#include <pthread.h>

pthread_t pthread_self()

returns : ID of current (this) thread

POSIX: Wait for Thread Completion
#include <pthread.h>

pthread_join (thread, NULL)

 returns : 0 on success, some error code on failure.

POSIX: Thread Termination

#include <pthread.h>

void pthread_exit (return_value)

Threads terminate in one of the following ways:


 The thread's start function performs a return specifying a return value for the thread.
 Thread receives a request asking it to terminate using pthread_cancel()
 Thread initiates termination pthread_exit()
 Main process terminates

Thread Cancellation

Terminating a thread before it has finished

Having made the cancellation request, pthread_cancel() returns
immediately; that is, it does not wait for the target thread to terminate.

So, what happens to the target thread?


What happens and when it happens depends on the thread’s
cancellation state and type.

Thread Cancellation

Thread attributes that indicate cancellation state and type

Two general states


 Thread cannot be cancelled.
−PTHREAD_CANCEL_DISABLE

 Thread can be cancelled


−PTHREAD_CANCEL_ENABLE
−Default

Thread Cancellation

When thread is cancelable, there are two general types

 Asynchronous cancellation terminates the target thread immediately
−PTHREAD_CANCEL_ASYNCHRONOUS

 Deferred cancellation allows the target thread to periodically check if it
should be cancelled
−PTHREAD_CANCEL_DEFERRED
−Cancel when thread reaches a 'cancellation point'

Thread Termination: Cleanup Handlers

Functions automatically executed when the thread is canceled.

Clean up global variables


Unlock any locks or data held by the thread
Close files
Commit or rollback transactions

Add and remove cleanup handlers based on thread logic


pthread_cleanup_push()
pthread_cleanup_pop()

Threading Issues

• Semantics of fork() and exec() system calls


• Signal handling
• Thread pools
• Thread safety
• Thread-specific data

Semantics of fork() ,exec(), exit()
Does fork() duplicate only the calling thread or all threads?

Threads and exec()


With exec(), the calling program is replaced in memory. All threads, except the one
calling exec(), vanish immediately. No thread-specific data destructors or cleanup
handlers are executed.

Threads and exit()


If any thread calls exit() or the main thread does a return, ALL threads immediately
vanish. No thread-specific data destructors or cleanup handlers are executed.

Semantics of fork() ,exec(), exit()
 Threads and fork()
When a multithreaded process calls fork(), only the calling thread is replicated. All other threads
vanish in the child. No thread-specific data destructors or cleanup handlers are executed.

Problems:
 The global data may be inconsistent:
−Was another thread in the process of updating it?
−Data or critical code may be locked by another thread. That lock is copied into child process, too.
 Memory leaks
 Thread (other) specific data not available

Recommendation: In a multithreaded application, use fork() only when it is immediately followed by exec()

Signal Handling

• Signals are used in UNIX systems to notify a process that a


particular event has occurred
• A signal handler is used to process signals
1. Signal is generated by particular event
2. Signal is delivered to a process
3. Signal is handled
• Options:
o Deliver the signal to the thread to which the signal applies
o Deliver the signal to every thread in the process
o Deliver the signal to certain threads in the process
o Assign a specific thread to receive all signals for the process

Thread Pools

• Create a number of threads in a pool where they await work


• Advantages:
o Usually slightly faster to service a request with an existing thread than to
create a new thread
o Allows the number of threads in the application(s) to be bound to the size
of the pool

Thread Safety
A function is thread-safe if it can be safely invoked by multiple threads at
the same time.
Example of a function that is not thread-safe:
static int glob = 0;

static void Incr (int loops)
{
    int loc, j;
    for (j = 0; j < loops; j++) {
        loc = glob;
        loc++;
        glob = loc;
    }
}
Employs global or static values that are shared by all threads
Thread Safety
Ways to render a function thread safe

 Serialize the function


 Lock the code to keep other threads out
 Only one thread can be in the sensitive code at a time
 Lock only the critical sections of code
 Only let one thread update the variable at a time.
 Use only thread safe system functions
 Make function reentrant
 Avoid the use of global and static variables
 Any information required to be safe is stored in buffers allocated by the caller

Thread Specific Data

• Makes existing functions thread-safe.


o May be slightly less efficient than being reentrant

• Allows each thread to have its own copy of data


o Provides per-thread storage for a function

• Useful when you do not have control over the thread creation
process (i.e., when using a thread pool)

POSIX Compilation
On Linux, programs that use the Pthreads API must be compiled with
-pthread. The effects of this option include the following:

 _REENTRANT preprocessor macro is defined. This causes the declaration of a few reentrant
functions to be exposed.

 The program is linked with the libpthread library


(the equivalent of -lpthread )
 The precise options for compiling a multithreaded program vary across implementations (and
compilers).

From M. Kerrisk, The Linux Programming Interface

http://www.yolinux.com/TUTORIALS/LinuxTutorialPosixThreads.html
http://codebase.eu/tutorial/posix-threads-c/

Threads vs. Processes
• Advantages of multithreading
o Sharing between threads is easy
o Faster creation
• Disadvantages of multithreading
o Ensuring thread-safety
o Bug in one thread can bleed to other threads, since they share the same address
space
o Threads must compete for memory
• Considerations
o Dealing with signals in threads is tricky
o All threads must run the same program
o Sharing of files, users, etc

Operating System Examples
• Windows XP Threads
• Linux Thread

Windows XP Threads

• Implements the one-to-one mapping, kernel-level


• Each thread contains
o A thread id
o Register set
o Separate user and kernel stacks
o Private data storage area
• The register set, stacks, and private storage area are known as
the context of the threads
• The primary data structures of a thread include:
o ETHREAD (executive thread block)
o KTHREAD (kernel thread block)
o TEB (thread environment block)

Windows XP Threads

Linux Threads

• Linux refers to them as tasks rather than threads

• Thread creation is done through clone() system call

• clone() allows a child task to share the address space of the parent task
(process)

Linux Threads

References

• Silberschatz A., Galvin P., Gagne G, “Operating Systems Concepts”, VIIIth Edition, Wiley, 2011.

• UKEssays. (November 2018). Compare CPU scheduling of Linux and Windows. Retrieved from
https://www.ukessays.com/essays/information-systems/compare-cpu-scheduling-of-linux-and-
windows.php?vref=1

• Wikipedia. (August 2020). Completely Fair Scheduler. Retrieved from
https://en.wikipedia.org/wiki/Completely_Fair_Scheduler

THANK YOU
