
NATIONAL UNIVERSITY LAGUNA

College of Computer Studies

CCOPSYSL-OPERATING SYSTEMS
Activity Sheet Week 3 and Week 4
Name: (Last Name, First Name) Date: (Date of Submission)
(1)
(2)
(3)
Program/Section: Instructor: Mr. Reynaldo E. Castillo
Topic: Processes and Threads
Instructions:
1. Fill out the needed information above.
2. Form a group with 3 to 4 members.
3. Select your group leader.
4. Study and perform the tasks below as a group.
5. Perform the embedded instructions and answer the given questions below.
6. Save your activity sheet as a PDF with the file name
Week_3_4_GroupLeaderLastName_FirstName.pdf
7. Submit via:
o Student Submission
 Week 3_4
 LastName1_LastName2_LastName3 (create this folder)
o PDF (Activity Sheet Week 3 and Week 4, submit until May 5)
o Program Files (Task 2 Programs, Presentation on April 30)
8. Submission and Presentation
 Task 2 Output Presentation is on April 30, 2021, recorded as Interim
 Submission of your PDF and program files is on or before May 5, 2021.
9. An online quiz about the Week 2 and Week 3_4 topics will be open from April 30, 2021
(after the presentation) to May 6, 2021. The quiz link will be provided via an MS Teams
announcement.
10. All screenshots should reflect the step you are performing and your student
information. See the sample below:

Reynaldo E. Castillo
pecastillo@nu-laguna.edu.ph
List of Activities
 Task 1 – Terms Related to Processes and Threads
 Task 2 – Process/CPU Scheduling Algorithm

At the end of this activity sheet, the student should be able to:
 Analyze the tradeoffs inherent in operating system design
 Learn the mechanisms of OS threads
 Learn the mechanisms involved in memory management

Process Management
Process management involves tasks such as process creation, scheduling, and termination, as
well as deadlock handling. A process is a program under execution and is a central concept of
modern-day operating systems. The OS must allocate resources that enable processes to share
and exchange information, protect the resources of each process from other processes, and
allow synchronization among processes.

Processes
Early computers allowed only one program to be executed at a time. This program had complete
control of the system and had access to all the system’s resources. In contrast, contemporary
computer systems allow multiple programs to be loaded into memory and executed concurrently.

This evolution required firmer control and more compartmentalization of the various programs;
and these needs resulted in the notion of a process, which is a program in execution. A process is
the unit of work in a modern computing system. The more complex the operating system is, the
more it is expected to do on behalf of its users. Although its main concern is the execution of
user programs, it also needs to take care of various system tasks that are best done in user space,
rather than within the kernel. A system, therefore, consists of a collection of processes, some
executing user code, others executing operating system code.

Potentially, all these processes can execute concurrently, with the CPU (or CPUs) multiplexed
among them. In this chapter, you will read about what processes are, how they are represented in
an operating system, and how they work.

Process Concept

 An operating system executes a variety of programs that run as a process.


 Process – a program in execution; process execution must progress in a sequential
fashion. No parallel execution of instructions of a single process
 Threads - the unit of execution within a process. A process can have one or more
threads. A process with a single thread executes only one task at a time, while a
multithreaded process can execute a task per thread.
 Multiple parts
o The program code also called text section
o Current activity including program counter, processor registers

o Stack containing temporary data
 Function parameters, return addresses, local variables
o Data section containing global variables
o Heap containing memory dynamically allocated during run time

The layout of a process in memory


 Program is a passive entity stored on disk (executable file); the process is active
o The program becomes a process when an executable file is loaded into memory
 Execution of the program started via GUI mouse clicks, command line entry of its name,
etc.
 One program can be several processes
o Consider multiple users executing the same program

Process State

 As a process executes, it changes state


o New: The process is being created
o Running: Instructions are being executed
o Waiting: The process is waiting for some event to occur
o Ready: The process is waiting to be assigned to a processor
o Terminated: The process has finished execution

Diagram of a process state

Process Control Block

Each process is represented in the operating system by a process control block (PCB)—also
called a task control block.

Process Control Block


 Process number - process ID is the unique number representing a process
 Process state – running, waiting, etc.
 Program counter – location of the next instruction to execute
 CPU registers – contents of all process-centric registers
 CPU scheduling information- priorities, scheduling queue pointers
 Memory-management information – memory allocated to the process
 Accounting information – CPU used, clock time elapsed since start, time limits
 I/O status information – I/O devices allocated to process, list of open files

Process Scheduling

 The objective of multiprogramming is to have some process running at all times, to
maximize CPU utilization.
 The objective of time sharing is to switch the CPU among processes so frequently that
users can interact with each program while it is running.
 To meet these objectives, the process scheduler is applied.
 Process scheduler selects among available processes for the next execution on the CPU
core
 Maintains scheduling queues of processes

o Job queue - as processes enter the system, they are put into a job queue, which
consists of all processes in the system
o Ready queue – set of all processes residing in main memory, ready and waiting
to execute
o Wait queues – set of processes waiting for an event (i.e., I/O)
o Processes migrate among the various queues

Queueing-diagram representation of process scheduling

Context Switch

 Interrupts cause the OS to switch a CPU away from its current task to run a kernel
routine.
 Such operations happen frequently on a general-purpose system.
 When CPU switches to another process, the system must save the state of the old process
and load the saved state for the new process via a context switch
 Context of a process represented in the PCB
 Context-switch time is pure overhead; the system does no useful work while switching
o The more complex the OS and the PCB - the longer the context switch
 Time-dependent on hardware support
o Some hardware provides multiple sets of registers per CPU - multiple contexts
loaded at once

Operations on Processes

 A process may create several new processes via a create-process system call during the
course of execution.

 The system must provide mechanisms for:
o Process creation
o Process termination
Process Creation
 The creating process is called the parent process, and the new processes are called the
children of that process.
 Parent process creates children processes, which, in turn, create other processes,
forming a tree of processes
 Generally, a process is identified and managed via a process identifier (PID)
 Resource sharing options
o Parent and children share all resources
o Children share a subset of parent’s resources
o Parent and child share no resources
 Execution options
o Parent and children execute concurrently
o Parent waits until children terminate
 Address space
o Child duplicate of the parent (same data and program as the parent)
o The child has a program loaded into it
 UNIX examples
o fork() system call creates a new process
o exec() system call used after a fork() to replace the process’ memory space with a
new program
o Parent process calls wait(), waiting for the child to terminate

Process creation using the fork() system call
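
A minimal C sketch of the sequence just described (the figure itself is not reproduced here): the parent forks a child, the child replaces its memory image with exec(), and the parent waits. The choice of the ls program is illustrative only.

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                  /* create a duplicate child process */

    if (pid < 0) {                       /* fork failed */
        perror("fork");
        return 1;
    } else if (pid == 0) {               /* child: replace its memory image */
        execlp("ls", "ls", (char *)NULL);
        perror("execlp");                /* reached only if exec fails */
        exit(1);
    } else {                             /* parent: wait for the child */
        int status;
        wait(&status);
        printf("Child %d terminated\n", (int)pid);
    }
    return 0;
}
```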

A tree of processes on a typical Linux system
Process Termination
 The process executes the last statement and then asks the operating system to delete it
using the exit() system call.
o Returns status data from child to parent (via wait())
o Process’ resources are deallocated by the operating system
 The parent may terminate the execution of children processes using the abort() system
call. Some reasons for doing so:
o The child has exceeded allocated resources
o The task assigned to the child is no longer required
o The parent is exiting, and the operating system does not allow a child to continue if
its parent terminates
 Some operating systems do not allow a child to exist if its parent has terminated. If a
process terminates, then all its children must also be terminated.
o cascading termination. All children, grandchildren, etc., are terminated.
o The termination is initiated by the operating system.
 The parent process may wait for the termination of a child process by using the
wait()system call. The call returns status information and the PID of the terminated
process
PID = wait(&status);
 If no parent is waiting (it has not invoked wait()), the terminated process is a zombie
 If the parent terminated without invoking wait(), the process is an orphan

Android Process Importance Hierarchy

 Mobile operating systems often have to terminate processes to reclaim system resources
such as memory. From most to least important:
o Foreground process—The current process visible on the screen, representing the
application the user is currently interacting with
o Visible process—A process that is not directly visible on the foreground but that
is performing an activity that the foreground process is referring to (that is, a
process performing an activity whose status is displayed on the foreground
process)
o Service process—A process that is similar to a background process but is
performing an activity that is apparent to the user (such as streaming music)
o Background process—A process that may be performing an activity but is not
apparent to the user.
o Empty process—A process that holds no active components associated
with any application
 Android will begin terminating processes that are least important.

Interprocess Communication

Processes executing concurrently in the operating system may be either independent processes
or cooperating processes. A process is independent if it does not share data with any other
processes executing in the system. A process is cooperating if it can affect or be affected by the
other processes executing in the system. Clearly, any process that shares data with other
processes is a cooperating process.
There are several reasons for providing an environment that allows process cooperation:
 Information sharing. Since several applications may be interested in the same piece of
information (for instance, copying and pasting), we must provide an environment to
allow concurrent access to such information.
 Computation speedup. If we want a particular task to run faster, we must break it into
subtasks, each of which will be executing in parallel with the others. Notice that such a
speedup can be achieved only if the computer has multiple processing cores.
 Modularity. We may want to construct the system in a modular fashion, dividing the
system functions into separate processes or threads
Cooperating processes require an interprocess communication (IPC) mechanism that will
allow them to exchange data— that is, send data to and receive data from each other. There are
two fundamental models of interprocess communication: shared memory and message
passing.

Communications models. (a) Shared memory. (b) Message passing.

Multiprocess Architecture – Chrome Browser

 Many web browsers ran as a single process (some still do)


 If one web site causes trouble, the entire browser can hang or crash
 Google Chrome Browser is multiprocess with 3 different types of processes:
o Browser process manages user interface, disk, and network I/O

o Renderer process renders web pages, deals with HTML and JavaScript. A new
renderer process is created for each website opened
 Runs in sandbox restricting disk and network I/O, minimizing the effect
of security exploits
o Plug-in process for each type of plug-in

IPC – Shared Memory

 An area of memory shared among the processes that wish to communicate


 The communication is under the control of the user processes, not the operating system.
 The major issue is to provide a mechanism that will allow the user processes to synchronize
their actions when they access shared memory.
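
The section does not prescribe a particular API, but POSIX provides one (see the lesson summary). As a hedged sketch, the calls below create and map a shared-memory object; the object name /demo_shm and the size are hypothetical. On some systems this must be linked with -lrt.

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    const char *name = "/demo_shm";      /* hypothetical object name */
    const int SIZE = 4096;

    /* create (or open) the shared-memory object and set its size */
    int fd = shm_open(name, O_CREAT | O_RDWR, 0666);
    if (fd < 0) { perror("shm_open"); return 1; }
    ftruncate(fd, SIZE);

    /* map the object into this process's address space */
    char *ptr = mmap(0, SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (ptr == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(ptr, "hello from the producer");   /* write into the region */
    printf("wrote: %s\n", ptr);

    shm_unlink(name);                    /* remove the object when done */
    return 0;
}
```

A cooperating consumer would shm_open() the same name and mmap() it to read the data; synchronizing those accesses is the user processes' responsibility, as noted above.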

Synchronization

Communication between processes takes place through calls to send() and receive() primitives.
There are different design options for implementing each primitive. Message passing may be
either blocking or nonblocking—also known as synchronous and asynchronous.
 Blocking send. The sending process is blocked until the message is received by the
receiving process or by the mailbox.
 Nonblocking send. The sending process sends the message and resumes operation.
 Blocking receive. The receiver blocks until a message is available.
 Nonblocking receive. The receiver retrieves either a valid message or a null.

Buffering

Whether the communication is direct or indirect, messages exchanged by communicating
processes reside in a temporary queue. Basically, such queues can be implemented in three ways:
 Zero capacity. The queue has a maximum length of zero; thus, the link cannot have any
messages waiting in it. In this case, the sender must block until the recipient receives the
message.
 Bounded capacity. The queue has finite length n; thus, at most n messages can reside in
it. If the queue is not full when a new message is sent, the message is placed in the queue
(either the message is copied or a pointer to the message is kept), and the sender can
continue execution without waiting. The link’s capacity is finite, however. If the link is
full, the sender must block until space is available in the queue.

 Unbounded capacity. The queue’s length is potentially infinite; thus, any number of
messages can wait in it. The sender never blocks.
The zero-capacity case is sometimes referred to as a message system with no buffering. The
other cases are referred to as systems with automatic buffering.
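
To make the bounded-capacity case concrete, here is a minimal single-process ring-buffer sketch (the capacity N = 4 and message size are arbitrary). A real message-passing implementation would block the sender when the queue is full rather than report failure.

```c
#include <stdio.h>
#include <string.h>

#define N 4                               /* bounded capacity */
#define MSG_LEN 32

static char queue[N][MSG_LEN];
static int head = 0, count = 0;

int send_msg(const char *msg) {
    if (count == N) return -1;            /* full: sender must block or retry */
    strncpy(queue[(head + count) % N], msg, MSG_LEN - 1);
    count++;
    return 0;
}

int receive_msg(char *out) {
    if (count == 0) return -1;            /* empty: receiver must block or retry */
    strcpy(out, queue[head]);
    head = (head + 1) % N;
    count--;
    return 0;
}

int main(void) {
    char buf[MSG_LEN];
    send_msg("hello");
    send_msg("world");
    while (receive_msg(buf) == 0)
        printf("received: %s\n", buf);
    return 0;
}
```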

Direct Communication

 Processes must name each other explicitly:


o send (P, message) – send a message to process P
o receive(Q, message) – receive a message from process Q
 Properties of the communication link
o Links are established automatically
o A link is associated with exactly one pair of communicating processes
o Between each pair, there exists exactly one link
o The link may be unidirectional, but is usually bi-directional

Indirect Communication

 Messages are directed and received from mailboxes (also referred to as ports)
o Each mailbox has a unique id
o Processes can communicate only if they share a mailbox
 Properties of a communication link
o Link established only if processes share a common mailbox
o A link may be associated with many processes
o Each pair of processes may share several communication links
o The link may be unidirectional or bi-directional
 Operations
o Create a new mailbox (port)
o Send and receive messages through a mailbox
o Delete a mailbox
 Primitives are defined as:
o send(A, message) – send a message to mailbox A
o receive(A, message) – receive a message from mailbox A
 Mailbox sharing
o P1, P2, and P3 share mailbox A
o P1 sends; P2 and P3 receive
o Who gets the message?
 Solutions
o Allow a link to be associated with at most two processes
o Allow only one process at a time to execute a receive operation
o Allow the system to select arbitrarily the receiver. The sender is notified of who
the receiver was.

Pipes

 Acts as a conduit allowing two processes to communicate
 Issues:
o Is communication unidirectional or bidirectional?
o In the case of two-way communication, is it half or full-duplex?
o Must there exist a relationship (i.e., parent-child) between the communicating
processes?
o Can the pipes be used over a network?
 Ordinary pipes – cannot be accessed from outside the process that created them. Typically,
a parent process creates a pipe and uses it to communicate with a child process that it
created.
 Named pipes – can be accessed without a parent-child relationship.

Ordinary Pipes

 Ordinary Pipes allow communication in standard producer-consumer style


 Producer writes to one end (the write-end of the pipe)
 Consumer reads from the other end (the read-end of the pipe)
 Ordinary pipes are therefore unidirectional
 Require parent-child relationship between communicating processes

File descriptors for an ordinary pipe


 Windows calls these anonymous pipes
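
A minimal C sketch of an ordinary pipe, with the parent as producer and the child as consumer (the message text is arbitrary):

```c
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];                            /* fd[0] = read end, fd[1] = write end */
    char buf[32];

    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    if (fork() == 0) {                    /* child: consumer */
        close(fd[1]);                     /* close the unused write end */
        read(fd[0], buf, sizeof(buf));
        printf("child read: %s\n", buf);
        close(fd[0]);
    } else {                              /* parent: producer */
        close(fd[0]);                     /* close the unused read end */
        const char *msg = "greetings";
        write(fd[1], msg, strlen(msg) + 1);
        close(fd[1]);
        wait(NULL);                       /* reap the child */
    }
    return 0;
}
```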

Named Pipes

 Named Pipes are more powerful than ordinary pipes


 Communication is bidirectional
 No parent-child relationship is necessary between the communicating processes
 Several processes can use the named pipe for communication
 Provided on both UNIX and Windows systems

Communications in Client-Server Systems

 Sockets
 Remote Procedure Calls
Sockets

 A socket is defined as an endpoint for communication
 Concatenation of IP address and port – a number included at the start of message packet
to differentiate network services on a host
 The socket 161.25.19.8:1625 refers to port 1625 on host 161.25.19.8
 Communication consists of a pair of sockets
 All ports below 1024 are well known, used for standard services
 The special IP address 127.0.0.1 (loopback) is used to refer to the system on which the
process is running

Communication using sockets


Sockets in Java
 Three types of sockets
o Connection-oriented (TCP)
o Connectionless (UDP)
o MulticastSocket class– data can be sent to multiple recipients
 Consider this “Date” server in Java:

Date server
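
The Java listing for the date server is not reproduced here. To keep a single language across the examples in this sheet, the sketch below is an equivalent C version, not the original Java code; the port number 6013 is an assumption. Each client that connects receives the current date and time, then the connection is closed.

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

int main(void) {
    int srv = socket(AF_INET, SOCK_STREAM, 0);     /* TCP listening socket */

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(6013);                   /* assumed port */

    if (bind(srv, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        return 1;
    }
    listen(srv, 5);

    for (;;) {                                     /* serve one client at a time */
        int client = accept(srv, NULL, NULL);
        time_t now = time(NULL);
        char *msg = ctime(&now);                   /* current date/time string */
        write(client, msg, strlen(msg));
        close(client);
    }
}
```

A client can test it with, for example, telnet localhost 6013.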

Remote Procedure Calls

 Remote procedure call (RPC) abstracts procedure calls between processes on networked
systems
o Again uses ports for service differentiation
 Stubs – a client-side proxy for the actual procedure on the server
 The client-side stub locates the server and marshals the parameters
 The server-side stub receives this message, unpacks the marshaled parameters, and
performs the procedure on the server
 On Windows, stub code is compiled from a specification written in Microsoft Interface
Definition Language (MIDL)
 Data representation is handled via the External Data Representation (XDR) format to
account for different architectures
o Big-endian and little-endian
 Remote communication has more failure scenarios than local
o Messages can be delivered exactly once rather than at most once
 OS typically provides a rendezvous (or matchmaker) service to connect client and server

Threads

Overview
A thread is a basic unit of CPU utilization; it comprises a thread ID, a program counter (PC), a
register set, and a stack. It shares with other threads belonging to the same process its code
section, data section, and other operating-system resources, such as open files and signals. A
traditional process has a single thread of control. If a process has multiple threads of control, it
can perform more than one task at a time.
Motivation
 Most modern applications are multithreaded
 Threads run within the application
 Multiple tasks within the application can be implemented by separate threads
o Update display
o Fetch data
o Spell checking
o Answer a network request
 Process creation is heavy-weight while thread creation is light-weight
 Can simplify code, increase efficiency
 Kernels are generally multithreaded
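
The light weight of thread creation is visible in the Pthreads API: a single call adds a new thread of control to the existing process. A minimal sketch:

```c
#include <pthread.h>
#include <stdio.h>

/* thread function: receives a pointer argument, returns a pointer result */
void *worker(void *arg) {
    int *n = (int *)arg;
    printf("worker thread: computing with %d\n", *n);
    return NULL;
}

int main(void) {
    pthread_t tid;
    int value = 42;

    pthread_create(&tid, NULL, worker, &value);    /* create the thread */
    pthread_join(tid, NULL);                       /* wait for it to finish */
    return 0;
}
```

(Compile with -pthread.)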

Single-threaded and multithreaded processes

Multithreaded server architecture

Benefits

 Responsiveness – may allow continued execution if part of the process is blocked,


especially important for user interfaces
 Resource Sharing – threads share resources of process, easier than shared memory or
message passing
 Economy – cheaper than process creation, thread switching lower overhead than context
switching
 Scalability – the process can take advantage of multicore architectures

Multicore Programming

 Multicore or multiprocessor systems put pressure on programmers; challenges include:
o Dividing activities
o Balance
o Data splitting
o Data dependency
o Testing and debugging
 Parallelism implies a system can perform more than one task simultaneously
 Concurrency supports more than one task making progress
o Single processor/core, scheduler providing concurrency

 Types of parallelism
o Data parallelism – distributes subsets of the same data across multiple cores,
same operation on each
o Task parallelism – distributing threads across cores, each thread performing a
unique operation

Data and task parallelism

Amdahl’s Law

 Identifies performance gains from adding additional cores to an application that has both
serial and parallel components

 S is the serial portion of the application, N is the number of processing cores
 The law gives the bound: speedup ≤ 1 / (S + (1 − S) / N)

 That is, if the application is 75% parallel / 25% serial, moving from 1 to 2 cores results in
a speedup of 1.6 times
 As N approaches infinity, speedup approaches 1 / S

 A serial portion of an application has a disproportionate effect on the performance gained
by adding additional cores
 But does the law take into account contemporary multicore systems?

User Threads and Kernel Threads

 User threads - management is done by user-level threads library


 Kernel threads - Supported by the Kernel

 Examples – virtually all general-purpose operating systems, including:
o Windows
o Linux
o Mac OS X
o iOS
o Android

User and Kernel Threads

Multithreading Models

 Many-to-One
 One-to-One
 Many-to-Many
Many-to-One
 Many user-level threads mapped to a single kernel thread
 One thread blocking causes all to block
 Multiple threads may not run in parallel on a multicore system because only one may be
in the kernel at a time
 Few systems currently use this model
 Examples:
o Solaris Green Threads
o GNU Portable Threads

One-to-One

 Each user-level thread maps to a kernel thread
 Creating a user-level thread creates a kernel thread
 More concurrency than many-to-one
 Number of threads per process sometimes restricted due to overhead
 Examples
o Windows
o Linux

Many-to-Many Model
 Allows many user-level threads to be mapped to many kernel threads
 Allows the operating system to create a sufficient number of kernel threads
 Windows with the ThreadFiber package
 Otherwise not very common

Two-level Model
 Similar to M:M, except that it allows a user thread to be bound to a kernel thread

Threading Issues

 Semantics of fork() and exec() system calls


 fork() system call is used to create a separate, duplicate process
 when an exec() system call is invoked, the program specified in the exec() call will
replace the entire process, including all threads.
 Signal handling
o Synchronous and asynchronous
 Thread cancellation of the target thread
o Asynchronous or deferred
 Thread-local storage
 Scheduler Activations

Signal Handling

 Signals are used in UNIX systems to notify a process that a particular event has occurred.
 A signal handler is used to process signals
1. Signal is generated by a particular event
2. Signal is delivered to a process
3. Signal is handled by one of two signal handlers:

o default
o user-defined
 Every signal has a default handler that the kernel runs when handling the signal
 The user-defined signal handler can override the default
 For single-threaded processes, the signal is delivered to the process
 Where should a signal be delivered for a multi-threaded process?
o Deliver the signal to the thread to which the signal applies
o Deliver the signal to every thread in the process
o Deliver the signal to certain threads in the process

o Assign a specific thread to receive all signals for the process
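
A minimal sketch of a user-defined handler overriding the default action, here for SIGINT in a single-threaded program (a multithreaded program would typically use sigwait() or per-thread signal masks to direct delivery):

```c
#include <signal.h>
#include <unistd.h>

/* user-defined handler: overrides the default action for SIGINT */
void handle_sigint(int sig) {
    (void)sig;
    write(STDOUT_FILENO, "caught SIGINT\n", 14);   /* async-signal-safe */
}

int main(void) {
    signal(SIGINT, handle_sigint);    /* install the handler */
    for (;;)
        pause();                      /* sleep until a signal arrives */
}
```

Pressing Ctrl-C now runs the handler instead of terminating the process; end the demo with SIGTERM (e.g., kill from another shell).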

Thread Cancellation

 Terminating a thread before it has finished


 The thread to be canceled is the target thread
 Two general approaches:
o Asynchronous cancellation terminates the target thread immediately
o Deferred cancellation allows the target thread to periodically check if it should
be canceled
 Invoking thread cancellation requests cancellation, but actual cancellation depends on
thread state

 If a thread has cancellation disabled, cancellation remains pending until thread enables it
 The default type is deferred
 Cancellation only occurs when a thread reaches cancellation point
o i.e., pthread_testcancel()
o Then cleanup handler is invoked
 On Linux systems, thread cancellation is handled through signals
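
A sketch of deferred cancellation with Pthreads: sleep() is itself a cancellation point, and pthread_testcancel() makes the check explicit.

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

void *target(void *arg) {
    (void)arg;
    for (;;) {
        pthread_testcancel();   /* explicit cancellation point */
        sleep(1);               /* sleep() is also a cancellation point */
    }
    return NULL;
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, target, NULL);

    sleep(2);
    pthread_cancel(tid);        /* request cancellation (deferred by default) */
    pthread_join(tid, NULL);    /* wait for the target to actually exit */
    printf("target thread canceled\n");
    return 0;
}
```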

Thread-Local Storage

 Thread-local storage (TLS) allows each thread to have its own copy of data
 Useful when you do not have control over the thread creation process (i.e., when using a
thread pool)
 Different from local variables
o Local variables visible only during single function invocation
o TLS visible across function invocations
 Similar to static data
o TLS is unique to each thread
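
A sketch using the compiler-provided __thread storage class (GCC/Clang; C11 spells it _Thread_local): each thread increments only its own copy of counter, so both threads print 3.

```c
#include <pthread.h>
#include <stdio.h>

static __thread int counter = 0;    /* one copy per thread, visible across calls */

void *work(void *arg) {
    (void)arg;
    for (int i = 0; i < 3; i++)
        counter++;                  /* touches only this thread's copy */
    printf("thread %lu: counter = %d\n",
           (unsigned long)pthread_self(), counter);
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, work, NULL);
    pthread_create(&b, NULL, work, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}
```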

Windows Threads

 Windows API – primary API for Windows applications


 Implements the one-to-one mapping, kernel-level
 Each thread contains
o A thread id

o Register set representing the state of the processor
o Separate user and kernel stacks for when the thread runs in user mode or kernel
mode
o A private data storage area used by run-time libraries and dynamic link libraries
(DLLs)
 The register set, stacks, and private storage area are known as the context of the thread
 The primary data structures of a thread include:
o ETHREAD (executive thread block) – includes a pointer to process to which
thread belongs and to KTHREAD, in kernel space
o KTHREAD (kernel thread block) – scheduling and synchronization info, kernel-
mode stack, pointer to TEB, in kernel space
o TEB (thread environment block) – thread id, user-mode stack, thread-local
storage, in user space

Windows Threads Data Structures

Linux Threads

 Linux refers to them as tasks rather than threads


 Thread creation is done through clone() system call

 clone() allows a child task to share the address space of the parent task (process)
 Flags control behavior

 struct task_struct points to process data structures (shared or unique)


CPU Scheduling

CPU scheduling is the basis of multi-programmed operating systems. By switching the CPU
among processes, the operating system can make the computer more productive. Here, we
introduce basic CPU-scheduling concepts and present several CPU-scheduling algorithms,
including those for real-time systems.

We also consider the problem of selecting an algorithm for a particular system.


On modern operating systems it is kernel-level threads—not processes— that are in fact being
scheduled by the operating system. However, the terms "process scheduling" and "thread
scheduling" are often used interchangeably. We use process scheduling when discussing general
scheduling concepts and thread scheduling to refer to thread-specific ideas.

Similarly, in Module 1 we describe how a core is the basic computational unit of a CPU, and that
a process executes on a CPU’s core. However, in many instances in this module, when we use
the general terminology of scheduling a process to "run on a CPU", we are implying that the
process is running on a CPU’s core.

Basic Concepts

 Maximum CPU utilization obtained with multiprogramming


 CPU–I/O Burst Cycle – Process execution consists of a cycle of CPU execution and I/O
wait
 CPU burst followed by I/O burst
 CPU burst distribution is of main concern

CPU – I/O Burst Cycle

 The success of CPU scheduling depends on an observed property of processes: process
execution consists of a cycle of CPU execution and I/O wait.

 Process execution begins with a CPU burst. That is followed by an I/O burst, which is
followed by another CPU burst, then another I/O burst, and so on. Eventually, the final
CPU burst ends with a system request to terminate execution.

The alternating sequence of CPU and I/O bursts

Histogram of CPU-burst durations

CPU Scheduler

The CPU scheduler selects from among the processes in the ready queue and allocates a CPU
core to one of them.
 The queue may be ordered in various ways
 CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready
4. Terminates
 For situations 1 and 4, there is no choice in terms of scheduling. A new process (if one
exists in the ready queue) must be selected for execution.
 For situations 2 and 3, however, there is a choice.

Preemptive and Nonpreemptive Scheduling

 Preemptive scheduling can result in race conditions when data are shared among several
processes.
 Consider the case of two processes that share data. While one process is updating the
data, it is preempted so that the second process can run. The second process then tries to
read the data, which are in an inconsistent state.

Dispatcher

 Dispatcher module gives control of the CPU to the process selected by the CPU
scheduler; this involves:

o Switching context
o Switching to user mode
o Jumping to the proper location in the user program to restart that program
 Dispatch latency – the time it takes for the dispatcher to stop one process and start
another running

Scheduling Criteria

 CPU utilization – keep the CPU as busy as possible


 Throughput – # of processes that complete their execution per time unit
 Turnaround time – the amount of time to execute a particular process
 Waiting time – the amount of time a process has been waiting in the ready queue
 Response time – the amount of time it takes from when a request was submitted until the
first response is produced.

Scheduling Algorithm Optimization Criteria

 Max CPU utilization


 Max throughput
 Min turnaround time
 Min waiting time
 Min response time

First-Come, First-Served (FCFS) Scheduling

By far the simplest CPU-scheduling algorithm is the first-come, first-served (FCFS) scheduling
algorithm. With this scheme, the process that requests the CPU first is allocated the CPU first.
The implementation of the FCFS policy is easily managed with a FIFO queue. When a process
enters the ready queue, its PCB is linked onto the tail of the queue. When the CPU is free, it is
allocated to the process at the head of the queue. The running process is then removed from the
queue.
Process Burst Time
P1 24
P2 3
P3 3
 Suppose that the processes arrive in the order: P1 , P2 , P3 , the Gantt Chart for the
schedule is:

 Waiting time for P1 = 0; P2 = 24; P3 = 27


 Average waiting time: (0 + 24 + 27)/3 = 17

 Suppose that the processes arrive in the order: P2 , P3 , P1, the Gantt chart for the
schedule is:

 Waiting time for P1 = 6; P2 = 0; P3 = 3


 Average waiting time: (6 + 0 + 3)/3 = 3
 Much better than previous case
 Convoy effect - short process behind long process
o Consider one CPU-bound and many I/O-bound processes
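
As a starting point for Task 2, a minimal FCFS sketch reproducing the first example above (P1 = 24, P2 = 3, P3 = 3, all arriving at time 0; the average waiting time comes out to 17):

```c
#include <stdio.h>

/* FCFS sketch: all processes arrive at time 0 and run in arrival order. */
int main(void) {
    int burst[] = {24, 3, 3};              /* burst times for P1, P2, P3 */
    int n = 3, wait = 0, total_wait = 0, total_tat = 0;

    for (int i = 0; i < n; i++) {
        /* waiting time = total burst time of all earlier processes */
        printf("P%d: waiting = %d, turnaround = %d\n",
               i + 1, wait, wait + burst[i]);
        total_wait += wait;
        total_tat  += wait + burst[i];
        wait += burst[i];
    }
    printf("Average waiting time: %.2f\n", (double)total_wait / n);
    printf("Average turnaround time: %.2f\n", (double)total_tat / n);
    return 0;
}
```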

Shortest-Job-First (SJF) Scheduling

 Associate with each process the length of its next CPU burst
o Use these lengths to schedule the process with the shortest time
 SJF is optimal – gives minimum average waiting time for a given set of processes
o The difficulty is knowing the length of the next CPU request
 The preemptive version is called shortest-remaining-time-first
 How do we determine the length of the next CPU burst?
o Could ask the user
o Estimate

Example of SJF
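
A corresponding non-preemptive SJF sketch; the burst values are sample data, and all processes are assumed to arrive at time 0 so the scheduler can simply pick the shortest unfinished job:

```c
#include <stdio.h>

int main(void) {
    int burst[] = {6, 8, 7, 3};            /* sample bursts for P1..P4 */
    int n = 4, done[4] = {0}, time = 0, total_wait = 0;

    for (int k = 0; k < n; k++) {
        int best = -1;
        for (int i = 0; i < n; i++)        /* pick the shortest unfinished job */
            if (!done[i] && (best < 0 || burst[i] < burst[best]))
                best = i;
        printf("t=%2d: run P%d (waiting = %d)\n", time, best + 1, time);
        total_wait += time;
        time += burst[best];
        done[best] = 1;
    }
    printf("Average waiting time: %.2f\n", (double)total_wait / n);
    return 0;
}
```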

Determining Length of Next CPU Burst

Prediction of the Length of the Next CPU Burst
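
The prediction shown in the figure is conventionally computed by exponential averaging: τ(n+1) = α·t(n) + (1 − α)·τ(n), where t(n) is the length of the most recent burst, τ(n) is the previous prediction, and 0 ≤ α ≤ 1. A one-function C helper:

```c
/* Exponential-average prediction of the next CPU burst length. */
double predict_next_burst(double alpha, double last_burst, double last_guess) {
    return alpha * last_burst + (1.0 - alpha) * last_guess;
}
```

With α = 0.5 the estimate weights recent and past history equally; α = 0 ignores measurements, while α = 1 uses only the last burst.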

Round Robin (RR)

 Each process gets a small unit of CPU time (time quantum q), usually 10-100
milliseconds. After this time has elapsed, the process is preempted and added to the end
of the ready queue.
 If there are n processes in the ready queue and the time quantum is q, then each process
gets 1/n of the CPU time in chunks of at most q time units at once. No process waits for
more than (n-1)q time units.
 Timer interrupts every quantum to schedule the next process
 Performance
 q large ⇒ behaves like FIFO
 q small ⇒ q must be large with respect to context-switch time, otherwise overhead is too
high

Example of RR with Time Quantum = 4
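
A minimal round-robin sketch with q = 4 over the same three processes used in the FCFS example; the printed lines are exactly the segments of the Gantt chart:

```c
#include <stdio.h>

int main(void) {
    int burst[]     = {24, 3, 3};          /* P1, P2, P3, all arriving at t = 0 */
    int remaining[] = {24, 3, 3};
    int n = 3, q = 4, time = 0, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < q ? remaining[i] : q;
            printf("t=%2d: P%d runs for %d\n", time, i + 1, slice);
            time += slice;
            remaining[i] -= slice;
            if (remaining[i] == 0) {
                done++;
                printf("P%d finishes at t=%d (turnaround = %d)\n",
                       i + 1, time, time);
            }
        }
    }
    return 0;
}
```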

Time Quantum and Context Switch Time

Turnaround Time Varies With The Time Quantum

Priority Scheduling

 A priority number (integer) is associated with each process


 The CPU is allocated to the process with the highest priority (smallest integer ≡ highest
priority)
o Preemptive
o Nonpreemptive
 SJF is priority scheduling where priority is the inverse of the predicted next CPU burst time
 Problem ≡ Starvation – low-priority processes may never execute
 Solution ≡ Aging – as time progresses, increase the priority of the process

Example of Priority Scheduling
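
A non-preemptive priority sketch; the burst and priority values are sample data (a smaller number means a higher priority), and all processes arrive at time 0:

```c
#include <stdio.h>

int main(void) {
    int burst[]    = {10, 1, 2, 1, 5};     /* sample bursts for P1..P5 */
    int priority[] = { 3, 1, 4, 5, 2};     /* smaller number = higher priority */
    int n = 5, done[5] = {0}, time = 0;

    for (int k = 0; k < n; k++) {
        int best = -1;
        for (int i = 0; i < n; i++)        /* pick the highest-priority ready task */
            if (!done[i] && (best < 0 || priority[i] < priority[best]))
                best = i;
        printf("t=%2d: run P%d (waiting = %d)\n", time, best + 1, time);
        time += burst[best];
        done[best] = 1;
    }
    return 0;
}
```

Adding aging would amount to decreasing each waiting process's priority number as time passes.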

Priority Scheduling w/ Round-Robin

Thread Scheduling

 The distinction between user-level and kernel-level threads
 When threads are supported, threads are scheduled, not processes
 In the many-to-one and many-to-many models, the thread library schedules user-level
threads to run on an LWP
 Known as process-contention scope (PCS) since scheduling competition is within the
process
 Typically done via priority set by the programmer
 Kernel thread scheduled onto available CPU is system-contention scope (SCS) –
competition among all threads in the system

Multiple-Processor Scheduling

 CPU scheduling more complex when multiple CPUs are available


 Multiprocessor systems may be any one of the following architectures:
o Multicore CPUs
o Multithreaded cores
o NUMA systems
o Heterogeneous multiprocessing
 Symmetric multiprocessing (SMP) is where each processor is self-scheduling.
 All threads may be in a common ready queue (a)
 Each processor may have its own private queue of threads

Organization of ready queues

Multicore Processors

 The recent trend to place multiple processor cores on the same physical chip
 Faster and consumes less power
 Multiple threads per core also growing

 Takes advantage of a memory stall to make progress on another thread while the memory
retrieval happens

Memory stall

Multithreaded Multicore System

 Each core has more than one hardware thread.


 If one thread has a memory stall, switch to another thread!

Multithreaded multicore system

Multiple-Processor Scheduling – Load Balancing

 If SMP, need to keep all CPUs loaded for efficiency


 Load balancing attempts to keep workload evenly distributed
 Push migration – a periodic task checks the load on each processor and, if an imbalance is
found, pushes tasks from the overloaded CPU to other CPUs
 Pull migration – idle processors pull a waiting task from a busy processor

Multiple-Processor Scheduling – Processor Affinity

 When a thread has been running on one processor, the cache contents of that processor
store the memory accesses by that thread.
 We refer to this as a thread having an affinity for a processor (i.e., “processor affinity”)
 Load balancing may affect processor affinity: a thread may be moved from one
processor to another to balance loads, and the migrated thread then loses the contents it
had built up in the cache of the processor it left.
 Soft affinity – the operating system attempts to keep a thread running on the same
processor, but no guarantees.
 Hard affinity – allows a process to specify a set of processors it may run on.

Real-Time CPU Scheduling

 Can present obvious challenges
 Soft real-time systems – Critical real-time tasks have the highest priority, but no
guarantee as to when tasks will be scheduled
 Hard real-time systems – task must be serviced by its deadline
 Event latency – the amount of time that elapses from when an event occurs to when it is
serviced.
 Two types of latencies affect performance:
1. Interrupt latency – the time from the arrival of an interrupt to the start of the
routine that services the interrupt
2. Dispatch latency – the time for the scheduler to take the current process off the
CPU and switch to another

Priority-based Scheduling

 For real-time scheduling, the scheduler must support preemptive, priority-based
scheduling
o But only guarantees soft real-time
 For hard real-time must also provide the ability to meet deadlines
 Processes have new characteristics: periodic ones require CPU at constant intervals
o Has processing time t, deadline d, period p
o 0≤t≤d≤p
o Rate of the periodic task is 1/p

Linux Scheduling Through Version 2.5

 Prior to kernel version 2.5, Linux ran a variation of the standard UNIX scheduling algorithm
 Version 2.5 moved to constant order O(1) scheduling time
 Preemptive, priority-based
o Two priority ranges: time-sharing and real-time
o Real-time range from 0 to 99 and nice value from 100 to 140
o Map into global priority with numerically lower values indicating higher priority
o Higher priority gets larger q
o A task is runnable as long as it has time left in its time slice (active)
o If no time is left (expired), the task is not runnable until all other tasks have used
their slices
o All runnable tasks are tracked in a per-CPU runqueue data structure
 Two priority arrays (active, expired)
 Tasks indexed by priority
 When no more active, arrays are exchanged
 Worked well, but poor response times for interactive processes

Linux Scheduling in Version 2.6.23 +

 Completely Fair Scheduler (CFS)


 Scheduling classes

o Each has a specific priority
o Scheduler picks the highest priority task in the highest scheduling class
o Rather than a quantum based on fixed time allotments, scheduling is based on the
proportion of CPU time
o Two scheduling classes included, others can be added
1. default
2. real-time
 Quantum is calculated based on a nice value from -20 to +19
o A lower value means a higher priority
o Calculates target latency – the interval of time during which each task should run at
least once
o Target latency can increase if, say, the number of active tasks increases
 CFS scheduler maintains per task virtual run time in variable vruntime
o Associated with a decay factor based on the priority of the task – lower priority means a
higher decay rate
o Normal default priority yields virtual run time = actual run time
 To decide the next task to run, the scheduler picks the task with the lowest virtual run time
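
A sketch of the selection rule only: pick the task with the lowest vruntime. Real CFS keeps tasks in a red-black tree keyed on vruntime so the pick is O(log n); a linear scan stands in here.

```c
#include <stdio.h>

struct task { const char *name; double vruntime; };

/* CFS-style pick: the task that has had the least virtual CPU time so far. */
struct task *pick_next(struct task *tasks, int n) {
    struct task *best = &tasks[0];
    for (int i = 1; i < n; i++)
        if (tasks[i].vruntime < best->vruntime)
            best = &tasks[i];
    return best;
}

int main(void) {
    struct task tasks[] = {{"A", 12.5}, {"B", 3.1}, {"C", 7.9}};
    printf("next: %s\n", pick_next(tasks, 3)->name);   /* prints: next: B */
    return 0;
}
```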

Windows Scheduling

 Windows uses priority-based preemptive scheduling


 The highest-priority thread runs next
 Dispatcher is scheduler
 A thread runs until it (1) blocks, (2) uses its time slice, or (3) is preempted by a higher-priority thread
 Real-time threads can preempt non-real-time
 32-level priority scheme
 Variable class is 1-15, the real-time class is 16-31
 Priority 0 is the memory-management thread
 There is a queue for each priority
 If there is no runnable thread, an idle thread is run

Algorithm Evaluation

 How to select the CPU-scheduling algorithm for an OS?


 Determine criteria, then evaluate algorithms
 Deterministic modeling
o Type of analytic evaluation
o Takes a particular predetermined workload and defines the performance of each
algorithm for that workload
 Consider 5 processes arriving at time 0.

Task 1: Instruction: Define the following key terms.


Note:

 Please refrain from copying the definition directly from the internet. Read and rephrase
the source of your answer. Then you may include it as your term definition.
 Answer this activity individually.
 Two points for each correct answer that is not copied from the internet.

Terms Related to Process Management. Insert your answer after each term.

1. Process - <Insert Here>


2. Passive entity - <Insert Here>
3. Active entity - <Insert Here>
4. Threads - <Insert Here>
5. Process scheduler - <Insert Here>
6. Ready queue - <Insert Here>
7. Wait queue - <Insert Here>
8. Context switch - <Insert Here>
9. Single foreground process - <Insert Here>
10. Multiple background processes - <Insert Here>
11. Parallelism - <Insert Here>
12. Concurrency - <Insert Here>
13. Thread library - <Insert Here>
14. CPU Scheduling - <Insert Here>
15. Quantum Slice - <Insert Here>

CPU scheduling is the process of determining which process will own the CPU for execution
while another process is on hold. The main task of CPU scheduling is to make sure that whenever
the CPU becomes idle, the OS selects one of the processes available in the ready queue for
execution. The selection is carried out by the CPU scheduler, which picks one of the processes
in memory that are ready for execution.

Types of CPU Scheduling


Here are two kinds of Scheduling methods:

1. Preemptive Scheduling
In preemptive scheduling, tasks are usually assigned priorities. Sometimes it is necessary to run
a task with a higher priority before another, lower-priority task, even if the lower-priority task is
still running. The lower-priority task is put on hold for some time and resumes when the
higher-priority task finishes its execution.

2. Non-Preemptive Scheduling
In this scheduling method, the CPU is allocated to a specific process. The process that keeps
the CPU busy releases it either by switching context or by terminating. It is the only method
that can be used across various hardware platforms, because it does not need special hardware
(for example, a timer) the way preemptive scheduling does.

When is scheduling preemptive or non-preemptive?
To determine if scheduling is preemptive or non-preemptive, consider these four parameters:

1. A process switches from the running state to the waiting state.
2. A process switches from the running state to the ready state.
3. A process switches from the waiting state to the ready state.
4. A process finishes its execution and terminates.

NOTES:

 If only conditions 1 and 4 apply, the scheduling is called non-preemptive.
 All other scheduling is preemptive.

Important CPU scheduling Terminologies

1. Burst time/execution time: the time required by the process to complete execution. It is
also called running time.
2. Arrival time: when a process enters the ready state
3. Finish time: when a process completes and exits the system
4. Multiprogramming: the number of programs that can be present in memory at the same
time
5. Job: a type of program that runs without any user interaction
6. User: a kind of program that involves user interaction
7. Process: the term used to refer to both jobs and user programs
8. CPU/I/O burst cycle: characterizes process execution, which alternates between CPU and
I/O activity. CPU times are usually shorter than I/O times.

CPU Scheduling Criteria


A CPU scheduling algorithm tries to maximize and minimize the following:

 Maximize:
o CPU utilization: the operating system needs to keep the CPU as busy as
possible. Utilization can range from 0 to 100 percent. For an RTOS, however, it can
range from 40 percent for a low-level system to 90 percent for a high-level
system.
o Throughput: the number of processes that finish their execution per unit time is
known as throughput. So, when the CPU is busy executing processes, work is being
done, and the work completed per unit time is called throughput.
 Minimize:
o Waiting time: the amount of time a specific process needs to wait in the
ready queue.
o Response time: the amount of time from when a request is submitted until
the first response is produced.
o Turnaround time: the amount of time taken to execute a specific
process. It is the total time spent waiting to get into memory,
waiting in the ready queue, and executing on the CPU. The period between the time of
process submission and the completion time is the turnaround time.

Interval Timer

 Timer interruption is a method closely related to preemption. When a certain
process gets the CPU allocation, a timer may be set to a specified interval. Both timer
interruption and preemption force a process to return the CPU before its CPU burst is
complete.
 Most multiprogrammed operating systems use some form of timer to prevent a
process from tying up the system forever.

What is Dispatcher?

 It is a module that provides control of the CPU to the process. The Dispatcher should be
fast so that it can run on every context switch. Dispatch latency is the amount of time
needed by the CPU scheduler to stop one process and start another.
 Functions performed by Dispatcher:
o Context Switching
o Switching to user mode
o Moving to the correct location in the newly loaded program.

Types of CPU scheduling Algorithm


There are six main types of process scheduling algorithms:

 First Come First Serve (FCFS)


 Shortest-Job-First (SJF) Scheduling
 Shortest Remaining Time
 Priority Scheduling
 Round Robin Scheduling
 Multilevel Queue Scheduling

TASK 2: In a group, create a program that will simulate and show the Gantt
chart of each CPU scheduling algorithm.

Notes:

 Choose your own programming language.


 One program per CPU scheduling algorithm, or one program covering all 6 CPU scheduling
algorithms (use a selection menu to choose which CPU scheduling algorithm to show)
 Also, display/print the waiting time and turnaround time for each process
 Display the average waiting time
 Display the average turnaround time
 Processes may all arrive at time 0 or may arrive at any time.

 Each process has its burst time.
 Priority might be observed in some cases
 Time Slice might be observed in some cases
 Pre-emptive or Non pre-emptive might be observed in some cases
 You may add other features to make the program better
 Presentation of Task 2 output will be on April 30, 2021.
 Task 2 output will be recorded as your Interim.
o For grading purposes, the Programming, Presentation, and Group Activity
rubrics will be used; see the rubrics below.

Fill out the short documentation below:

 Members of the group with their corresponding roles/contributions in the development of
the activity

<Insert Here, Could be a table for better representation>

 Short description about each CPU scheduling Algorithm

<Insert Here, bulleted format will do>

 Short description about the program

<Insert Here, Paragraph Entry>

 Detailed explanation of each CPU scheduling Algorithm based on the created program

<Insert Here, Paragraph Entry>

<Explain>

 Programming Codes

<Insert Here, Smaller Font Size and Style will do >

 Sample and explanation for each CPU scheduling Algorithm while running the created
program

<Insert here, with screen shot >

LESSON SUMMARY

 A process is a program in execution, and the status of the current activity of a process is
represented by the program counter, as well as other registers.
 The layout of a process in memory is represented by four different sections: (1) text, (2)
data, (3) heap, and (4) stack.
 As a process executes, it changes state. There are four general states of a process: (1)
ready, (2) running, (3) waiting, and (4) terminated.

 A process control block (PCB) is the kernel data structure that represents a process in an
operating system.
 The role of the process scheduler is to select an available process to run on a CPU.
 An operating system performs a context switch when it switches from running one
process to running another
 The fork() and CreateProcess() system calls are used to create processes on UNIX and
Windows systems, respectively.
 When shared memory is used for communication between processes, two (or more)
processes share the same region of memory. POSIX provides an API for shared memory.
 Two processes may communicate by exchanging messages with one another using
message passing. The Mach operating system uses message passing as its primary form
of interprocess communication. Windows provides a form of message passing as well.
 A pipe provides a conduit for two processes to communicate. There are two forms of
pipes, ordinary and named. Ordinary pipes are designed for communication between
processes that have a parent-child relationship.
 Named pipes are more general and allow several processes to communicate.
 UNIX systems provide ordinary pipes through the pipe() system call.
 Ordinary pipes have a read end and a write end. A parent process can, for example, send
data to the pipe using its write end, and the child process can read it from its read end.
Named pipes in UNIX are termed FIFOs.
 Windows systems also provide two forms of pipes—anonymous and named pipes.
Anonymous pipes are similar to UNIX ordinary pipes. They are unidirectional and
employ parent-child relationships between the communicating processes. Named pipes
offer a richer form of interprocess communication than the UNIX counterpart, FIFOs.
 Two common forms of client-server communications are sockets and remote procedure
calls (RPCs). Sockets allow two processes on different machines to communicate over a
network. RPCs abstract the concept of function (procedure) calls in such a way that a
function can be invoked on another process that may reside on a separate computer.
 The Android operating system uses RPCs as a form of interprocess communication using
its binder framework.
 A thread represents a basic unit of CPU utilization, and threads belonging to the same
process share many of the process resources, including code and data.
 There are four primary benefits to multithreaded applications: (1) responsiveness, (2)
resource sharing, (3) economy, and (4) scalability.
 Concurrency exists when multiple threads are making progress, whereas parallelism
exists when multiple threads are making progress simultaneously. On a system with a
single CPU, only concurrency is possible; parallelism requires a multicore system that
provides multiple CPUs.
 There are several challenges in designing multithreaded applications. They include
dividing and balancing the work, dividing the data between the different threads, and
identifying any data dependencies. Finally, multithreaded programs are especially
challenging to test and debug.
 Data parallelism distributes subsets of the same data across different computing cores and
performs the same operation on each core. Task parallelism distributes not data but tasks
across multiple cores. Each task is running a unique operation.

 User applications create user-level threads, which must ultimately be mapped to kernel
threads to execute on a CPU. The many-to-one model maps many user-level threads to
one kernel thread. Other approaches include the one-to-one and many-to-many models.
 A thread library provides an API for creating and managing threads. Three common
thread libraries include Windows, Pthreads, and Java threading. Windows is for the
Windows system only, while Pthreads is available for POSIX-compatible systems such as
UNIX, Linux, and macOS. Java threads will run on any system that supports a Java
virtual machine.
 Implicit threading involves identifying tasks—not threads—and allowing languages or
API frameworks to create and manage threads. There are several approaches to implicit
threading, including thread pools, fork-join frameworks, and Grand Central Dispatch.
Implicit threading is becoming an increasingly common technique for programmers to
use in developing concurrent and parallel applications.
 Threads may be terminated using either asynchronous or deferred cancellation.
Asynchronous cancellation stops a thread immediately, even if it is in the middle of
performing an update. Deferred cancellation informs a thread that it should terminate but
allows the thread to terminate in an orderly fashion. In most circumstances, deferred
cancellation is preferred to asynchronous termination.
 Unlike many other operating systems, Linux does not distinguish between processes and
threads; instead, it refers to each as a task. The Linux clone() system call can be used to
create tasks that behave either more like processes or more like threads. CPU scheduling
is the task of selecting a waiting process from the ready queue and allocating the CPU to
it. The CPU is allocated to the selected process by the dispatcher.
 Scheduling algorithms may be either preemptive (where the CPU can be taken away from
a process) or non-preemptive (where a process must voluntarily relinquish control of the
CPU). Almost all modern operating systems are preemptive.
 Scheduling algorithms can be evaluated according to the following five criteria: (1) CPU
utilization, (2) throughput, (3) turnaround time, (4) waiting time, and (5) response time.
 First-come, first-served (FCFS) scheduling is the simplest scheduling algorithm, but it
can cause short processes to wait for very long processes.
 Shortest-job-first (SJF) scheduling is provably optimal, providing the shortest average
waiting time. Implementing SJF scheduling is difficult, however, because predicting the
length of the next CPU burst is difficult.
 Round-robin (RR) scheduling allocates the CPU to each process for a time quantum. If
the process does not relinquish the CPU before its time quantum expires, the process is
preempted, and another process is scheduled to run for a time quantum.
 Priority scheduling assigns each process a priority, and the CPU is allocated to the
process with the highest priority. Processes with the same priority can be scheduled in
FCFS order or using RR scheduling.
 Multilevel queue scheduling partitions processes into several separate queues arranged by
priority, and the scheduler executes the processes in the highest-priority queue. Different
scheduling algorithms may be used in each queue.
 Multilevel feedback queues are similar to multilevel queues, except that a process may
migrate between different queues.

 Multicore processors place one or more CPUs on the same physical chip, and each CPU
may have more than one hardware thread. From the perspective of the operating system,
each hardware thread appears to be a logical CPU.
 Load balancing on multicore systems equalizes loads between CPU cores, although
migrating threads between cores to balance loads may invalidate cache contents and
therefore may increase memory access times.
 Soft real-time scheduling gives priority to real-time tasks over non-real-time tasks. Hard
real-time scheduling provides timing guarantees for real-time tasks.
 Rate-monotonic real-time scheduling schedules periodic tasks using a static priority
policy with preemption.
 Earliest-deadline-first (EDF) scheduling assigns priorities according to the deadline. The
earlier the deadline, the higher the priority; the later the deadline, the lower the priority.
 Proportional share scheduling allocates T shares among all applications. If an application
is allocated N shares of time, it is ensured of having N∕T of the total processor time.
 Linux uses the completely fair scheduler (CFS), which assigns a proportion of CPU
processing time to each task. The proportion is based on the virtual runtime (vruntime)
value associated with each task.
 Windows scheduling uses a preemptive, 32-level priority scheme to determine the order
of thread scheduling.
 Solaris identifies six unique scheduling classes that are mapped to a global priority. CPU-
intensive threads are generally assigned lower priorities (and longer time quanta), and
I/O-bound threads are usually assigned higher priorities (with shorter time quanta).
 Modeling and simulations can be used to evaluate a CPU scheduling algorithm.

Rubrics for Grading, Whole Activity Sheet (To be averaged):

RUBRIC FOR STUDENT ACTIVITY SHEET

RUBRIC FOR ESSAY OR REFLECTION PAPER

Rubric for Programming and Presentation, Interim (to be averaged):
Programming Rubric

Presentation Rubric

Group Activity Rubric

