Assignment 2 W Sol

Reasons for processes to be ejected from the CPU

- I/O request
- The time allotted for the process to execute is up
  o This applies even if the process holds a mutex lock
- Spawning a child process
- Interrupt

1. The scheduler decides which process runs next
2. The dispatcher performs the context switch (unloads/reloads the processes as needed and sets the appropriate program counter)

Kernel threads are what get loaded into the CPU scheduler (they represent the processes)

1. CPU BURST: CPU-intensive cycle, typically just a few milliseconds (very short)
2. IO BURST: I/O-bound cycle, generally longer than a CPU burst (even in calculations there are I/O bursts; per the lecture, moves and stores to registers count as I/O)

The two kinds of bursts always alternate with each other

Starvation: processes are stuck in the ready queue indefinitely

DATA STRUCTURE FOR READY QUEUE

- FIFO Queue: no sense of priority, just a straight line, primitive
- Priority Queue: heap data structure (see the sketch after this list)
- Tree: used in Linux (red-black tree)
- Unordered Linked List
- Skip List: like a linked list but with multiple levels; improves performance for tasks that have priorities
N! schedules are possible for N processes

METRICS/MEASURES TO ASSESS A CPU SCHEDULER

- Turnaround Time: from being loaded until completion
- Response Time: from loading into the ready queue to being picked up by the scheduler
- Deadline: ratio of how well the scheduler finishes processes before their designated deadlines
- Predictability: will a task run in roughly the same amount of time regardless of the number of other processes
- Throughput: number of processes executed per unit time
- CPU Utilization: how often the CPU is doing useful work versus sitting idle
- Fairness: how long a process sits in the ready queue before it starves
- Prioritization: how the scheduler handles priority
- Balancing
- Waiting Time: how long the process spent in the ready queue

1. Throughput
2. Average Completion Time
3. Average Waiting Time

TYPES OF SCHEDULER

1. Cooperative/Nonpreemptive

Tasks will not be forcefully ejected from the CPU; primitive.

Disadvantage: may cause starvation if one process with a long execution time hoards the CPU and its resources.

2. Preemptive

Tasks can be forcefully ejected from the CPU.

ASSUMPTIONS

Multiple tasks

We know how long each task will take to execute

None will be force-ejected

Only 1 CPU

FCFS

SJF
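
An illustrative calculation (not from the notes) comparing FCFS and non-preemptive SJF under the assumptions above; the burst times 24, 3 and 3 ms are made-up values.

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical burst times (ms) for three processes arriving at time 0. */
static int burst[] = {24, 3, 3};
#define N (sizeof burst / sizeof burst[0])

static int cmp(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}

/* Average waiting time when the jobs run in the order given. */
static double avg_wait(const int *order) {
    double wait = 0, elapsed = 0;
    for (size_t i = 0; i < N; i++) {
        wait += elapsed;          /* each job waits for everything before it */
        elapsed += order[i];
    }
    return wait / N;
}

int main(void) {
    int sjf[N];
    for (size_t i = 0; i < N; i++) sjf[i] = burst[i];
    qsort(sjf, N, sizeof sjf[0], cmp);   /* SJF = run the shortest job first */

    printf("FCFS average waiting time: %.2f ms\n", avg_wait(burst)); /* (0+24+27)/3 = 17 */
    printf("SJF  average waiting time: %.2f ms\n", avg_wait(sjf));   /* (0+3+6)/3  = 3  */
    return 0;
}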

SJF+PREEMPTION

PRIORITY SCHEDULING

- Priority takes precedence (even over burst length)
- 40 levels of priority for user tasks (Linux)
- 100 levels of priority for kernel tasks (Linux)
- How to implement priority in the ready queue?
  o Multilevel priority
    - Promote tasks that have been waiting too long in the ready queue
    - But there is no demotion system for a task that takes too long to execute
- Priority Aging (see the sketch after this list)
  o Prevents starvation
  o Increases the priority of tasks that have been waiting a long time in the ready queue
- Priority Inversion
  o If you hold a lock (mutex) you get a bonus/boost in priority, and when you release the lock you lose that boost
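
A minimal sketch of priority aging (my own illustration; the threshold and field names are made up):

#include <stddef.h>

/* Hypothetical ready-queue entry. */
struct task {
    int priority;       /* lower value = higher priority */
    int wait_ticks;     /* how long it has sat in the ready queue */
};

#define AGING_THRESHOLD 100
#define MIN_PRIORITY    0

/* Aging pass, run periodically: any task that has waited longer than
 * AGING_THRESHOLD ticks gets promoted one level, so it cannot starve
 * behind a stream of higher-priority arrivals. */
void age_ready_queue(struct task *rq, size_t n) {
    for (size_t i = 0; i < n; i++) {
        rq[i].wait_ticks++;
        if (rq[i].wait_ticks > AGING_THRESHOLD && rq[i].priority > MIN_PRIORITY) {
            rq[i].priority--;      /* promote one level */
            rq[i].wait_ticks = 0;  /* restart the aging clock */
        }
    }
}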

Round Robin (RR)

- FCFS with a timeslice (the task is ejected when its timeslice expires)
  o RR with an infinite timeslice is FCFS
- Advantage: I/O yielding improves performance
- RULE OF THUMB FOR THE TIMESLICE
  o Timeslice >> time of a context switch, so that the majority of CPU time goes to the processes rather than to context switching
  o I/O-intensive tasks get higher priority; CPU-intensive tasks get lower priority
    How to determine whether a task is I/O- or CPU-intensive? (see the sketch below)
    If it yields because of I/O: keep it at its current priority or promote it one level
    If it uses up its whole timeslice and is not done yet: lower its priority
    If a low-priority task yields: boost or keep its priority
MULTI-LEVEL FEEDBACK QUEUE (MLFQ)

- Considered the best scheduler (because you can mix and match policies), but it is almost impossible to implement ideally because of the many variables that have to be considered

MULTIPROCESSOR SCHEDULING

- Contention for the Ready Queue Lock
  o Each processor has its own ready queue and each ready queue has its own lock; processors may still contend and fight over a lock, which slows down performance
- Cache Coherence Overhead
  o What if an instruction is reloaded on a different processor? The cache that held that instruction slows down the execution time, and the instruction gets a different cache assignment, which nullifies the benefit of the cache
- Limited Cache Reuse

O(1) SCHEDULER

- Used before kernel version 2.6.23
- Easy to add a task (+O(1) time per task)
- Preemptive and priority based
- Priorities 0-99: OS processes (real-time tasks)
  Priorities 100-139: user-level tasks (40 levels)
  Default: 120
  Nice value: a mapping value, the "level of kindness" of a user priority; nice -20 maps to priority 100, nice +19 maps to priority 139
- Timeslice value
  Higher priority -> smaller timeslice
- Feedback system
  Longer sleep/idle time -> boost priority by -5
  Shorter sleep time -> demote priority by +5
  PROBLEM: locks, and what if the process is just idle, does nothing, and you keep promoting its priority?

COMPLETELY FAIR SCHEDULER (CFS)

- Uses a red-black tree, ordered by each task's virtual runtime; the leftmost (least-run) task is picked next

LECTURE 8

MUTEX COMPLICATIONS

- The process that holds the lock may terminate
- Deadlock

HOW TO PREVENT OTHER PROCESSES FROM INTERRUPTING A PROCESS THAT IS EXECUTING IN ITS CRITICAL
SECTION?

1. Change the scheduler to nonpreemptive
2. Create a locking mechanism where you disable interrupts
3. Peterson's solution (a sketch follows below)
   There are only 2 threads
   Flag array: tells the system (and the other users of the shared resource) who is interested in using the resource (1 if interested, 0 if not or if done)
   Turn: whose turn it is to use the shared resource

   Limitations:
   Impl 1: it is a hassle if 3 or more threads want to use the resource; there is complexity in the while loop and the array of flags
   Impl 2: most of the CPU time is spent waiting, and the locking mechanism does not work because multiple threads may try to acquire the lock at the same time and the check is exposed to interrupts
   Solution: the check-and-set must be implemented as 1 instruction/line of code
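
A minimal sketch of Peterson's solution for two threads (my own illustration; C11 atomics are used so the accesses are not reordered, and the function names are placeholders):

#include <stdatomic.h>
#include <stdbool.h>

/* Peterson's solution for exactly two threads, with ids 0 and 1. */
static atomic_bool flag[2];   /* flag[i] = thread i is interested */
static atomic_int  turn;      /* whose turn it is to wait */

void lock(int self) {
    int other = 1 - self;
    atomic_store(&flag[self], true);   /* I am interested */
    atomic_store(&turn, other);        /* but let the other thread go first */
    /* Spin while the other thread is interested and it is its turn. */
    while (atomic_load(&flag[other]) && atomic_load(&turn) == other)
        ;
}

void unlock(int self) {
    atomic_store(&flag[self], false);  /* done / no longer interested */
}
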
4. Atomic instructions
   They are not interruptible during their execution
   Imp 1: test-and-set (a sketch follows below)
   You try to set the lock and then check its old state; if it was already locked, the code does not proceed (it stays stuck in the while loop)
   Problem: you keep checking the state of the lock, and threads may starve (there is no fairness mechanism for acquiring the lock)
   Useful only if the critical section is short
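
A minimal sketch of a test-and-set spinlock using C11's atomic_flag (names are illustrative):

#include <stdatomic.h>

/* atomic_flag_test_and_set() atomically sets the flag and returns its
 * previous value, so only the thread that sees "was clear" gets the lock. */
static atomic_flag lock = ATOMIC_FLAG_INIT;

void spin_lock(void) {
    /* Keep retrying while the old value was "already set" (someone holds it). */
    while (atomic_flag_test_and_set(&lock))
        ;   /* busy-wait: this is the wasteful part the notes warn about */
}

void spin_unlock(void) {
    atomic_flag_clear(&lock);
}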

Imp 2: compare-and-swap (a sketch follows below)

You compare the lock's actual value to the expected value; only if they match is the new value written into the lock
It is not as wasteful as test-and-set, but it is still prone to busy waiting and still has no concept of queuing
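
A sketch of the same idea built on compare-and-swap, using C11's atomic_compare_exchange_weak (illustrative only):

#include <stdatomic.h>

/* 0 = free, 1 = held. */
static atomic_int locked = 0;

void cas_lock(void) {
    int expected = 0;
    /* Atomically: if locked == 0, set it to 1 and return true.
     * On failure `expected` is overwritten with the current value,
     * so it must be reset to 0 before retrying. */
    while (!atomic_compare_exchange_weak(&locked, &expected, 1))
        expected = 0;   /* still busy-waiting, just like test-and-set */
}

void cas_unlock(void) {
    atomic_store(&locked, 0);
}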

Imp 3: fetch-and-add

Fetches the value of the lock, adds 1 to it, but returns the old value

This may limit how often the lock state has to be checked

V2: ticketing-system analogy (a sketch follows below); it solves the problem that a process may never get a
chance to acquire the lock, but busy waiting is still present
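
A sketch of that ticketing idea (a ticket lock) built on fetch-and-add (illustrative only):

#include <stdatomic.h>

/* Fetch-and-add hands out tickets in order, so every thread eventually
 * gets a turn (FIFO fairness), but waiting threads still spin. */
static atomic_uint next_ticket = 0;   /* ticket dispenser */
static atomic_uint now_serving = 0;   /* whose turn it is */

void ticket_lock(void) {
    /* Atomically grab the next ticket number (returns the old value). */
    unsigned my_ticket = atomic_fetch_add(&next_ticket, 1);
    /* Busy-wait until it is this thread's turn. */
    while (atomic_load(&now_serving) != my_ticket)
        ;
}

void ticket_unlock(void) {
    atomic_fetch_add(&now_serving, 1);   /* serve the next ticket */
}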

Test-and-set with yield (a sketch follows below)

If the lock is still taken and the process does not urgently need it, the process can go back to the
ready queue, which solves the busy waiting. The remaining problem is context switching, so this is
only appropriate for a small number of threads.
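
A sketch of test-and-set with yield, assuming a POSIX sched_yield() (illustrative only):

#include <stdatomic.h>
#include <sched.h>   /* sched_yield(); assumes a POSIX system */

static atomic_flag lk = ATOMIC_FLAG_INIT;

void yield_lock(void) {
    /* Instead of spinning, give the CPU back to the scheduler each time the
     * lock is found taken. Cheaper than pure spinning, but every failed
     * attempt now costs a context switch. */
    while (atomic_flag_test_and_set(&lk))
        sched_yield();
}

void yield_unlock(void) {
    atomic_flag_clear(&lk);
}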

Soln: have an implementation where the process can sleep (so it neither goes back to the ready
queue nor wastes cycles busy waiting) and just invoke a wake mechanism when the process is
needed

Flaw: a thread can fall asleep while it still holds the lock (deadlock)

Soln: setsleep() checks whether it is okay to sleep before actually going to sleep

TIP: don't let a thread fall asleep while it is in its critical section

PRODUCER-CONSUMER

The producer produces into a shared buffer; the consumer eats the data from the shared buffer

Examples: sockets, pipes, FIFOs, printers

Nested locks -> possible deadlock

Mutex implementation: less chance of error

Semaphore implementation: more logical looking

Variables to be locked: the shared buffer and the counter (a sketch follows below)
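
A bounded-buffer sketch (not from the notes) combining semaphores for counting with a mutex protecting the buffer and counters; BUF_SIZE and all names are made up.

#include <pthread.h>
#include <semaphore.h>

#define BUF_SIZE 8

static int buffer[BUF_SIZE];
static int in = 0, out = 0;            /* next slot to fill / to empty */

static sem_t empty_slots;              /* starts at BUF_SIZE */
static sem_t full_slots;               /* starts at 0 */
static pthread_mutex_t buf_lock = PTHREAD_MUTEX_INITIALIZER;

void pc_init(void) {
    sem_init(&empty_slots, 0, BUF_SIZE);
    sem_init(&full_slots, 0, 0);
}

void produce(int item) {
    sem_wait(&empty_slots);            /* wait for a free slot */
    pthread_mutex_lock(&buf_lock);     /* protect the buffer and counters */
    buffer[in] = item;
    in = (in + 1) % BUF_SIZE;
    pthread_mutex_unlock(&buf_lock);
    sem_post(&full_slots);             /* signal: one more item available */
}

int consume(void) {
    sem_wait(&full_slots);             /* wait for an item */
    pthread_mutex_lock(&buf_lock);
    int item = buffer[out];
    out = (out + 1) % BUF_SIZE;
    pthread_mutex_unlock(&buf_lock);
    sem_post(&empty_slots);            /* signal: one more free slot */
    return item;
}

Note that each side waits on its semaphore before taking the mutex; locking the mutex first and then sleeping on a zero semaphore is exactly the kind of nested locking that can deadlock, as noted above.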

READER-WRITER

Shared resource: readers look at the contents, a writer edits the contents of the resource. Readers
can't read while a writer writes.

Example: Handling of regular files
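
A classic readers-preference sketch (my own illustration, not from the notes) using a reader counter, a mutex for the counter, and a binary semaphore for the resource:

#include <pthread.h>
#include <semaphore.h>

static int readers = 0;                /* active readers */
static pthread_mutex_t count_lock = PTHREAD_MUTEX_INITIALIZER;
static sem_t resource;                 /* binary semaphore guarding the resource */

void rw_init(void) { sem_init(&resource, 0, 1); }

void reader_enter(void) {
    pthread_mutex_lock(&count_lock);
    if (++readers == 1)                /* first reader locks out writers */
        sem_wait(&resource);
    pthread_mutex_unlock(&count_lock);
}

void reader_exit(void) {
    pthread_mutex_lock(&count_lock);
    if (--readers == 0)                /* last reader lets writers back in */
        sem_post(&resource);
    pthread_mutex_unlock(&count_lock);
}

void writer_enter(void) { sem_wait(&resource); }
void writer_exit(void)  { sem_post(&resource); }

This readers-preference version can starve writers if readers keep arriving; that is the classic trade-off of the simple solution.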

SEMAPHORE

wait(s) / sem_wait()

Check if the counter is zero or nonzero; if nonzero, decrement the counter; if zero, make the process
sleep/wait
signal(s, TID)

If there are threads waiting, let one thread go; if there are none, increment the counter
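
To make the wait/signal behaviour concrete, a sketch of a counting semaphore built from a pthread mutex and condition variable (illustrative only; real code would normally use POSIX sem_wait()/sem_post() directly):

#include <pthread.h>

struct sem {
    int count;
    pthread_mutex_t m;
    pthread_cond_t  cv;
};

void sem_wait_sketch(struct sem *s) {
    pthread_mutex_lock(&s->m);
    while (s->count == 0)                /* zero: go to sleep until signalled */
        pthread_cond_wait(&s->cv, &s->m);
    s->count--;                          /* nonzero: take one unit */
    pthread_mutex_unlock(&s->m);
}

void sem_signal_sketch(struct sem *s) {
    pthread_mutex_lock(&s->m);
    s->count++;                          /* give one unit back ... */
    pthread_cond_signal(&s->cv);         /* ... and wake one waiter, if any */
    pthread_mutex_unlock(&s->m);
}

A struct sem can be statically initialised as, for example, { 3, PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER } for an initial count of 3.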

HOW TO SCHEDULE THREADS

Threads are scheduled along with the process

A thread lives inside the process

volatile prevents compiler optimization; code that accesses the variable will not be reordered or removed during compilation
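
A small illustration of the volatile point (my own example, not from the notes):

#include <stdbool.h>

/* A flag set from a signal handler or another thread. Without `volatile`
 * the compiler could hoist the load out of the loop and spin forever on a
 * cached value. (volatile is not a substitute for proper atomics between
 * threads; it only stops this compiler-level optimization.) */
volatile bool done = false;

void wait_until_done(void) {
    while (!done)       /* re-read `done` from memory on every iteration */
        ;               /* busy-wait */
}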
