CPU Scheduling


Introduction to CPU Scheduling

CPU scheduling is like the traffic cop of your computer's brain. It's in charge of deciding which tasks waiting in line should get to use the CPU next. This whole process is super important for keeping your computer running smoothly and making sure it's using its resources efficiently.

The main goal here is to make sure no task has to wait around
too long for its turn on the CPU. That waiting time can really
slow things down and make your computer feel sluggish. So,
the CPU scheduler's job is to pick the next task to run based on
stuff like how important it is, how long it's been waiting, and
other rules the system follows.

By doing this well, CPU scheduling helps your computer handle more tasks without getting bogged down. It's like making sure everyone gets their fair share of time on the road, so traffic keeps moving and nobody gets stuck in a jam. Basically, it's all about keeping things running smoothly and making sure your computer stays as fast and efficient as possible.
Types of CPU Scheduling
CPU scheduling algorithms can be classified into three main types: preemptive, non-preemptive, and hybrid.

Preemptive scheduling means that a running process can be interrupted and moved out of the CPU before it has
completed its task. This interruption allows other processes with higher priority or more urgent needs to take over the
CPU's resources. Think of it like someone cutting in line at the grocery store because they have fewer items or an
emergency. The currently executing process may have to yield the CPU temporarily, but it can regain control later to
finish its job.

On the other hand, non-preemptive scheduling doesn't allow a process to be interrupted once it has started running on
the CPU. The process keeps the CPU until it voluntarily relinquishes control, typically after completing its CPU burst or
entering an I/O operation. It's like someone who gets in line at the store and insists on staying until they've checked out,
even if another customer comes with just one item.

Now, hybrid scheduling takes elements from both preemptive and non-preemptive approaches. It allows processes to
be interrupted under certain conditions, but also permits them to run to completion without interruption if needed.
This approach tries to strike a balance between responsiveness and efficiency, accommodating both short-term
priorities and long-term resource utilization.
Each type of scheduling algorithm has its own advantages and disadvantages, and the choice between them depends on
the specific requirements and goals of the operating system and the applications it supports.
Preemptive Scheduling Algorithms
Common preemptive scheduling algorithms include Round Robin, Shortest Remaining Time First (SRTF), and Priority Scheduling.

Round Robin (RR) is like giving every process a fair shot at the CPU. It divides time into small chunks called time slices (or quanta), and each process runs for one slice before being put back at the end of the line. It's like everyone getting a turn on the swing at the playground: no one gets to hog it for too long.

Shortest Remaining Time First (SRTF), on the other hand, is all about being super efficient. It looks at how much time each process has left to run and picks the one with the shortest remaining time. This way, processes that are almost done get to finish up quickly, which can help reduce overall waiting time and keep things moving smoothly. It's like picking the shortest line at the grocery store because you know you'll be out of there in no time.
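To make the idea concrete, here is a minimal SRTF simulation in Python. It is only a sketch of the policy, not how a real kernel works; the job names, arrival times, and burst lengths are made-up illustrations.

```python
# Toy SRTF simulation: step one time unit at a time and always run the
# arrived job with the least remaining CPU time (preemption happens
# naturally, because the choice is re-made every tick).

def srtf(jobs):
    # jobs: list of (name, arrival_time, burst_time)
    remaining = {name: burst for name, _, burst in jobs}
    arrival = {name: arr for name, arr, _ in jobs}
    finish = {}
    time = 0
    while remaining:
        # Jobs that have arrived and still need CPU time.
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:
            time += 1  # CPU idles until the next arrival
            continue
        # Preemptive choice: shortest remaining time wins this tick.
        current = min(ready, key=lambda n: remaining[n])
        remaining[current] -= 1
        time += 1
        if remaining[current] == 0:
            finish[current] = time
            del remaining[current]
    return finish

finish = srtf([("A", 0, 7), ("B", 2, 4), ("C", 4, 1)])
# C arrives last but is shortest, so it finishes first at t=5;
# long-running A is repeatedly preempted and finishes at t=12.
```

Notice how job C jumps the queue the moment it arrives, exactly the "almost done gets to finish" behavior described above.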
Non-Preemptive Scheduling Algorithms
Alright, so non-preemptive scheduling is when you let
a process run without any interruptions once it's
started.

First Come First Serve (FCFS) is pretty straightforward. It's like saying whoever shows up first gets to go first. So, if you've been waiting the longest in line, you're the lucky one who gets to use the CPU next.

Shortest Job First (SJF) is all about being efficient. It looks at how long each task needs to run and picks the quickest one to do next. This way, you're getting the shortest tasks out of the way first, which should help speed things up overall. It's like tackling the easiest chores first so you can breeze through your to-do list.
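A non-preemptive policy like FCFS is short enough to sketch in a few lines. This toy version assumes every job is ready at time 0; the job names and burst lengths are invented for illustration.

```python
# Toy FCFS: jobs run to completion in arrival order, no preemption.

def fcfs(jobs):
    # jobs: list of (name, burst_time), listed in arrival order
    time = 0
    schedule = []  # (name, start_time, finish_time)
    for name, burst in jobs:
        schedule.append((name, time, time + burst))
        time += burst  # the CPU is held until the whole burst completes
    return schedule

schedule = fcfs([("A", 5), ("B", 2), ("C", 1)])
# A long first job makes everyone behind it wait: B starts at t=5
# and the one-unit job C cannot start until t=7 (the "convoy" effect).
```

The example also shows FCFS's main weakness: a long job at the front of the line delays every short job behind it.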
Priority Scheduling Algorithm
Priority Scheduling assigns a priority value to each
process, determining the order in which they access
CPU resources. The process with the highest
priority executes first, ensuring critical tasks are
addressed promptly. This approach can be
implemented as either preemptive, allowing high-
priority tasks to interrupt lower-priority ones, or non-
preemptive, where the CPU is not taken away from
a running task.

However, there's a risk with this method. Low-priority processes may suffer from starvation, a situation where they're continuously bypassed by higher-priority tasks and thus never get to execute. It's akin to a low-priority process being indefinitely stuck in a queue behind more urgent ones. So, while Priority Scheduling is effective for prioritizing important tasks, it necessitates careful management to prevent lower-priority tasks from being neglected.
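One common form of the "careful management" mentioned above is aging: the longer a job waits, the more its effective priority improves. The sketch below is a non-preemptive toy with a made-up aging step; real systems tune this very differently.

```python
# Toy non-preemptive priority scheduler with simple aging.
# Lower number = higher priority. Each time a waiting job is passed
# over, its effective priority improves by aging_step, so a
# low-priority job cannot be bypassed forever.

def priority_schedule(jobs, aging_step=1):
    # jobs: list of (name, priority), all ready at time 0
    pending = {name: prio for name, prio in jobs}
    order = []
    while pending:
        # Run the job with the best (numerically lowest) priority.
        chosen = min(pending, key=lambda n: pending[n])
        order.append(chosen)
        del pending[chosen]
        # Aging: everyone who waited becomes slightly more urgent.
        for name in pending:
            pending[name] -= aging_step
    return order

order = priority_schedule([("A", 3), ("B", 1), ("C", 2)])
# Highest priority (B) runs first, then C, then A.
```

Without aging, a steady stream of high-priority arrivals could starve job A indefinitely; with aging, A's effective priority keeps improving until it must be chosen.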
Round Robin Scheduling Algorithm

Round Robin is like a time-sharing system for processes, where each one gets a fair shot at the CPU. Imagine a carousel where processes take turns riding. They're arranged in a circle, and each process gets a fixed amount of time on the CPU before it's sent to the back of the line. This way, no process gets to hog the CPU for too long, promoting fairness.

Now, while Round Robin is simple to set up and understand, it's not without its drawbacks. Because every process gets the same time slice, those with long CPU bursts might have to wait a while before they get another turn. It's like being in line for a ride that everyone gets to enjoy for the same amount of time: even if you've got a longer ride ahead, you'll have to wait your turn again before you can finish.
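The carousel above maps naturally onto a FIFO queue plus a fixed quantum. This Python sketch assumes all processes are ready at time 0 and ignores context-switch overhead; the burst times and quantum are illustrative.

```python
# Toy Round Robin: a FIFO queue and a fixed time quantum.

from collections import deque

def round_robin(jobs, quantum):
    # jobs: list of (name, burst_time), all ready at time 0
    queue = deque(jobs)
    timeline = []  # order in which processes occupy the CPU
    finish = {}
    time = 0
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)  # run for at most one time slice
        time += run
        timeline.append(name)
        if remaining > run:
            queue.append((name, remaining - run))  # back of the line
        else:
            finish[name] = time
    return timeline, finish

timeline, finish = round_robin([("A", 5), ("B", 3), ("C", 1)], quantum=2)
# CPU order: A, B, C, A, B, A. The short job C is done after one slice,
# while the long job A needs three trips through the queue.
```

The drawback described above is visible in the result: job A, with the longest burst, keeps cycling back and finishes last.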
Shortest Job First (SJF) Scheduling Algorithm

Shortest Job First (SJF) is like picking the shortest line at the store checkout. It looks at all the tasks waiting to be done and chooses the one that will take the least amount of time to finish. This way, jobs get done faster, and people don't have to wait as long.

Now, SJF is great for reducing the average waiting time and the time it takes for tasks to complete, which makes it a top performer in terms of efficiency. But here's the tricky part: figuring out exactly how long each task will take can be tough in real-life situations. It's like trying to estimate how long it'll take to cook a meal: you might have a general idea, but there are always unexpected delays. So, while SJF is fantastic when you can accurately predict task times, it can be less effective if those predictions aren't reliable.
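For jobs that are all ready at once, non-preemptive SJF amounts to sorting by burst time. This sketch assumes the burst times are known exactly, which, as noted above, is the hard part in practice; the numbers here are invented.

```python
# Toy non-preemptive SJF: when the CPU frees up, run the shortest
# ready job to completion. Burst times are assumed known in advance.

def sjf(jobs):
    # jobs: list of (name, burst_time), all ready at time 0
    order = sorted(jobs, key=lambda j: j[1])  # shortest burst first
    time = 0
    waiting = {}
    for name, burst in order:
        waiting[name] = time  # time spent waiting before starting
        time += burst
    return [name for name, _ in order], waiting

order, waiting = sjf([("A", 6), ("B", 2), ("C", 4)])
# Run order B, C, A gives waits of 0, 2, 6 (average 8/3).
# Any other order makes the average wait worse, e.g. A first
# would give waits of 0, 6, 8 (average 14/3).
```

The comment in the example is the whole argument for SJF: putting short jobs first minimizes the average waiting time, which no other ordering of these three jobs can beat.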
Multilevel Queue Scheduling
Multilevel Queue Scheduling divides the ready queue into multiple
queues with different priority levels.

Each queue may have its own scheduling algorithm, allowing for better management of different types of processes.

This approach is commonly used in systems with diverse process requirements. Imagine the ready queue as a line at a theme park, but instead of just one line, there are multiple lines based on the thrill level of the rides. Multilevel Queue Scheduling is like organizing people into different lines depending on how eager they are to ride.

Each line, or queue, has its own rules. For instance, the high-priority
line might have a fast-paced roller coaster, while the low-priority line
has a gentle carousel. Similarly, in computer terms, each queue might
have its own scheduling algorithm. This way, high-priority processes
get quick access to the CPU, while low-priority ones wait their turn
patiently.

This system is handy for managing different types of tasks efficiently. Just like how a theme park accommodates visitors with different preferences, Multilevel Queue Scheduling caters to diverse process requirements in computer systems, ensuring everyone gets a fair shot at CPU time.
Real-Time Scheduling
Think of real-time scheduling as being on a tight schedule, like
catching a train or meeting a deadline. In computer systems,
some tasks have to be done right on time, no excuses.

Hard real-time scheduling is absolutely strict about these deadlines: a missed deadline counts as a system failure, so tasks must complete on time even if that means delaying or interrupting other tasks. It's like making sure you catch that train, even if you have to drop everything else.

Soft real-time scheduling is a bit more forgiving. It still aims to get things done on time, but it allows for some flexibility. So, if there's a little delay, it's not the end of the world. It's like aiming to finish your work by 5 pm, but if you're a little late, it's okay as long as you get it done soon after.

These real-time scheduling methods are crucial for systems where timing is everything, ensuring that tasks are completed reliably and on schedule.
Comparison of Scheduling Algorithms
Scheduling algorithms can be evaluated based on criteria such as CPU utilization, throughput, waiting time, response time, and fairness.

No single scheduling algorithm is optimal for all scenarios, and the choice of algorithm depends on the system's requirements.

It is essential to consider the trade-offs between different criteria when selecting a scheduling algorithm for a particular system.
When it comes to picking the best scheduling algorithm, it's like choosing the right tool for the job. There are a bunch of factors to consider:

First up, CPU utilization. You want to make sure the CPU stays busy, doing as much work as possible.

Then there's throughput. This is all about how many tasks the system can handle in a given time. The higher, the better.

Next, waiting time and response time. Waiting time is how long a task waits in line before getting CPU time, while response time is how
long it takes for a task to start running once it's submitted. Lower is generally better for both.

And of course, fairness. You want all tasks to get a fair share of CPU time, so no one feels left out.

Now, here's the kicker: no single scheduling algorithm nails all these factors perfectly. It's all about trade-offs. You might pick an algorithm
that maximizes CPU utilization but sacrifices fairness a bit, or one that minimizes waiting time but doesn't handle high-throughput
situations as well.

So, when choosing a scheduling algorithm, you've got to weigh up these trade-offs and pick the one that best fits your system's needs. It's
like finding the right balance between speed, efficiency, and fairness to keep your system running smoothly.
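The criteria above can all be computed from a finished schedule. This sketch assumes a non-preemptive run, where each process has a single (arrival, start, finish) record; the timing numbers are invented for illustration.

```python
# Compute the evaluation metrics discussed above for one finished
# (non-preemptive) schedule: {name: (arrival, start, finish)}.

def metrics(schedule):
    n = len(schedule)
    # Waiting time: from arrival until the process first gets the CPU.
    waiting = {k: s - a for k, (a, s, f) in schedule.items()}
    # Turnaround time: from arrival until the process completes.
    turnaround = {k: f - a for k, (a, s, f) in schedule.items()}
    makespan = max(f for _, _, f in schedule.values())
    return {
        "avg_waiting": sum(waiting.values()) / n,
        "avg_turnaround": sum(turnaround.values()) / n,
        "throughput": n / makespan,  # completed jobs per unit of time
    }

m = metrics({"A": (0, 0, 5), "B": (1, 5, 8), "C": (2, 8, 9)})
# Three jobs finish by t=9, so throughput is 3/9 of a job per time unit;
# B and C spend most of their turnaround time just waiting behind A.
```

Running two candidate schedules for the same job set through a function like this is the simplest way to see the trade-offs concretely: one schedule may win on average waiting time while the other wins on fairness.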
Conclusion

CPU scheduling is pivotal for optimizing system performance by efficiently allocating CPU resources to processes. Various scheduling algorithms cater to different system priorities, such as fairness, throughput, or response time. It's essential for system designers to understand the characteristics and trade-offs of these algorithms to develop efficient and responsive systems. By selecting the most suitable scheduling approach, system performance can be finely tuned to meet specific requirements, ensuring optimal resource utilization and user satisfaction.
Thank You
Name: Achintya Tripathi
Roll no: 202210101110044
Subject: Operating system
Group: CS 42
