
Preprint · November 2023 · DOI: 10.13140/RG.2.2.28814.13126



Scheduling algorithm for bidirectional LPT
Ali Hozouri1, Mehdi EffatParvar2, Davoud Yousefi 3, Abbas Mirzaei 4
1- Master's student, Department of Computer Engineering, Islamic Azad University, Ardabil Branch, Ardabil, Iran
Email: a.hozoori@iauardabil.ac.ir

2- Department of Computer Engineering, Islamic Azad University, Ardabil Branch, Ardabil, Iran
Email: me.effatparvar@gmail.com

3- Department of Computer Engineering, Moghadas Ardabili Institute of Higher Education, Ardabil, Iran
Email: d.yousefi.sh@gmail.com

4- Department of Computer Engineering, Islamic Azad University, Ardabil Branch, Ardabil, Iran
Email: a.mirzaei@iaut.ac.ir

Abstract:

All activities and procedures performed by a computer system pass through the processor, which is regarded as the central and most significant component of the machine. For a computer to manage numerous programs at once, the central processing unit must be utilized effectively. Scheduling refers to the process of allocating jobs or processes to the CPU: it decides which processes are assigned to which processors and determines how resources are distributed among them. Its main objectives are to maximize CPU utilization, boost throughput, and reduce waiting and response times. Scheduling algorithms are used to run processes on the processor efficiently and to decide the order in which the operating system's processes should execute; their goal is to maximize system resource usage and boost system performance. The scheduling mechanism divides the processor among the various processes, giving each one a set time to run, and the processes in the waiting queue are executed in the order the algorithm determines. In this article, the bidirectional LPT (improved LPT) scheduling algorithm is devised and introduced. Among its benefits are higher throughput and shorter waiting times than standard LPT.

Keywords: scheduling, scheduling algorithm, LPT, bidirectional LPT, CPU scheduling

1. Introduction

Scheduling is one of an operating system's primary functions. The scheduling techniques used by the central processing unit (CPU) have a significant impact on how well the system performs, since they govern how resources are utilized. Numerous algorithms exist for juggling many jobs on the CPU. Scheduling's main objective is to ensure fairness among the processes in the ready queue while increasing throughput and decreasing undesirable quantities such as waiting time [1-3].

The Process Control Block (PCB) is the designated block where process attributes and states are stored. The operating system uses several CPU and PCB scheduling methods to manage process activity and scheduling [4-6]. In a multi-processing situation, many processes access main memory concurrently, so effective scheduling algorithms are required to control all operations and system performance. Different processor scheduling techniques are appropriate for different situations, including real-time systems, multitasking systems, etc. Scheduling is commonly carried out at three levels [7,8]:

When a user wants to run a process, it joins the group of processes that have been admitted or suspended by the long-term scheduler. The long-term scheduler decides which processes are kept ready in the queue; it thereby controls which tasks may run on the system and continuously manages the degree of multiprogramming. Sometimes the mid-term scheduler moves processes from primary memory to secondary memory, such as a storage device; this is known as swapping, and the affected processes are said to be swapped out or swapped in. The short-term scheduler, often known as the CPU scheduler, determines which of the ready programs is executed on the CPU next; it also adjusts the degree of multiprogramming, that is, how many processes reside in main memory [9].

Scheduling is preemptive when the scheduler can forcibly remove an active process from the CPU, and non-preemptive when it cannot [10].

This article's goals are to clarify the fundamental approaches to CPU scheduling and to present a new approach for improving it. Different criteria must be taken into account when comparing algorithms and choosing the optimal one for the operations and systems at hand. The objectives of scheduling algorithms, CPU scheduling criteria, methods, and varieties of scheduling algorithms, as well as the newly developed method, are covered in the sections that follow [11].

2. Criteria for scheduling algorithms

There are numerous scheduling algorithms, each with unique characteristics, and each should have its effectiveness assessed against several criteria. Many benchmarks have been created to evaluate CPU scheduling techniques; a few of them are given below [12]:

1) CPU utilization: keep the CPU active or busy at all times.
2) Throughput: the volume of work finished in a given period of time.
3) Burst time: the length of time the CPU needs to run a process, i.e., the process's execution time.
4) Completion time: the moment at which a process finishes executing.
5) Turnaround time: the overall period of time a process spends in the system, from arrival in main memory to completion.
6) Waiting time: the total period of time a process waits in the queue before the CPU executes it.
7) Response time: the amount of time it takes a process to deliver its first response.
8) Fairness: making sure that every process gets a fair share of the CPU.
9) Maximum execution time of all tasks (TFT): this scheduling criterion for multi-processor systems measures the time by which all tasks have finished executing (the makespan) [13-17].
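As a concrete illustration, most of the time-based criteria above can be computed directly from a finished schedule. The Python sketch below (the names and toy data are ours, not from the paper) averages them for two jobs run back to back on one CPU:

```python
from dataclasses import dataclass

@dataclass
class Job:
    arrival: int   # moment the job entered the ready queue
    burst: int     # CPU time the job needed
    start: int     # moment the CPU first ran it
    finish: int    # completion time

def metrics(jobs, horizon):
    """Average the time-based criteria over a finished schedule."""
    n = len(jobs)
    return {
        "turnaround": sum(j.finish - j.arrival for j in jobs) / n,  # time in system
        "waiting": sum(j.finish - j.arrival - j.burst for j in jobs) / n,
        "response": sum(j.start - j.arrival for j in jobs) / n,     # time to first run
        "throughput": n / horizon,                                  # jobs per time unit
        "TFT": max(j.finish for j in jobs),                         # makespan
    }

# Two jobs run back to back on one CPU: J0 occupies [0, 3), J1 occupies [3, 8)
jobs = [Job(arrival=0, burst=3, start=0, finish=3),
        Job(arrival=1, burst=5, start=3, finish=8)]
print(metrics(jobs, horizon=8))
```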

3. CPU scheduling techniques


Techniques for scheduling are commonly either preemptive or non-preemptive.

3.1. Exclusive, non-preemptive scheduling: When a process takes over the processor, it keeps it until the process finishes or an I/O activity takes place. In other words, the running process frees up the CPU only by terminating or moving to a waiting state [18].

3.2. Non-exclusive, preemptive scheduling: After giving the processor to a process, the scheduler has the ability to reclaim it against the wishes of the process, in accordance with the scheduling methodology. In non-exclusive scheduling, the scheduler gives the process a temporary CPU allocation, but if an interrupt happens, it stops the process from running. When a high-priority job enters the ready queue, the running process must involuntarily yield the CPU to it, even if the low-priority task is still executing [19].

4. CPU scheduling algorithms


Scheduling describes how executable tasks are chosen when there are many of them. Scheduling methods can be compared using different metrics, including throughput, waiting time, and response time. To put it another way, CPU scheduling is the procedure for determining which task in the queue will receive the CPU allocation [20].

4.1. Run in order of arrival (FCFS): This scheduling algorithm is non-preemptive. It queues processes in "FIFO" fashion: as the name implies, the CPU is allocated to the process that arrives first. Its chief problem is that if the first process entered takes too long, succeeding processes with shorter bursts are forced to wait for a very long period (the convoy effect), so the average wait time lengthens. A process does not leave the CPU until it completes or initiates an I/O operation [21].
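The long-first-job problem can be made concrete with a small sketch (an illustrative toy, not code from the paper): a 24-unit job arriving first forces two 3-unit jobs to queue behind it.

```python
def fcfs(processes):
    """processes: list of (name, arrival, burst); served strictly in arrival order."""
    clock = 0
    waits = {}
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        clock = max(clock, arrival)     # CPU may sit idle until the job arrives
        waits[name] = clock - arrival   # time spent queued before running
        clock += burst                  # non-preemptive: the job runs to completion
    return waits

# One long first job makes every later short job wait behind it
print(fcfs([("P1", 0, 24), ("P2", 1, 3), ("P3", 2, 3)]))  # → {'P1': 0, 'P2': 23, 'P3': 25}
```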

4.2. Shortest task or processing first (SPT or SJF): This scheduling method chooses, among the arrived processes, the one with the shortest burst or execution time. Because waiting and turnaround are minimized, this scheduling approach is superior to the FCFS algorithm; if jobs are logged in concurrently, the average turnaround time and waiting time will be short. This is another example of a non-preemptive algorithm. One issue with the SJF algorithm is the need to know the processing time of each process: in most cases we do not know how long a process will take to complete, so the processing time must be estimated. Finally, this type's greatest weakness is starvation, since long processes may wait indefinitely [22].
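On the same style of toy workload, a non-preemptive SJF can be sketched as follows (our own illustration): whenever the CPU frees up, it picks the shortest burst among the jobs that have already arrived.

```python
import heapq

def sjf(processes):
    """Non-preemptive SJF: among arrived jobs, always run the shortest burst.
    processes: list of (name, arrival, burst)."""
    pending = sorted(processes, key=lambda p: p[1])  # ordered by arrival time
    ready, waits, clock, i = [], {}, 0, 0
    while ready or i < len(pending):
        # move everything that has arrived into the ready heap, keyed by burst
        while i < len(pending) and pending[i][1] <= clock:
            name, arrival, burst = pending[i]
            heapq.heappush(ready, (burst, arrival, name))
            i += 1
        if not ready:                 # CPU idle: jump ahead to the next arrival
            clock = pending[i][1]
            continue
        burst, arrival, name = heapq.heappop(ready)
        waits[name] = clock - arrival
        clock += burst                # non-preemptive: runs to completion
    return waits

print(sjf([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 2)]))  # → {'P1': 0, 'P3': 6, 'P2': 9}
```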

4.3. Longest job or processing first (LJF or LPT): This method is non-preemptive and the opposite of the SJF (or SPT) technique: it gives priority to long processes rather than short ones. The key benefit is that long work is simpler to estimate than short work. In this algorithm the average waiting time and response time are maximized, and it has a starvation problem for small jobs [23].
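Later in the paper LPT is applied to multiple processors, where it is conventionally implemented as list scheduling: sort the jobs by descending length and repeatedly hand the next job to the currently least-loaded processor. A minimal sketch (our illustration, not code from the paper):

```python
import heapq

def lpt(jobs, m):
    """Classic LPT list scheduling of job lengths on m identical processors."""
    loads = [(0, p) for p in range(m)]       # (current load, processor id)
    heapq.heapify(loads)
    assignment = {p: [] for p in range(m)}
    for job in sorted(jobs, reverse=True):   # longest processing time first
        load, p = heapq.heappop(loads)       # least-loaded processor so far
        assignment[p].append(job)
        heapq.heappush(loads, (load + job, p))
    tft = max(load for load, _ in loads)     # makespan, i.e. the TFT criterion
    return assignment, tft

assignment, tft = lpt([2, 3, 4, 6, 2, 2], m=2)
print(tft)  # → 10
```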

4.4. Round Robin (RR): RR is a preemptive CPU scheduling technique that allots a time slice known as the quantum time (QT). The active process runs until its QT expires and then moves to the end of the ready queue. RR is frequently used on real-time and time-sharing platforms because it shares time evenly among activities, maximizes CPU utilization, and offers quick response. The RR technique also has a number of drawbacks, such as low throughput and lengthy turnaround and waiting times. The key parameter of the RR algorithm is the quantum time: a low QT results in numerous context switches, which reduces CPU performance, while a large QT makes RR degenerate toward FCFS and causes response times to lag [24].
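The RR mechanics above can be sketched in a few lines (our illustration; all jobs are assumed ready at time 0):

```python
from collections import deque

def round_robin(processes, quantum):
    """processes: list of (name, burst). Each process runs for at most `quantum`
    time units, then rejoins the back of the queue. Returns completion times."""
    queue = deque(processes)           # ready queue of (name, remaining burst)
    clock, finish = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)  # preempted when the quantum expires
        clock += run
        if remaining > run:
            queue.append((name, remaining - run))  # back of the ready queue
        else:
            finish[name] = clock
    return finish

print(round_robin([("P1", 5), ("P2", 3), ("P3", 1)], quantum=2))  # → {'P3': 5, 'P2': 8, 'P1': 9}
```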

4.5. Priority scheduling, preemptive and non-preemptive: This scheduling method classifies operations by their importance. Any process that adds itself to the ready queue is given a priority number indicating how important it is, and this priority number alone controls which process obtains the CPU allocation, so the highest-priority process is served first. This algorithm comes in both preemptive and non-preemptive variants. In the preemptive variant, low-priority processes may starve if high-priority processes repeatedly enter the ready queue [25].
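A minimal non-preemptive sketch of this idea (ours; we assume the common convention that a lower number means higher priority, which the text does not fix, and break ties by arrival order):

```python
import heapq

def priority_schedule(processes):
    """Non-preemptive priority scheduling with all jobs ready at time 0.
    processes: list of (name, priority, burst); lower priority number runs first.
    Returns the order in which the processes get the CPU."""
    heap = [(prio, i, name) for i, (name, prio, _) in enumerate(processes)]
    heapq.heapify(heap)      # the ready queue, ordered by priority then arrival
    return [name for _, _, name in (heapq.heappop(heap) for _ in range(len(heap)))]

# Hypothetical workload: a system task, an interactive task, a background daemon
jobs = [("editor", 2, 4), ("daemon", 3, 1), ("kernel_task", 1, 2)]
print(priority_schedule(jobs))  # → ['kernel_task', 'editor', 'daemon']
```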

4.6. Multilevel Queuing (MLQ) Algorithm: Processes can be split into distinct classes. For instance, processes can be divided into three categories: system processes, foreground processes, and background processes, prioritized in that order, with the system processes given the highest priority. These classes have different demands and time constraints, so the ready queue is separated into several queues, each of which has its own scheduling mechanism; for instance, Round Robin scheduling may be used for one queue while FCFS scheduling is used for another. High-priority activities are put at the front of the ready queue while lower-priority processes are put at the back, so the lowest-priority processes starve when this tactic is employed. Figure 1 displays the structure of multilevel queue (MLQ) scheduling [26-28].
[Figure 1. Multi-level queue: the ready queue is split into system processes (highest priority), foreground processes, and background processes (lowest priority).]
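The three-level structure of the figure can be sketched as a strict-priority dispatcher (the queue contents and names below are our invention; each level here happens to use FCFS internally, which is only one possible per-queue policy):

```python
from collections import deque

# Three fixed-level queues as in Figure 1: system > foreground > background
queues = {
    "system":     deque(["pager", "scheduler_tick"]),
    "foreground": deque(["browser"]),
    "background": deque(["backup"]),
}

def mlq_next():
    """Pick the next process: always drain higher-priority queues first.
    This strictness is exactly why background jobs can starve."""
    for level in ("system", "foreground", "background"):
        if queues[level]:
            return queues[level].popleft()
    return None   # nothing left to run

order = [mlq_next() for _ in range(4)]
print(order)  # → ['pager', 'scheduler_tick', 'browser', 'backup']
```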

5. Bilateral LPT method:

The LPT method achieves a good TFT, but because the processors are assigned the largest jobs in descending order, waiting time and throughput suffer, which is not desirable. To reduce and minimize these waiting-time and throughput flaws of the LPT algorithm, we present the bilateral LPT approach. In this technique, following Algorithm (1), the jobs already in the queue are first arranged in ascending order; in the subsequent steps they are taken alternately from the start and the end of the queue and placed on the processors in descending order, in groups whose size matches the number of processors. If the processors are empty (at the beginning of the schedule) or have finished their prior work simultaneously, we place the jobs on the processors from left to right in descending order. If the number of tasks remaining in the queue at the end is not a multiple of the number of processors, the remaining jobs are assigned to the processors in decreasing order, just as in the earlier phases. The newly proposed algorithm, bilateral LPT, is as follows [29]:

Algorithm (1): Bilateral LPT.

1. Create a queue (S) and order the available processes in ascending order.

2. Name the current processors (mi) from left to right.

3. Repeat the following steps until the tasks in the queue are exhausted:

1) Take as many jobs from the front of the queue (S) as there are processors (m) [30].
2) Scanning the processors from left to right, assign the jobs removed in step (1), in descending order (LPT), to the idle processor that finished its prior task earliest, until all jobs taken in step (1) have been placed [31].
3) Take as many jobs from the end of the queue (S) as there are processors.
4) Scanning the processors from left to right, assign the jobs removed in step (3), in descending order (largest task first), to the free processor that finished its previous task earliest, until all jobs taken in step (3) have been placed [32,33].

❖ If the processors are empty (at the beginning of the schedule) or finish their prior work simultaneously, the tasks are placed on the processors from left to right in descending order.
❖ If, at the end, fewer tasks remain in the queue than there are processors, the remaining jobs are arranged on the processors in descending order, just as in the previous steps [34,35].
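Read this way, the steps above can be sketched in Python (an illustrative implementation under our reading of Algorithm (1); the variable names and the lowest-index encoding of the left-to-right tie-break are ours):

```python
def bidirectional_lpt(jobs, m):
    """Sketch of Algorithm (1): jobs are burst times, m is the processor count."""
    queue = sorted(jobs)                       # step 1: ascending queue S
    loads = [0] * m                            # finish time of each processor
    assignment = [[] for _ in range(m)]        # jobs placed on each processor
    take_front = True                          # alternate front/end of S
    while queue:
        k = min(m, len(queue))                 # the final batch may be smaller
        if take_front:
            batch, queue = queue[:k], queue[k:]      # steps (1)-(2): front of S
        else:
            batch, queue = queue[-k:], queue[:-k]    # steps (3)-(4): end of S
        # place the batch in descending order on the processor that becomes
        # free earliest; ties broken left to right (lowest index)
        for job in sorted(batch, reverse=True):
            p = min(range(m), key=lambda i: (loads[i], i))
            assignment[p].append(job)
            loads[p] += job
        take_front = not take_front
    return assignment, max(loads)              # the schedule and its TFT

assignment, tft = bidirectional_lpt([5, 8, 3, 9, 2, 6, 4], m=3)
print(tft)  # TFT of this toy schedule
```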

We applied the bilateral LPT approach on systems with 3, 4, and 5 processors to a variety of task sets, including ones whose processing times rise or fall steeply and ones with a gentle, low slope. The findings of our experiments are reported in Table 1.

Algorithm                        LPT            Bidirectional LPT
Number of processors             3    4    5    3               4               5
Maximum execution time
of all tasks (TFT)               -    -    -    <2% increase    <2% increase    <2% increase
Amount of parallelization        -    -    -    <2% reduction   <2% reduction   <2% reduction
Throughput                       -    -    -    >65% increase   >40% increase   >30% increase
Waiting time                     -    -    -    >20% reduction  >20% reduction  >20% reduction

Table 1. Comparison of bilateral LPT and LPT (baseline LPT values are marked "-").

Table 1 shows that throughput and waiting time have greatly improved, while the amount of parallelization and the TFT have worsened by less than 2%, a change that may be disregarded because of its modest value.

6. Conclusion

Processor scheduling assigns tasks to the processor over time in order to achieve system goals, including response time, throughput, and processor efficiency. It is the foundation of multi-processor and multi-program operating systems.
Every processor scheduling technique has advantages and disadvantages of its own; thus, the best algorithm should be chosen based on the requirements and constraints of the system. Processor scheduling methods should be designed so that the execution of processes is delayed as little as possible, numerous processes can be executed concurrently, and system performance is increased.

Scheduling, in general, specifies how resources are distributed among processors and which process is assigned to which CPU. When the system has more tasks than it can handle at once, the operating system chooses which task is done first. Numerous scheduling strategies are employed to shorten the TFT; indeed, the system's objectives are achieved to the extent that we can offer scheduling algorithms with smaller TFT.

The operating system's processor scheduling mechanism is used to choose the order in which different processes run, with the aim of improving system performance and the efficient use of system resources. According to the processor scheduling algorithm, the processor is split among the various tasks, each given a set window of time during which to run, and the processes in the waiting queue are executed in the order the algorithm determines.

The bilateral LPT scheduling method (enhanced LPT) has been designed and introduced in this article. Table 1 shows that in the newly designed algorithm, waiting time and throughput are greatly improved, while the amount of parallelism and the TFT stay about the same; their change of less than 2% can be disregarded because of its modest size. Every processor scheduling technique has pros and cons of its own; thus, the best algorithm should be chosen based on the system's requirements and constraints.

To achieve better throughput and parallelization, and to further reduce waiting time and TFT, future research could combine other scheduling algorithms, such as SPT and LPT, upgrade the SPT algorithm, or remove the restriction on the number of processors in bilateral LPT.
References

[1] Bharathi, S., Mp, C., & Sn, D. (2022). Comprehensive Analysis OF CPU Scheduling Algorithms. Int Res
J Moderniz Eng Technol Sci, 4(9), 180-185.
[2] Bandarupalli, S. B., Nutulapati, N. P., & Varma, P. S. (2012). A novel CPU Scheduling Algorithm, Preemptive & Non-Preemptive. International Journal of Modern Engineering Research (IJMER), 2(6), 4484-4490.
[3] Omar, Hoger K., Kamal H. Jihad, and Shalau F. Hussein. "Comparative analysis of the essential CPU
scheduling algorithms." Bulletin of Electrical Engineering and Informatics 10.5 (2021): 2742-2750.
[4] Ali, Shahad M., et al. "A Review on the CPU Scheduling Algorithms: Comparative Study." International
Journal of Computer Science & Network Security 21.1 (2021): 19-26.
[5] Mirzaei, A. (2021). QoS-aware Resource Allocation for Live Streaming in Edge-Clouds Aided HetNets
Using Stochastic Network Calculus.
[6] Mohammad Zadeh, M., & Mirzaei Somarin, A. (2017). Attack Detection in Mobile Ad Hoc.
[7] Mirzaei, A., & Najafi Souha, A. (2021). Towards optimal configuration in MEC Neural networks: deep
learning-based optimal resource allocation. Wireless Personal Communications, 121, 221-243.
[8] Mirzaei, A., Zandiyan, S., & Ziaeddini, A. (2021). Cooperative virtual connectivity control in uplink small
cell network: towards optimal resource allocation. Wireless Personal Communications, 1-25.
[9] Mirzaei Somarin, A., Barari, M., & Zarrabi, H. (2018). Big data based self-optimization networking in next
generation mobile networks. Wireless Personal Communications, 101, 1499-1518.
[10] Nosrati, M., Hoseini, M., Shirmarz, A., Somarin, A. M., Hoseininia, N., & Barari, M. (2016). Application of MLP and RBF Methods in Prediction of Travelling within the city. Bulletin de la Société Royale des Sciences de Liège, 85, 1392-1396.
[11] Hosseinalipour, A., KeyKhosravi, D., & Somarin, A. M. (2010, April). New hierarchical routing protocol for
WSNs. In 2010 Second International Conference on Computer and Network Technology (pp. 269-272). IEEE.
[12] Li, X., Lan, X., Mirzaei, A., & Bonab, M. J. A. (2022). Reliability and robust resource allocation for Cache-
enabled HetNets: QoS-aware mobile edge computing. Reliability Engineering & System Safety, 220, 108272.
[13] Javid, S., & Mirzaei, A. (2021). Presenting a reliable routing approach in iot healthcare using the
multiobjective-based multiagent approach. Wireless Communications and Mobile Computing, 2021, 1-20.
[14] Mirzaei, A. (2022). A novel approach to QoS‐aware resource allocation in NOMA cellular HetNets using
multi‐layer optimization. Concurrency and Computation: Practice and Experience, 34(21), e7068.
[15] Mirzaei, A., Barari, M., & Zarrabi, H. (2019). Efficient resource management for non-orthogonal multiple
access: A novel approach towards green hetnets. Intelligent Data Analysis, 23(2), 425-447.
[16] Jahandideh, Y., & Mirzaei, A. (2021). Allocating duplicate copies for IoT data in cloud computing based on
harmony search algorithm. IETE Journal of Research, 1-14.
[17] Narimani, Y., Zeinali, E., & Mirzaei, A. (2022). QoS-aware resource allocation and fault tolerant operation
in hybrid SDN using stochastic network calculus. Physical Communication, 53, 101709.
[18] Mirzaei, A., & Rahimi, A. (2019). A Novel Approach for Cluster Self-Optimization Using Big Data Analytics.
Information Systems & Telecommunication, 50.
[19] Duan, H., & Mirzaei, A. (2023). Adaptive Rate Maximization and Hierarchical Resource Management for
Underlay Spectrum Sharing NOMA HetNets with Hybrid Power Supplies. Mobile Networks and Applications, 1-
17.
[20] Mirzaei, A. (2022). Detecting Human Activities Based on Motion Sensors in IOT Using Deep Learning.
Nashriyyah-i Muhandisi-i Barq va Muhandisi-i Kampyutar-i Iran, 92(4), 313.
[21] Rad, K. J., & Mirzaei, A. (2022). Hierarchical capacity management and load balancing for HetNets using
multi-layer optimisation methods. International Journal of Ad Hoc and Ubiquitous Computing, 41(1), 44-57.
[22] Sajed, M., Jahanbakhsh, S., & Mirzaei, A. Diagnosis and Classification of Speech of People via Speech
Processing Methods and Feed Forward Multilayer Perceptron Neural Network.
[23] Esmaeili, M., Samarin, A. M., & Bahrami, M. The evaluation of software architecture styles.
[24] Esmaeili, M., Samarin, A. M., & EffatParvar, M. Predicting Reliability in Design Hybrid Cars.
[25] PARVAR, M. E., SOMARIN, A. M., TAHERNEZHAD, M. R., & ALAEI, Y. (2015). Proposing a new method for
routing improvement in wireless ad hoc networks (optional). Fen Bilimleri Dergisi (CFD), 36(4).
[26] Somarin, A. M., Barari, M., Ashrafi, S., Gudakahriz, S. J., & Tahernezhad, M. R. (2016). The Caspian Sea
Journal. Resource, 10(1 Supplement 4), 478-482.
[27] Mirzaei, A., Barari, M., & Zarrabi, H. (2017). An Optimal Load Balanced Resource Allocation Scheme for
Heterogeneous Wireless Networks based on Big Data Technology. International Journal of Advanced Computer
Science and Applications, 8(11).
[28] Karimzade, S., & Mirzaei, A. A New Approach to Improve Security in Smart Homes, Methods and Challenges.
[29] Nokhostin, P., Mirzaei, A., & Jahanbakhsh, S. Proposed Methods for Establishing Load Balancing in Fog
Computing: A Survey.
[30] Jamalpour, M., Shaddel, M., Somarin, A. M., & Razzaghzadeh, S. An overview of the Internet of Things in the
Health Care and Care of Patients (Algorithm, Challenges, Applications, Benefits).
[31] Mirzaei, A., & Zandiyan, S. (2023). A Novel Approach for Establishing Connectivity in Partitioned Mobile
Sensor Networks Using Beamforming Techniques. arXiv preprint arXiv:2308.04797.
[32] Hozouri, A., Mirzaei, A., RazaghZadeh, S., & Yousefi, D. (2023). An overview of VANET vehicular networks.
arXiv preprint arXiv:2309.06555.
[33] Somarin, A. M., Nosrati, M., Barari, M., & Zarrabi, H. (2016). A new Joint Radio Resource Management
scheme in heterogeneous wireless networks based on handover. Bulletin de la Société Royale des Sciences de
Liège.
[34] Ziaeddini, A., Mohajer, A., Yousefi, D., Mirzaei, A., & Gonglee, S. (2022). An optimized multi-layer
resource management in mobile edge computing networks: a joint computation offloading and caching
solution. arXiv preprint arXiv:2211.15487.
[35] Yousefi, D., Yari, H., Osouli, F., Ebrahimi, M., Esmalifalak, S., Johari, M., ... & Mirzapour, R. Energy
Efficient Computation Offloading and Virtual Connection Control in. learning (DL), 44, 43.
