
Real-Time System
This PDF is designed only for B.Tech students of all engineering colleges affiliated with Dr APJ Abdul Kalam Technical University.
It is meant to help at exam time with a quick revision in a short time.

Compiled by
Sanjeev Yadav

Edu Desire
Computer & Technology

The More You Practice, The Better You Get.

Follow me

YouTube Facebook Instagram Telegram


DETAILED SYLLABUS

Unit Topic
1 Introduction
Definition, Typical Real-Time Applications: Digital Control, High-Level
Controls, Signal Processing etc., Release Times, Deadlines, and Timing
Constraints, Hard Real-Time Systems and Soft Real-Time Systems,
Reference Models for Real-Time Systems: Processors and Resources,
Temporal Parameters of Real-Time Workload, Periodic Task Model,
Precedence Constraints and Data Dependency.

2 Real-Time Scheduling
Common Approaches to Real-Time Scheduling: Clock Driven Approach, Weighted Round Robin Approach, Priority Driven Approach, Dynamic Versus Static Systems, Optimality of Effective-Deadline-First (EDF) and Least-Slack-Time-First (LST) Algorithms, Rate Monotonic Algorithm, Offline Versus Online Scheduling, Scheduling Aperiodic and Sporadic Jobs in Priority Driven and Clock Driven Systems.
3 Resources Sharing
Effect of Resource Contention and Resource Access Control (RAC), Non-preemptive Critical Sections, Basic Priority-Inheritance and Priority-Ceiling Protocols, Stack Based Priority-Ceiling Protocol, Use of Priority-Ceiling Protocol in Dynamic Priority Systems, Preemption Ceiling Protocol, Access Control in Multiple-Unit Resources, Controlling Concurrent Accesses to Data Objects.

4 Real-Time Communication
Basic Concepts in Real-Time Communication, Soft and Hard RT Communication Systems, Model of Real-Time Communication, Priority-Based Service and Weighted Round-Robin Service Disciplines for Switched Networks, Medium Access Control Protocols for Broadcast Networks, Internet and Resource Reservation Protocols.

5 Real-Time Operating Systems and Databases


Features of RTOS, Time Services, UNIX as RTOS, POSIX Issues, Characteristics of Temporal Data, Temporal Consistency, Concurrency Control, Overview of Commercial Real-Time Databases.

Unit-1
Introduction to Real-Time Systems

Definition:
● A real-time system is a type of computer system that is designed to
respond to input, process it, and provide output within a specified
time constraint.
● In other words, it is a system that must react to events and produce
results in real-time or near-real-time.
● These systems are often used in mission-critical applications, such as aerospace, defence, and medical equipment, where a delay or failure in processing could have severe consequences.


Real-time systems can be classified into two types: hard real-time systems and soft real-time systems.

1. Hard real-time systems have strict timing constraints that must be met, and a failure to meet these constraints can lead to catastrophic consequences.

2. Soft real-time systems have timing constraints, but they are not as strict, and a failure to meet them may not have severe consequences.

Examples: Flight control systems, air traffic control systems, industrial control systems, and multimedia systems.

Application of Real-Time System:
Digital Control:
● Digital control is one of the most important applications of
real-time systems.
● It involves using computers to monitor and control physical
processes such as industrial machinery, power grids, and traffic
systems. Some typical examples of digital control systems are

1. Industrial control systems: These systems use real-time data to


monitor and control manufacturing processes, such as temperature,
pressure, and flow rate, to ensure efficient and safe operation.

2. Power systems: Real-time systems are used to monitor and control power grids, ensuring that electricity is delivered to homes and businesses in a reliable and efficient manner.

3. Traffic control systems: Real-time systems are used to monitor traffic flow and adjust traffic signals to optimize traffic flow and reduce congestion.

4. Robotics: Real-time systems are used to control the movement of robots, enabling them to perform precise tasks in manufacturing and other industries.

5. Building automation: Real-time systems are used to monitor and control heating, ventilation, and air conditioning systems in buildings, ensuring a comfortable and safe environment for occupants.

High-Level Control:
● Real-time systems are commonly used in high-level control
applications where a timely and accurate response to events is
critical. Here are some examples of high-level control applications
that utilize real-time systems:

1. Power grid control: Real-time systems are used in power grid control
for monitoring and controlling the flow of electricity in the grid, ensuring
that demand is met and that there is no disruption to the power supply.

2. Traffic management
3. Building automation

4. Robotics
5. Process control: Real-time systems are used in process control for
controlling manufacturing processes, chemical processes, and other
industrial processes, ensuring that they are carried out efficiently and
safely.

6. Environmental monitoring

Signal Processing Control:


● Real-time systems are widely used in signal processing and control
applications, where the processing and control of signals or data
must be done in real-time or near-real-time.

● Here are some examples of typical real-time applications in signal processing and control:

1. Digital Signal Processing (DSP): Real-time systems are used in DSP applications for audio and video processing, image and speech recognition, and data compression.

2. Robotics and automation: Real-time systems are used in robotics and automation for controlling and coordinating multiple sensors and actuators to perform specific tasks.

3. Power systems: Real-time systems are used in power systems for monitoring and controlling power grids, generators, and transformers.

4. Control systems: Real-time systems are used in control systems for feedback control of industrial processes, motion control, and temperature control.

5. Instrumentation: Real-time systems are used in instrumentation for real-time data acquisition and analysis, such as in medical devices, scientific research equipment, and testing and measurement devices.

Release Time: In a real-time system, the release time of a job is the instant at which it becomes available for execution. Release times are critical in real-time systems as they help ensure that tasks are executed within their specified timing constraints.

Deadlines: In real-time systems, deadlines are used to define the time
constraints for processing and response times. A deadline is a specific
point in time by which a real-time task must be completed to achieve its
desired functionality. Deadlines can be classified as hard or soft
deadlines.

Hard deadlines: Hard deadlines are critical and must be met, as a missed deadline can lead to catastrophic consequences. For example, in a flight control system, a missed deadline could lead to a plane crash.

Soft deadlines: Soft deadlines are less critical, and a missed deadline may not have significant consequences. For example, in an audio and video processing system, a missed deadline may result in a momentary glitch in the output, which may be acceptable.
Difference between Hard and Soft Real-Time Systems:

Hard: Hard real-time systems have a strict time limit (deadline) for every task; it is essential to meet those deadlines, otherwise the system is considered to have failed.
Soft: In a soft real-time system there is no mandatory requirement to complete every task by its deadline; it is good if the process finishes within the given timing requirement, otherwise the operation is only degraded.

Hard: The size of a data file is small or medium.
Soft: The data size is large.

Hard: It has better utility.
Soft: It has less utility.

Hard: The response time is in milliseconds.
Soft: The response time is higher.

Hard: It includes short databases.
Soft: It includes large databases.

Hard: Safety is essential.
Soft: Safety is not essential.

Hard: It has short-term data integrity.
Soft: It has long-term data integrity.

Hard: These systems are not flexible and commonly require every deadline to be met.
Soft: These systems are more flexible than hard real-time systems and can manage if a deadline is missed.

Hard: The users of hard real-time systems acquire validation when required.
Soft: The users of soft real-time systems do not acquire validation.

Hard: Autopilot systems, aeroplane sensors, spacecraft, etc. are some examples.
Soft: Personal computers, audio and video systems, etc. are some examples.

Reference Models: Reference models for real-time systems are important because they provide a framework for designing, analyzing, and implementing real-time systems.

Here are some reference models for real-time systems:


(Figure: Basic Model of Real-Time System)

Processor model:
● The processor model specifies the characteristics of the processor
used in the real-time system, such as its clock speed, memory size,
and instruction set architecture.
● The processor model is critical because it determines the system's
processing power and affects the system's ability to meet its
real-time requirements.
Resource model:
● The resource model specifies the resources used by the real-time
system, such as memory, I/O devices, and network interfaces.

● The resource model is essential because it determines the system's
ability to handle and process data and affects the system's ability
to meet its real-time requirements.

Task model:
● The task model specifies the tasks to be performed by the real-time
system, such as data acquisition, processing, and communication.
● The task model is critical because it determines the system's
workload and affects the system's ability to meet its real-time
requirements.

Communication model:

● The communication model specifies the communication protocols and mechanisms used by the real-time system, such as message passing, shared memory, and sockets.
● The communication model is essential because it affects the system's ability to handle and process data and affects the system's ability to meet its real-time requirements.

Temporal Parameters of Real-Time Workload:


● In a real-time system, the workload or tasks to be executed have specific temporal parameters that must be met.
● The temporal parameters of real-time workload must be carefully
designed and analyzed to ensure that the system meets its
real-time requirements and operates efficiently and effectively.
These parameters include the following:

Deadline: A deadline is a time by which a task must be completed.


Meeting the deadline is critical in real-time systems, and failure to meet a
deadline can result in catastrophic consequences.

Response time: Response time is the time it takes for a task to be completed after it is received by the system. The response time must be short enough to meet the system's real-time requirements.

Period: The period is the time interval between the start of two
consecutive instances of a periodic task. The period is critical because it
determines the system's workload and the timing of the tasks.

Execution time: Execution time is the time it takes to execute a task. The
execution time must be short enough to meet the system's real-time
requirements and ensure that the task completes before its deadline.

Jitter: Jitter is the variation in the response time or execution time of a task. Jitter can be caused by various factors, such as scheduling policies, processor speed variations, and network congestion.

Periodic Task Model:


● The periodic task model is a common task model used in real-time
systems.
● It is used to model tasks that repeat at regular intervals or periods.

● The periodic task model specifies the following parameters:

Period: The period is the time interval between the start of two consecutive instances of a periodic task.

Deadline: The deadline is the time by which a periodic task must be completed.

Worst-case execution time (WCET): The WCET is the maximum time it takes to execute a periodic task.

Average-case execution time (ACET): The ACET is the average time it takes to execute a periodic task.

Priority: The priority specifies the relative importance of a periodic task compared to other tasks in the system.

Release time: The release time is the time when a periodic task is first
available for execution.

For example: Consider a task Ti with period = 5 and execution time = 3. The phase is not given, so assume the release time of the first job is zero. The first job of this task is released at t = 0 and executes for 3 time units; the next job is released at t = 5 and executes for 3 time units; the next job is released at t = 10, and so on. In general, jobs are released at t = 5k, where k = 0, 1, 2, ...

The hyperperiod of a set of periodic tasks is the least common multiple (LCM) of the periods of all the tasks in that set. For example, two tasks T1 and T2 having periods 4 and 5 respectively will have a hyperperiod H = lcm(p1, p2) = lcm(4, 5) = 20. The hyperperiod is the time after which the pattern of job release times starts to repeat.
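To make this concrete, here is a minimal Python sketch (using the task periods from the example above; the task names are just labels) that computes the hyperperiod and the job release times inside it:

```python
from math import lcm  # Python 3.9+

# Periods of the example tasks T1 and T2
periods = {"T1": 4, "T2": 5}

# Hyperperiod = LCM of all task periods
hyperperiod = lcm(*periods.values())
print("Hyperperiod H =", hyperperiod)  # 20

# Job release times of each task inside one hyperperiod (phase assumed to be 0)
for task, p in periods.items():
    releases = list(range(0, hyperperiod, p))
    print(task, "releases at", releases)
```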

Precedence Constraints:
● Precedence constraints in a real-time system are used to specify the order in which tasks must be executed to meet the system's timing requirements.
● Precedence constraints are typically represented as directed
acyclic graphs (DAGs), where nodes represent tasks, and edges
represent the order in which tasks must be executed.
Precedence constraints can be classified as hard or soft constraints.

Hard constraints: Hard precedence constraints are constraints that must be met for the system to operate correctly. For example, if task A produces data that is needed as input to task B, then task A must be completed before task B can be started.

Soft constraints: Soft precedence constraints are constraints that can be relaxed if necessary. For example, if task C can start as soon as task A is completed, but it is not critical that task C starts immediately after task A is completed, then the precedence constraint between task A and task C can be relaxed.

For example: Consider task T having 5 jobs J1, J2, J3, J4, and J5, such that
J2 and J5 cannot begin their execution until J1 completes and there are no
other constraints. The precedence constraints for this example are:

J1 < J2 and J1 < J5

Consider another example where a precedence graph is given, and
you have to find precedence constraints.
(Figure: precedence graph)

From the above graph, we derive the following precedence constraints:


J1 < J2
J2 < J3
J2 < J4
J3 < J4
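As an illustration, here is a small Python sketch, assuming the four jobs and the precedence constraints listed above, that uses a topological sort to produce one execution order that respects every constraint:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Precedence constraints from the graph above, written as job -> predecessors
# (J1 < J2, J2 < J3, J2 < J4, J3 < J4)
graph = {
    "J2": {"J1"},          # J2 cannot start until J1 completes
    "J3": {"J2"},          # J3 cannot start until J2 completes
    "J4": {"J2", "J3"},    # J4 cannot start until J2 and J3 complete
}

# static_order() yields an execution order that respects every constraint
order = list(TopologicalSorter(graph).static_order())
print(order)  # e.g. ['J1', 'J2', 'J3', 'J4']
```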

Data Dependency:
● Data dependency refers to the relationship between different tasks
or processes in a real-time system that depends on the availability
of data.
● In a real-time system, data dependency is critical because the
availability of data affects the timing and completion of tasks.

Data dependency can be classified into two types:
1. Control dependency:
● Control dependency refers to the relationship between tasks or
processes where one task or process controls the execution of
another task or process.
● Control dependency can be either sequential or concurrent.
Sequential control dependency means that one task must be
completed before another task can start, while concurrent control
dependency means that two or more tasks must execute
simultaneously.

2. Data dependency:

● Data dependency refers to the relationship between tasks or processes where the output of one task or process is used as the input for another task or process.
● Data dependency can be either read-only or read-write.
● Read-only data dependency means that the output of one task or process is read-only, and it does not affect the input of another task or process.
● Read-write data dependency means that the output of one task or
process is written to memory and used as input for another task or
process.
Example: Consider a real-time system that controls the temperature of a chemical reactor. The system consists of two tasks: a temperature-sensing task and a temperature-control task. The temperature-sensing task measures the temperature of the reactor and sends the temperature value to the temperature-control task. The temperature-control task receives the temperature value and decides whether to increase or decrease the temperature by controlling the heating or cooling system.

Unit-2
Real-Time Scheduling

● Real-Time Scheduling is designed to ensure that tasks are


executed in a predictable and timely manner, with minimal delay or
latency.
● The scheduler assigns priorities to tasks based on their importance
and urgency and allocates system resources accordingly to ensure
that critical tasks are executed on time.
Real-time scheduling can be classified into two categories: hard
real-time scheduling and soft real-time scheduling.

1. Hard real-time scheduling requires that tasks are completed within a specific deadline, and any delay can result in system failure or loss of data.
2. Soft real-time scheduling, on the other hand, allows for some degree of delay but still requires that tasks are completed within a reasonable timeframe.

Common Approaches to Real-Time Scheduling: There are several common approaches to real-time scheduling that are used to ensure that tasks are executed in a timely and efficient manner. These approaches include:


1. Clock-Driven Approach:
● The Clock-Driven approach is a real-time scheduling technique that uses a fixed clock to divide time into equal intervals and assigns tasks to each interval based on their deadline and priority.
● In this approach, tasks are assigned to a fixed time slot, and the
scheduler ensures that each task is executed within its allocated
time slot.
● If a task misses its deadline, it is either rescheduled or dropped,
depending on the criticality of the task.
● The Clock Driven approach is helpful in systems that have a high
degree of predictability and where tasks have fixed execution
times.
● It is commonly used in embedded systems, where tasks are
executed in a deterministic and predictable manner.

● One advantage of the Clock Driven approach is that it provides a
simple and efficient way to allocate system resources, as tasks are
scheduled based on their priority and deadline.
● However, it can be less flexible than other scheduling techniques, as it assumes that all tasks have fixed execution times and cannot adapt to changing system conditions.

Here's an example of how clock-driven scheduling works:

Time        Running task
0 ms        (none)
10 ms       Task A
20 ms       Task B
30 ms       Task C
40 ms       Task A
50 ms       Task B
60 ms       Task C
70 ms       Task A
80 ms       Task B
90 ms       Task C
100 ms      Task A

In this example,
● The system clock is divided into time slots of 10 ms, and three
tasks, A, B, and C, are scheduled to run.
● Task A is assigned the first time slot, task B is assigned the second
time slot, and task C is assigned the third time slot.
● At time 0 ms, no tasks are running.
● At time 10 ms, task A starts running and continues until the end of
its time slot at 20 ms.
● At time 20 ms, task B starts running and continues until the end of
its time slot at 30 ms.

● At time 30 ms, task C starts running and continues until the end of
its time slot at 40 ms.
● The process repeats until all tasks have completed their execution.
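To illustrate the idea, here is a minimal Python sketch of a clock-driven (cyclic) schedule following the 10 ms slots and the fixed A-B-C rotation of the example above; for simplicity the rotation starts at t = 0 instead of leaving the first slot idle, and the slot table is purely illustrative:

```python
# Minimal clock-driven (cyclic executive) sketch.
# The static slot table repeats every 30 ms: A, then B, then C.
SLOT_LENGTH_MS = 10
SLOT_TABLE = ["Task A", "Task B", "Task C"]  # fixed schedule, decided offline

def task_for_time(t_ms: int) -> str:
    """Return the task scheduled in the slot that starts at time t_ms."""
    slot_index = (t_ms // SLOT_LENGTH_MS) % len(SLOT_TABLE)
    return SLOT_TABLE[slot_index]

# Simulate the first 100 ms of the schedule
for t in range(0, 100, SLOT_LENGTH_MS):
    print(f"{t:3d} ms - {t + SLOT_LENGTH_MS:3d} ms: {task_for_time(t)}")
```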

2. Weighted Round Robin (WRR):


● Weighted Round Robin (WRR) is a real-time scheduling algorithm
that is an extension of the Round Robin approach.
● In this approach, tasks are assigned a weight value that determines
the amount of CPU time they receive during each round.
● Tasks with higher weight values are allocated more CPU time than
tasks with lower weight values.

Here is an example of the weighted round-robin approach:

Task A: weight 3, requires 15 ms to complete
Task B: weight 2, requires 10 ms to complete
Task C: weight 1, requires 5 ms to complete
Time quantum: 5 ms
Time slot       Running task
0 - 5 ms        Task A
5 - 10 ms       Task B
10 - 15 ms      Task C
15 - 20 ms      Task A
20 - 25 ms      Task B
25 - 30 ms      Task A
● At time 0 ms, the scheduler starts with Task A since it has the
highest weight.
● Task A is executed for the first time quantum of 5 ms, until time 5
ms.
● Since Task A has not been completed, it is moved to the end of the
queue, and the scheduler switches to Task B, which is executed for
the next time quantum of 5 ms, until time 10 ms.
● Since Task B has not been completed, it is moved to the end of the
queue, and the scheduler switches to Task C, which is executed for
the next time quantum of 5 ms, until 15 ms.
● Since Task C has been completed, it is removed from the schedule
and the scheduler switches back to Task A, which is executed for
the next time quantum of 5 ms, until time 20 ms.
● Since Task A has not been completed, it is moved to the end of the
queue, and the scheduler switches back to Task B, which is
executed for the next time quantum of 5 ms, until time 25 ms.
● Since Task B has been completed, it is removed from the schedule,
and the scheduler switches back to Task A, which is executed for
the final time quantum of 5 ms, until time 30 ms.
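Below is a minimal Python sketch of one common formulation of weighted round robin, in which a task with weight w receives up to w time quanta in each round before the scheduler moves on; the task parameters are taken from the example above:

```python
from collections import deque

# Tasks: name -> [weight, remaining execution time in ms].
# Per-round rule (one common WRR formulation): a task with weight w may use
# up to w quanta in each round before the scheduler moves to the next task.
QUANTUM_MS = 5
tasks = {"Task A": [3, 15], "Task B": [2, 10], "Task C": [1, 5]}

ready = deque(tasks)  # round-robin order
t = 0
while ready:
    name = ready.popleft()
    weight, remaining = tasks[name]
    budget = min(weight * QUANTUM_MS, remaining)  # CPU time for this round
    print(f"{t:3d} - {t + budget:3d} ms: {name}")
    t += budget
    tasks[name][1] = remaining - budget
    if tasks[name][1] > 0:
        ready.append(name)  # not finished: back to the end of the queue
```

Under this rule the whole workload finishes in one round: Task A runs 0 - 15 ms, Task B 15 - 25 ms, and Task C 25 - 30 ms.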

3. Priority-Driven Approach:
● The priority-driven approach is a real-time scheduling algorithm that assigns a priority to each task, based on its importance or urgency.
● The scheduler then schedules tasks based on their priority, with
higher-priority tasks being executed before lower-priority tasks.
Here is an example of the priority-driven approach:

Task A: priority 1 (highest), requires 20 ms to complete
Task B: priority 2, requires 15 ms to complete
Task C: priority 3 (lowest), requires 10 ms to complete

Task       Execution time    Runs during
Task A     20 ms             0 - 20 ms
Task B     15 ms             20 - 35 ms
Task C     10 ms             35 - 45 ms

● At time 0 ms, the scheduler starts with Task A since it has the
highest priority.
● Task A is executed until it completes at 20 ms.
● Since Task A has been completed, the scheduler switches to Task B,
which is the next highest priority task.
● Task B is executed until it completes at 35 ms.
● Finally, the scheduler switches to Task C, which is the
lowest-priority task.

● Task C is executed until it completes at 45 ms.
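Here is a minimal Python sketch of the same schedule, assuming all three tasks are ready at t = 0 and run to completion without preemption (a lower priority number means higher priority, as in the example):

```python
# Non-preemptive priority scheduling sketch for the three tasks above.
tasks = [
    {"name": "Task A", "priority": 1, "exec_ms": 20},
    {"name": "Task B", "priority": 2, "exec_ms": 15},
    {"name": "Task C", "priority": 3, "exec_ms": 10},
]

t = 0
# Run tasks in priority order (1 = highest priority)
for task in sorted(tasks, key=lambda x: x["priority"]):
    start, end = t, t + task["exec_ms"]
    print(f"{task['name']}: runs {start} - {end} ms")
    t = end
```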

Dynamic Vs Static Systems:

● Definition: A dynamic system changes over time in response to external events or inputs; a static system does not change over time and has a fixed configuration.
● Task arrival: In a dynamic system, tasks arrive at runtime; in a static system, all tasks are known in advance.
● Scheduling complexity: High in a dynamic system, due to the need to respond to runtime changes; low in a static system, since the schedule is fixed.
● Resource allocation: In a dynamic system, resources are allocated dynamically based on the current workload or demand; in a static system, resources can be pre-allocated based on known requirements.
● Predictability: A dynamic system is less predictable, due to runtime changes in task arrivals and resource demands; a static system is more predictable, since all requirements are known in advance.
● Flexibility: A dynamic system is more flexible, since it can adapt to changing demands; a static system is less flexible, since it is fixed.
● Performance: A dynamic system's performance may be lower due to runtime overhead and resource-allocation delays; a static system's performance is higher, since it can be optimized for known requirements.
● Examples: Dynamic systems include online reservation systems, traffic management systems, and adaptive control systems; static systems include assembly lines, air traffic control systems, and power grid systems.

The Effective-Deadline-First (EDF) Algorithm:
● The Effective-Deadline-First (EDF) algorithm, more commonly called Earliest-Deadline-First, is a real-time scheduling algorithm that schedules tasks based on their deadlines.
● The task with the earliest deadline is given the highest priority and is scheduled first.
● This algorithm ensures that tasks with the shortest deadlines are executed first, which can help to meet critical timing requirements in real-time systems.

Here's an example of how the EDF algorithm works:

Task A: deadline 20 ms, requires 10 ms to complete
Task B: deadline 5 ms, requires 5 ms to complete
Task C: deadline 35 ms, requires 15 ms to complete

Time slot       Running task
0 - 5 ms        Task B
5 - 15 ms       Task A
15 - 30 ms      Task C
● At time 0 ms, the scheduler starts with Task B since it has the
earliest deadline.
● Task B is executed until it completes at 5 ms.
● Since Task B has been completed, the scheduler switches to Task A, which has the next earliest deadline.
● Task A is executed until it completes at 15 ms.
● Finally, the scheduler switches to Task C, which has the latest deadline. Task C is executed until it completes at 30 ms.
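The following Python sketch reproduces this schedule, assuming all jobs are released at t = 0 and run to completion without preemption; in general EDF is preemptive and re-evaluates the earliest deadline whenever a new job is released:

```python
# Minimal EDF sketch: all jobs ready at t = 0, run to completion (non-preemptive),
# so the scheduler simply picks the job with the earliest deadline first.
jobs = [
    {"name": "Task A", "deadline": 20, "exec_ms": 10},
    {"name": "Task B", "deadline": 5,  "exec_ms": 5},
    {"name": "Task C", "deadline": 35, "exec_ms": 15},
]

t = 0
for job in sorted(jobs, key=lambda j: j["deadline"]):
    start, finish = t, t + job["exec_ms"]
    status = "meets" if finish <= job["deadline"] else "MISSES"
    print(f"{job['name']}: {start} - {finish} ms ({status} deadline {job['deadline']} ms)")
    t = finish
```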

Least-Slack-Time-First (LST) Algorithms:


● Least-Slack-Time-First (LST) is a real-time scheduling algorithm
that is used to optimize the scheduling of tasks based on their
deadlines and processing times.
● LST chooses the task with the least slack time as the next task to be
executed.
● Slack time is the amount of time by which a task's execution can be delayed without missing its deadline. At the current time t, slack = (deadline − t) − remaining execution time.

● For example, if the current time is 80 ms and a task with a deadline of 100 ms still needs 15 ms of execution, its slack time is (100 − 80) − 15 = 5 ms.

The LST algorithm works as follows:


1. Initialize the slack time for all tasks in the system.
2. Select the task with the smallest slack time as the next task to be
executed.
3. Execute the selected task until it completes or until the scheduling decision is re-evaluated (for example, when a new task arrives).
4. Recalculate the slack time for all tasks that have not been
completed.
5. Repeat steps 2-4 until all tasks have been completed.

Remark: If a task's slack time is negative, the task can no longer meet its deadline. In this case, LST would still give the highest priority to the task with the smallest (most negative) slack time, so that it completes as early as possible.
Here's an example of how the LST algorithm works:

Task A: deadline 50 ms, requires 20 ms to complete
Task B: deadline 30 ms, requires 15 ms to complete
Task C: deadline 40 ms, requires 10 ms to complete

Time slot       Running task
0 - 15 ms       Task B
15 - 25 ms      Task C
25 - 45 ms      Task A

● At time 0 ms, all tasks are ready to be executed, so the scheduler selects the task with the smallest slack time. The slack times are A: (50 − 0) − 20 = 30 ms, B: (30 − 0) − 15 = 15 ms, and C: (40 − 0) − 10 = 30 ms, so Task B is selected.
● Task B is executed until it completes at 15 ms.
● At 15 ms, the scheduler recalculates the slack times of the remaining tasks.
● Task A has a slack time of (50 − 15) − 20 = 15 ms, and Task C has a slack time of (40 − 15) − 10 = 15 ms.
● The tie is broken in favour of Task C, which has the earlier deadline, so Task C is executed next.

● Task C is executed until it completes at 25 ms.
● Finally, the scheduler switches to Task A, the only remaining task.
● Task A is executed until it completes at 45 ms.
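Here is a minimal Python sketch of this behaviour, assuming all tasks are ready at t = 0 and that the scheduler recomputes slack = (deadline − t) − remaining only at task-completion points, breaking ties by the earlier deadline:

```python
# Minimal LST sketch using the task parameters above.
tasks = {
    "Task A": {"deadline": 50, "remaining": 20},
    "Task B": {"deadline": 30, "remaining": 15},
    "Task C": {"deadline": 40, "remaining": 10},
}

t = 0
while tasks:
    # Pick the task with the least slack (deadline used as tie-breaker).
    name = min(tasks, key=lambda n: (tasks[n]["deadline"] - t - tasks[n]["remaining"],
                                     tasks[n]["deadline"]))
    start, finish = t, t + tasks[name]["remaining"]
    print(f"{name}: {start} - {finish} ms (deadline {tasks[name]['deadline']} ms)")
    t = finish
    del tasks[name]
```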

Effective-Deadline-First (EDF) vs Least-Slack-Time-First (LST) Algorithms:

● Definition: EDF prioritizes tasks based on their deadlines (earliest deadline first); LST prioritizes tasks based on their slack time, i.e. the time remaining until the deadline minus the remaining execution time.
● Type of scheduling: Both EDF and LST are preemptive.
● Algorithm complexity: Moderate for both.
● Real-time systems: EDF is suitable for hard real-time systems; LST is suitable for soft real-time systems.
● Deadline requirement: In both algorithms, each task must have a specified deadline.
● Selection criteria: EDF gives priority to the task with the earliest deadline; LST gives priority to the task with the least slack time.
● Utilization: EDF is better for high-utilization systems; LST is better for low-utilization systems.

Rate Monotonic Algorithm:


● The Rate Monotonic (RM) algorithm is a real-time scheduling
algorithm that assigns priorities to tasks based on their periods,
with shorter-period tasks having higher priority.
● This algorithm assumes that the tasks are periodic and that their
execution times are constant and known in advance.

Here's an example of how the RM algorithm works:

Task A: period 50 ms, requires 20 ms to complete
Task B: period 30 ms, requires 15 ms to complete
Task C: period 40 ms, requires 10 ms to complete

Time slot       Running task
0 - 15 ms       Task B
15 - 25 ms      Task C
25 - 45 ms      Task A

● At time 0 ms, all tasks are ready to be executed, so the scheduler selects the task with the highest priority, which is Task B.
● Task B is executed until it completes at 15 ms.
● At 15 ms, the scheduler checks which task is ready to be executed next.
● Since Task C has the next shortest period (40 ms), it has the next highest priority and is executed next.
● Task C is executed until it completes at 25 ms.
● At 25 ms, the scheduler checks which task is ready to be executed next.
● Since Task A has the longest period (50 ms), it has the lowest priority and is executed last.
● Task A is executed until it completes at 45 ms.


The scheduler then repeats the cycle, executing the tasks in order of priority.
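A standard schedulability check that goes with RM is the Liu and Layland utilization bound: n periodic tasks are guaranteed to be schedulable by RM if U = Σ(Ci/Pi) ≤ n(2^(1/n) − 1). The sketch below applies it to the task parameters of the example (note that, over full periods, these particular values give U > 1, so the table above should be read as a one-pass illustration of the priority order only):

```python
# Schedulability check using the Liu and Layland utilization bound for RM:
# a set of n periodic tasks is guaranteed schedulable if
#   U = sum(C_i / P_i) <= n * (2**(1/n) - 1)
tasks = [("Task A", 20, 50), ("Task B", 15, 30), ("Task C", 10, 40)]  # (name, C, P)

n = len(tasks)
utilization = sum(c / p for _, c, p in tasks)
bound = n * (2 ** (1 / n) - 1)

print(f"Total utilization U = {utilization:.3f}")
print(f"Liu-Layland bound for n={n}: {bound:.3f}")
if utilization <= bound:
    print("Guaranteed schedulable under RM.")
elif utilization <= 1.0:
    print("Inconclusive: may or may not be schedulable (further analysis needed).")
else:
    print("Not schedulable: utilization exceeds 100%.")
```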

Offline vs Online Scheduling:

● Offline scheduling involves creating a schedule for all tasks in advance, before the system executes; online scheduling involves scheduling tasks dynamically during the execution of the system.
● In offline scheduling, the precomputed schedule is simply followed at run time; in online scheduling, the scheduler decides which task to execute next based on the current state of the system, the task's priority, and its timing requirements.
● In offline scheduling, the scheduler has complete information about the tasks and can optimize the schedule accordingly; online scheduling is suitable when the task workload and timing requirements are not known in advance.
● Offline scheduling cannot handle unexpected events or changes in task requirements that occur during execution; online scheduling is more flexible and adaptable to change, but it may not optimize the schedule as effectively as offline scheduling.
Difference between Sporadic and Aperiodic Real-Time Tasks:

● Deadline: A sporadic task has a hard deadline; an aperiodic task has a soft deadline or no deadline.
● Criticality: A sporadic task is highly critical; an aperiodic task is of low or moderate criticality.
● Inter-arrival time: For a sporadic task, the minimum separation between two consecutive instances cannot be zero; for an aperiodic task it can be zero.
● Task type: Sporadic tasks are hard real-time tasks; aperiodic tasks are soft real-time tasks.
● Deadlines of instances: The deadlines of all instances of a sporadic task can be met; meeting the deadlines of all instances of aperiodic tasks is difficult.
● Acceptance test: A sporadic task goes through an acceptance test; no such test is performed on an aperiodic task.
● Slack time: A sporadic task is executed only when sufficient slack time is available; the execution of an aperiodic task does not depend on the available slack time.
● Rejection: A sporadic task gets rejected by the scheduler when there is not enough slack time; an aperiodic task never gets rejected by the scheduler.
● Origin: Sporadic tasks include commands given by the system; aperiodic tasks include interactive commands given by the user.
● Examples: A security alert program in the system is a sporadic task; a logging task in the system is an aperiodic task.

Scheduling Aperiodic and Sporadic jobs in Priority Driven:

● In priority-driven scheduling, tasks are assigned priorities based
on their importance and urgency.
● The scheduler then selects the task with the highest priority to
execute next.
● Aperiodic and sporadic jobs are two types of tasks that can be
scheduled in a priority-driven system.

Aperiodic tasks are tasks that do not have a regular or periodic execution pattern, and they can arrive at any time.
● These tasks have specific deadlines or response times, and they
must be executed within those deadlines.
● Examples of aperiodic tasks include user inputs, interrupts, and system events.
Sporadic tasks are a type of aperiodic task that has a minimum inter-arrival time between successive task arrivals.
● Sporadic tasks have a deadline or response time, just like aperiodic
tasks, but they also have a minimum time between successive
arrivals.
● Examples of sporadic tasks include emergency alarms, fault-recovery actions, and other event-driven activities whose arrivals are separated by a known minimum inter-arrival time.

Priority-driven scheduling can handle both aperiodic and sporadic tasks


by assigning priorities to them based on their timing requirements and
importance. A high-priority task can preempt a low-priority task,
allowing the system to ensure that aperiodic and sporadic tasks meet
their timing requirements.

For example, let's consider a system that has three tasks:
1. Aperiodic task A, sporadic task B, and periodic task C.
2. Task A has a deadline of 50 ms, task B has a minimum inter-arrival
time of 100 ms and a deadline of 150 ms, and task C has a period of
200 ms and a deadline of 190 ms.
3. The system assigns task A the highest priority, followed by task B
and task C.
4. If task A arrives at time 0, the scheduler will execute it immediately since it has the highest priority.
5. If task B arrives at 70 ms, task A will already have completed (it had to finish by its 50 ms deadline), so the scheduler executes task B next. Had a lower-priority task been running at that moment, task B would have preempted it.
6. Once task B is complete, the scheduler resumes any preempted lower-priority task.
7. If task C arrives at a time of 200 ms, the scheduler will execute it
since it is the highest-priority task at that time.
8. If task C is not complete before its deadline, it will be considered a
missed deadline.
In this way, priority-driven scheduling can handle aperiodic and
sporadic tasks by assigning priorities based on their timing requirements
and ensuring they meet their deadlines.
Scheduling Aperiodic and Sporadic jobs in Clock-Driven Systems:
● Clock-driven scheduling systems are used in real-time systems to schedule periodic tasks that execute at regular intervals.


● However, these systems can also support aperiodic and sporadic tasks with the help of additional scheduling algorithms.


● One way to schedule aperiodic and sporadic tasks in clock-driven
systems is to use priority-based scheduling algorithms.
● In these algorithms, tasks are assigned priorities based on their
importance, and the scheduler selects the task with the highest
priority for execution.

For example, consider a clock-driven system that has two periodic tasks that execute every 50 ms and 100 ms respectively.
1. The system also has two aperiodic tasks and one sporadic task.
2. Task A is an aperiodic task that requires immediate attention, task
B is an aperiodic task that can wait for up to 30ms, and task C is a
sporadic task that must execute within 20ms of its arrival.

3. Task A can be assigned the highest priority, followed by task C and
task B.
4. The periodic tasks can be assigned lower priorities than the
aperiodic and sporadic tasks.
5. During execution, the scheduler checks for new arrivals of
aperiodic and sporadic tasks and assigns them priorities based on
their requirements.
6. The scheduler then selects the task with the highest priority for
execution.


Unit-3
Resources Sharing

● Resource sharing in real-time systems involves multiple tasks


sharing access to system resources such as memory, CPU, and I/O
devices.
● The goal of resource sharing is to efficiently utilize system
resources while ensuring that each task gets the resources it needs
to execute correctly and meet its timing requirements.

Effect of Resource Contention:
● Resource contention occurs when multiple processes or threads
compete for the same limited resources, such as CPU time, memory,
disk I/O, or network bandwidth.
● This can lead to various performance and stability issues, as each process or thread may have to wait for access to the resource, causing delays and potentially even deadlocks.

For Example:
● Imagine a computer program that needs to read and write data to a
shared file on a disk.
● If multiple instances of the program are running simultaneously,
they may all try to access the file at the same time, causing resource
contention.
● This could lead to delays in reading or writing the file, or even data corruption if the program does not handle concurrent access correctly.

Resource Access Control (RAC):
● In a real-time system, Resource Access Control (RAC) is a
mechanism used to manage access to critical resources such as the
CPU, memory, and I/O devices.
● Real-time systems often have strict timing requirements, which
means that resource access must be controlled carefully to prevent
delays or conflicts that can affect system performance or even
cause a system failure.
● RAC ensures that resources are allocated in a predictable and
efficient manner and that they are only used by authorized
processes.

For Example:
● In a real-time operating system, RAC can be used to control access

ir
to the CPU by assigning different priorities to processes based on
their criticality.
● This ensures that processes with higher priority are given access to the CPU first and that they are not delayed by lower-priority processes.

Overall, RAC is an essential mechanism in real-time systems that helps to manage resource access and ensure system performance and reliability.
(Figure: Resource Access Control)

● In this diagram, a process requests access to a resource, such as the CPU or memory.
● The request is then passed to the Resource Access Control (RAC)
module, which determines whether the process is authorized to

access the resource and if so, how much access it should be
granted.

Non-preemptive Critical Sections:


● In a real-time system, a critical section is a portion of code that
accesses shared resources, such as memory or I/O devices, and
must be executed atomically to ensure correctness and prevent
conflicts.
● Non-preemptive critical sections are a synchronization mechanism
that ensures that only one process can execute a critical section at
a time.
● In a non-preemptive system, once a process has entered a critical section, it cannot be preempted by another process until it has completed the critical section and released the shared resources.

Here's a simple example to illustrate non-preemptive critical sections:
● Consider a real-time system with two processes, A and B, that both
need to access a shared resource.
● If process A enters the critical section first, it will hold the resource
until it completes the section and releases the resource.
D
● During this time, process B cannot enter the critical section, and
must wait until process A has released the resource.
● If the system is non-preemptive, then even if process B has a higher
priority than process A, it cannot preempt process A and enter the
critical section until it has been released by process A.
Non-preemptive critical sections can be implemented using software locks, such as mutexes or semaphores. When a process enters a critical section, it acquires the lock and holds it until it has completed the section, and then it releases the lock.
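As a small illustration, here is a Python sketch in which a lock (mutex) guards a shared counter; note that a general-purpose OS thread can still be preempted by the scheduler, so this only demonstrates the mutual-exclusion part of a critical section, not true non-preemption:

```python
import threading

# A shared resource protected by a lock (mutex). Only one thread can be inside
# the critical section at a time; others block until the lock is released.
lock = threading.Lock()
shared_counter = 0

def worker(iterations: int) -> None:
    global shared_counter
    for _ in range(iterations):
        with lock:                 # enter critical section
            shared_counter += 1    # access the shared resource
        # lock is released here    # leave critical section

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print("Final counter value:", shared_counter)  # 20000, no lost updates
```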

Basic Priority-Inheritance:
● In a real-time system, priority inheritance is a synchronization
mechanism that helps to prevent priority inversion and ensure that
critical sections are executed in a timely and efficient manner.
● Priority inversion occurs when a high-priority task is blocked by a
lower-priority task that is holding a shared resource, such as a
semaphore or a mutex.
● This can lead to delays or missed deadlines and can have serious
consequences in safety-critical systems.

Here's a simple example to illustrate basic priority inheritance:

● Consider a real-time system with three tasks: task A, task B, and task C.
● Task A has the highest priority, followed by task B, and then task C.
● Task A and task C both need to access a shared resource, such as a semaphore.
● If task C enters the critical section first, it acquires the semaphore and holds it until it completes the critical section.
● If task A now needs the semaphore, it is blocked and must wait until task C releases it.
● This is a priority inversion: the high-priority task A is waiting for the low-priority task C. Worse, the medium-priority task B could preempt task C and delay it (and therefore task A) for an unbounded time.

In a system with basic priority inheritance, task C temporarily inherits the priority of task A while task A is blocked on the semaphore. Task B can then no longer preempt task C, so task C quickly completes the critical section and releases the semaphore, letting task A proceed. This ensures that the critical section is executed in a timely and efficient manner, and prevents delays or missed deadlines.
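The rule can be sketched as a small function: a task holding a resource runs at the maximum of its own priority and the priorities of the tasks blocked behind it. The task names, priority values, and the blocked_on map below are purely illustrative:

```python
# Toy illustration of basic priority inheritance (a bigger number = higher priority).
base_priority = {"Task A": 3, "Task B": 2, "Task C": 1}

def effective_priority(task: str, blocked_on: dict) -> int:
    """A task inherits the highest effective priority among the tasks
    currently blocked waiting for a resource that this task holds."""
    waiters = [w for w, holder in blocked_on.items() if holder == task]
    inherited = [effective_priority(w, blocked_on) for w in waiters]
    return max([base_priority[task], *inherited])

# Task C holds the semaphore; Task A is blocked waiting on Task C.
blocked_on = {"Task A": "Task C"}

print(effective_priority("Task C", blocked_on))  # 3: C now runs at A's priority
print(effective_priority("Task B", blocked_on))  # 2: B can no longer preempt C
```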

Priority-Ceiling Protocols: Priority-ceiling protocols are a


synchronization mechanism used in real-time systems to prevent priority
inversion, prevent deadlocks and ensure that critical sections are

executed efficiently.

● In a priority-ceiling protocol, each shared resource is assigned a


priority ceiling, which is the highest priority of any task that can
potentially access the resource.
● When a task needs to access a shared resource, it raises its priority
to the priority ceiling of the resource.
● This ensures that no other task with a higher priority can block the
task while it is holding the resource.

Here's a simple example to illustrate priority-ceiling protocols:


● Consider a real-time system with two tasks, task A and task B, and
a shared resource, such as a semaphore or a mutex.

● Task A has a higher priority than task B, so the priority ceiling of the resource is the priority of task A.
● If task B enters the critical section first and acquires the resource, it will hold the resource until it completes the critical section and releases it.
● During this time, task A cannot enter the critical section and must wait until task B releases the resource.
● Without the protocol, task B could be preempted while holding the resource, so the higher-priority task A could be delayed for an unpredictably long time: a priority inversion.
● With the priority-ceiling protocol, when task B acquires the resource it raises its priority to the priority ceiling of the resource (the priority of task A).
● Task B therefore cannot be preempted in the middle of the critical section; it completes it quickly, releases the resource, returns to its own priority, and task A then runs. Task A's blocking time is bounded by a single critical section.

Here's a timeline that shows how the PCP works (the low-priority task locks the resource first):

Time 0: The low-priority task acquires the shared resource (and is raised to the resource's priority ceiling).
Time 1: The high-priority task tries to acquire the resource.
Time 2: The high-priority task blocks (priority ceiling).
Time 3: The low-priority task uses the resource.
Time 4: The low-priority task releases the resource.
Time 5: The high-priority task acquires the resource.
Time 6: Any other task that now requests the resource blocks (priority ceiling).
Time 7: The high-priority task uses the resource.
Time 8: The high-priority task releases the resource.
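The following Python sketch shows how priority ceilings are computed and used in the immediate-ceiling (highest-locker) variant described above; the task priorities and the resource-usage map are hypothetical:

```python
# Sketch of priority-ceiling assignment: the ceiling of a resource is the
# highest priority of any task that may lock it, and a task runs at that
# ceiling while holding the resource.
task_priority = {"Task A": 3, "Task B": 2, "Task C": 1}   # larger = higher
uses_resource = {
    "R1": ["Task A", "Task C"],   # tasks that may lock R1
    "R2": ["Task B", "Task C"],   # tasks that may lock R2
}

ceiling = {r: max(task_priority[t] for t in users) for r, users in uses_resource.items()}
print(ceiling)  # {'R1': 3, 'R2': 2}

def active_priority(task: str, held: list[str]) -> int:
    """While holding resources, a task runs at max(its own priority, held ceilings)."""
    return max([task_priority[task]] + [ceiling[r] for r in held])

print(active_priority("Task C", ["R1"]))  # 3: C is boosted to R1's ceiling
```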

Stack-Based Priority-Ceiling Protocol:
● Stack-Based Priority Ceiling Protocol (SBPCP) is a synchronization
mechanism used in real-time systems to prevent deadlocks and
priority inversion caused by shared resources.
● It is an extension of the priority-ceiling protocol that uses a stack
to maintain the priority ceilings of multiple shared resources.
● In SBPCP, each shared resource is assigned a priority ceiling, which
is the highest priority of any task that can potentially access the
resource.
● When a task needs to access a shared resource, it pushes the
priority ceiling of the resource onto a stack and raises its priority to
the top of the stack.


Here's a simple example to illustrate the stack-based priority-ceiling


protocol:

● Consider a real-time system with three tasks, task A, task B, and


task C, and two shared resources, resource 1 and resource 2.
● Task A has the highest priority, followed by task B, and then task C.

● Resource 1 has a priority ceiling equal to the priority of task A, and resource 2 has a priority ceiling equal to the priority of task B.
● If task C enters its critical section first and acquires resource 2, it pushes the priority ceiling of resource 2 (the priority of task B) onto the stack and raises its own priority to that ceiling.
● If task B is then released, it cannot preempt task C, because task C is now running at task B's priority; task B must wait.
● If task A is then released, it can preempt task C, because its priority is higher than the ceiling on top of the stack. Resource 1 is free, so task A acquires it, pushes the priority ceiling of resource 1 (the priority of task A) onto the stack, and continues at that priority.
● When task A releases resource 1, its ceiling is popped from the stack; when task C later releases resource 2, its ceiling is popped as well, task C returns to its own priority, and task B can then run.

When a task releases a shared resource, it pops the priority ceiling of the
resource from the stack, and its priority is lowered to the new top of the
stack. This ensures that the priority of the task is lowered to the highest
priority ceiling of any remaining shared resources it holds.
Dynamic Priority Systems: In real-time systems, tasks are executed
with strict timing requirements. Dynamic priority systems are a common
scheduling technique used in real-time systems to manage the execution of tasks.

● In dynamic priority systems, tasks are assigned a priority value


based on their current state and the system's current state.

● The priority value is computed dynamically at run-time, which


means it can change over time based on the system's workload and
task characteristics.

● Tasks with higher priority values are executed before tasks with
lower priority values.
● This means that the system will always prioritize the most critical
tasks first, ensuring that they meet their timing requirements.

The priority value of a task can be influenced by several factors, including:

1. Task deadline - tasks with closer deadlines are given higher priority.
2. Task importance - tasks that are more critical to the system's
operation are given higher priority.

3. Task dependencies - tasks that depend on other tasks are given lower
priority.
4. System load - when the system is overloaded, lower-priority tasks
may be delayed to ensure that higher-priority tasks meet their deadlines.

Use of Priority-Ceiling Protocol in Dynamic Priority Systems: In


dynamic priority systems, where task priorities are computed
dynamically at runtime, the Priority Ceiling Protocol is used to ensure
that tasks execute without priority inversion.
● The Priority Ceiling Protocol works by assigning a priority ceiling
to each shared resource, which is the highest priority of any task
that can use the resource.

● When a task requests a shared resource, its priority is temporarily
raised to the priority ceiling of the resource.

● This ensures that if a lower-priority task holds the resource, it
cannot block a higher-priority task that requires the resource.

For Example:
● Suppose there are two tasks, T1 and T2, with dynamic priorities,
and a shared resource R.
● Task T1 has a higher priority than Task T2.
● Without the Priority Ceiling Protocol, T2 could potentially hold the
resource R, preventing T1 from executing and meeting its timing
requirements.

However, with the Priority Ceiling Protocol, the priority ceiling of the shared resource R would be set to the priority of T1. Therefore, while T2 holds the resource R, it temporarily runs at the priority of T1, so it cannot be preempted in the middle of its critical section, and T1 is blocked for at most one critical section rather than indefinitely.

Preemption Ceiling Protocol: The Preemption Ceiling Protocol (PCeP) is


a synchronization protocol used in real-time systems to prevent the
problem of priority inversion, which can occur when a low-priority task
holds a resource required by a higher-priority task.

● In the Preemption Ceiling Protocol, each resource is assigned a preemption ceiling priority, which is the highest priority of any task that can use the resource.
● When a task acquires a resource, its priority is raised to the preemption ceiling of the resource.

● This ensures that while a lower-priority task holds the resource, it cannot be preempted by other tasks that may require the same resource, so the blocking of a higher-priority task is kept short and bounded.

For Example:
● Consider a real-time system with two tasks, T1 and T2.
● Task T1 has a higher priority than Task T2, and they both require
access to a shared resource R, which has a preemption ceiling
priority equal to the priority of T1.
● If T2 acquires resource R first, its priority is raised to that of T1 for as long as it holds R.
● This means that T2 cannot be preempted in the middle of its critical section; it releases R quickly, T1 then accesses the resource, and the system meets its timing requirements.

Access Control in Multiple-Unit Resources: Access control in
multiple-unit resources refers to the management of shared resources
that have more than one unit or instance.
Examples of such resources include printers, disk drives, and
communication channels.

To manage access to multiple-unit resources, access control techniques such as locking and reservation are used.

Locking: In locking, a task can lock one or more units of the resource before accessing them. Once a unit is locked, no other task can access it until the lock is released. This technique is commonly used in systems where tasks need exclusive access to the resource units, such as memory banks or I/O devices.

Reservation: In reservation, a task requests a unit of the resource in


advance to reserve it for future use. When the task needs the resource
unit, it can access it without waiting or competing with other tasks for
access. This technique is commonly used in systems where tasks need
periodic access to the resource units, such as processors or
communication channels.
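A counting semaphore is a natural way to guard a multiple-unit resource. The sketch below assumes a pool of three identical printer units; at most three jobs hold a unit at a time and a fourth requester blocks until a unit is released (the names and counts are illustrative):

```python
import threading

# A multiple-unit resource: 3 identical printer units. A counting semaphore
# initialized to the number of units lets at most 3 tasks hold a unit at once.
NUM_PRINTER_UNITS = 3
printer_units = threading.Semaphore(NUM_PRINTER_UNITS)

def print_job(job_id: int) -> None:
    printer_units.acquire()          # allocate one unit (blocks if none is free)
    try:
        print(f"Job {job_id} is using a printer unit")
    finally:
        printer_units.release()      # return the unit to the pool

jobs = [threading.Thread(target=print_job, args=(i,)) for i in range(5)]
for j in jobs: j.start()
for j in jobs: j.join()
```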

In access control for multiple-unit resources, several issues need to be


considered to ensure that the resource is shared efficiently and safely
among multiple users. These issues include:
1. Allocation: Determining how to allocate resource units to different
users or processes.

For example, in the case of printers, deciding how to allocate printing
jobs to different printers.

2. Synchronization: Ensuring that only one user or process accesses a


resource unit at any given time. This is typically achieved through
locking or queuing mechanisms.

3. Deadlock avoidance: Avoiding situations where two or more users or processes are waiting for each other to release a resource unit, resulting in a deadlock.

4. Priority: Determining the order in which users or processes are granted access to the resource units.

Access control for multiple-unit resources often involves the use of
specialized algorithms and protocols to address these issues.
For example, spooling is commonly used for printing, where multiple print jobs are queued and allocated to available printers.

Controlling Concurrent Accesses to Data Objects: Controlling concurrent accesses to data objects refers to the process of managing access to shared data objects in a system where multiple tasks or processes can access the same data simultaneously.
● Concurrent access to data objects can cause data inconsistencies and conflicts, which can result in incorrect system behaviour.
● Therefore, controlling concurrent access is essential to ensure the reliability and correctness of real-time systems.

To control concurrent accesses, different synchronization techniques can be used, such as:

1. Mutual Exclusion:
● Mutual exclusion is a technique that ensures only one task or
process can access a data object at a time.
● The most common way to implement mutual exclusion is through
the use of locks.

● When a task wants to access a data object, it acquires a lock on the
object, which prevents other tasks from accessing it until the lock
is released.

2. Semaphores:
● A semaphore is a synchronization technique that allows multiple
tasks to access a data object simultaneously up to a certain limit.
● A semaphore maintains a count of the number of tasks currently
accessing the object and restricts access to a specified number of
tasks at a time.

3. Monitors:

● A monitor is an abstract data type that encapsulates data and
methods that manipulate the data.

● A monitor ensures that only one task at a time can execute the
methods of the monitor, thereby ensuring mutual exclusion.

4. Read-Write Locks:
● Read-Write Locks are a synchronization technique that allows
multiple tasks to read the data object simultaneously while
preventing simultaneous writes.
● When a task wants to read the data object, it acquires a read lock
on the object, which allows multiple readers to access the data.
● However, when a task wants to write to the data object, it acquires
a write lock, which prevents other tasks from reading or writing to
the data object.
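As an illustration of the read-write idea, here is a simplified Python sketch of a readers-writer lock built from two ordinary locks (it allows many concurrent readers or one writer; writer-starvation handling is omitted for brevity):

```python
import threading

class SimpleRWLock:
    """Simplified read-write lock: many concurrent readers OR one writer."""
    def __init__(self) -> None:
        self._readers = 0
        self._readers_lock = threading.Lock()   # protects the reader count
        self._write_lock = threading.Lock()     # held while writing, or while readers are active

    def acquire_read(self) -> None:
        with self._readers_lock:
            self._readers += 1
            if self._readers == 1:
                self._write_lock.acquire()      # first reader blocks writers

    def release_read(self) -> None:
        with self._readers_lock:
            self._readers -= 1
            if self._readers == 0:
                self._write_lock.release()      # last reader lets writers in

    def acquire_write(self) -> None:
        self._write_lock.acquire()

    def release_write(self) -> None:
        self._write_lock.release()

# Usage: readers share access; a writer gets exclusive access.
rw = SimpleRWLock()
data = {"temperature": 25}

def read_temperature() -> int:
    rw.acquire_read()
    try:
        return data["temperature"]
    finally:
        rw.release_read()

def write_temperature(value: int) -> None:
    rw.acquire_write()
    try:
        data["temperature"] = value
    finally:
        rw.release_write()

print(read_temperature())   # 25
write_temperature(30)
print(read_temperature())   # 30
```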

Unit-4
Real-Time Communication

Basic Concepts in Real-time Communication: Real-time


communication refers to any form of communication where there is an
immediate exchange of information between two or more parties. Some
common examples of real-time communication include phone calls,
video conferencing, instant messaging, and live streaming.

Here are some basic concepts in real-time communication:


1. Latency:

● Latency refers to the delay between when data is sent and when it
is received.

● In real-time communication, low latency is crucial to ensure that
communication is smooth and uninterrupted.

2. Bandwidth:
● Bandwidth refers to the amount of data that can be transmitted
over a network at any given time.
● The higher the bandwidth, the more data can be transmitted, which
D
is important in real-time communication for maintaining a good
quality of service.

3. Packet loss:
● Packet loss refers to the loss of data packets during transmission.
u

● In real-time communication, even a small amount of packet loss


can cause disruptions and make the communication experience

less smooth.

4. Jitter:
● Jitter is the variation in latency over time.
● In real-time communication, jitter can cause delays and
interruptions, which can lead to a poor user experience.

5. Quality of Service (QoS):


● Quality of Service is a measure of the level of service that a
network provides.
● In real-time communication, QoS is important for ensuring that the
communication is of high quality and that any issues are quickly
resolved.

6. Codecs:
● Codecs are used to encode and decode audio and video data for
transmission over a network.
● Different codecs have different levels of compression and quality,
which can affect the overall quality of real-time communication.
● Some popular codecs used in real-time communication include
H.264 for video and Opus for audio.

Real-time communication (RT) systems can be categorized into two


main types: Soft real-time systems and Hard real-time systems.

Basic concepts of soft Real-time communication (RT) systems:

● Soft real-time communication systems are those that have timing
requirements that are not as strict as hard real-time systems.

● They allow some flexibility in meeting timing deadlines, and
missing a deadline may not result in catastrophic consequences.
● Here are some basic concepts of soft real-time communication
systems.
1. Timing Constraints:
● Soft real-time communication systems have timing constraints that
D
must be met to ensure optimal performance.
● However, there is some flexibility in meeting these timing
constraints, and a missed deadline is not critical.
● This is in contrast to hard real-time systems, where a missed
deadline can have serious consequences.

2. Quality of Service (QoS):



● QoS is important in soft real-time communication systems to


ensure that data is delivered in a timely and efficient manner,
without causing major disruptions or delays.
● This includes considerations such as bandwidth, latency, packet
loss, and jitter.

3. Data Compression:
● Soft real-time communication systems often use data compression
techniques to minimize the amount of data that needs to be
transmitted.
● This helps to reduce bandwidth requirements and improve the
overall performance of the system.

4. Feedback Control:
● Feedback control mechanisms can be used in soft real-time
communication systems to adjust the timing and QoS parameters
as needed to ensure optimal performance.

5. Network Protocols:
● Soft real-time communication systems rely on network protocols to
ensure that data is transmitted reliably and efficiently over the
network.
● Some common protocols used in soft real-time communication
systems include RTP (Real-time Transport Protocol) for audio and
video streaming and SIP (Session Initiation Protocol) for voice and
video calls.

6. Resource Allocation:
● Soft real-time communication systems must allocate resources,
such as CPU time and memory, efficiently to ensure that the system
meets its timing and QoS requirements.
● This may involve using specialized hardware or software to
optimize performance.
Basic concepts of Hard Real-time communication (RT) systems:
● Hard real-time communication (RT) systems are those that have
strict timing requirements and a missed deadline can have
catastrophic consequences.
Here are some basic concepts of hard real-time communication
systems:
1. Timing Constraints:
● Hard real-time communication systems have strict timing
constraints that must be met to ensure optimal performance.
● A missed deadline can result in serious consequences such as
system crashes, data loss, or physical harm to users.

2. Predictability:
● Hard real-time communication systems are designed to be
predictable and deterministic, with known worst-case execution
times for all operations.

● This is necessary to ensure that timing constraints can be met
reliably and without fail.

3. Resource Reservation:
● In hard real-time communication systems, resources such as CPU
time, memory, and network bandwidth are often reserved in
advance to ensure that they are available when needed.
● This may involve using specialized hardware or software to allocate
resources efficiently.

4. Fault Tolerance:
● Hard real-time communication systems often incorporate

fault-tolerance mechanisms to ensure that the system continues to
operate correctly in the event of hardware or software failures.

● This may involve redundancy, error detection and correction, or
failover mechanisms.
5. Priority-Based Scheduling:
● Priority-based scheduling is often used in hard real-time
communication systems to ensure that tasks are executed in the
correct order and with the necessary timing constraints.
● Higher-priority tasks are executed before lower-priority tasks, and
tasks with critical timing requirements are given the highest
priority.
6. Real-Time Operating Systems (RTOS):
● Real-time operating systems are specialized operating systems
designed specifically for hard real-time communication systems.
● They provide deterministic scheduling, resource management, and
other features that are necessary to ensure that timing constraints
can be met reliably.

Model of Real-Time Communication: Real-time communication (RTC) is
a model of communication that involves sending and receiving data in
real time.

The model typically includes the following components:

1. Sender:
● The sender is the entity that initiates the communication by
sending data to the receiver.
● In RTC, the sender may be a person using a device such as a phone,
computer, or video conferencing system.



2. Network:
● The network refers to the infrastructure that carries the data
between the sender and the receiver.
● This may include wired or wireless networks, such as the internet,
cellular networks, or local area networks.

3. Protocol:
● The protocol is the set of rules that governs how the data is
transmitted between the sender and the receiver.
● In RTC, the protocol must be designed to handle real-time
constraints, such as low latency and low packet loss, to ensure that
the communication is timely and efficient.

4. Receiver:
● The receiver is the entity that receives the data from the sender.
● In RTC, the receiver may be a person using a device such as a
phone, computer, or video conferencing system.

5. Media:
● The media refers to the type of data being transmitted between the
sender and receiver.
● This may include audio, video, or other types of data.

6. Codec:

● The codec is the software or hardware that compresses and
decompresses the media to reduce the amount of data that needs to
be transmitted.

● In RTC, codecs are designed to be efficient and fast to minimise
latency and ensure real-time performance.

7. Application:
● The application is the software or system that enables real-time
communication between the sender and receiver.
● This may include communication platforms, video conferencing
systems, or messaging applications.

Overall, the model of real-time communication involves a sender
sending data over a network using a protocol that is optimized for
real-time performance, with the receiver receiving the data and using an
application to enable communication. This model is essential for many
applications, such as video conferencing, online gaming, and real-time
collaboration, where timely and efficient communication is critical.

Priority-Based Service Disciplines for Switched Networks:


● Priority-based service disciplines are used in switched networks to
provide different levels of service to different types of traffic based
on their priority.
● This allows high-priority traffic, such as real-time communication
traffic, to receive preferential treatment over lower-priority traffic,
such as file transfers or email.

There are several types of priority-based service disciplines used in
switched networks, including:
1. Priority Queuing (PQ):
● In PQ, traffic is divided into multiple priority queues, with each
queue assigned a different priority level.
● Higher-priority traffic is transmitted before lower-priority traffic,
regardless of how much traffic is waiting in each queue.
● This ensures that high-priority traffic receives preferential
treatment (a small dispatch sketch of this idea follows this list).

2. Weighted Fair Queuing (WFQ):
● In WFQ, traffic is divided into multiple flows, with each flow
assigned a different weight.
● Traffic is then transmitted in a round-robin fashion, with each flow
receiving a proportional amount of bandwidth based on its weight.
● This ensures that high-priority flows receive more bandwidth than
lower-priority flows.

3. Class-Based Queuing (CBQ):


● In CBQ, traffic is divided into different classes based on its priority,
with each class assigned a different bandwidth allocation.
● Higher-priority traffic is allocated more bandwidth than
lower-priority traffic, which ensures that high-priority traffic
receives preferential treatment.

4. Hierarchical Fair Service Curve (HFSC):
● In HFSC, traffic is divided into different classes based on its
priority, with each class assigned a service curve that specifies its
bandwidth allocation over time.
● Traffic is then transmitted based on its service curve, with
higher-priority traffic receiving more bandwidth than
lower-priority traffic.
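The following is a rough illustration of the strict Priority Queuing idea from item 1 above, not any particular switch implementation; the queue sizes and packet IDs are invented for the example. The dispatcher always drains the highest-priority non-empty queue first:

#include <stdio.h>

#define NUM_PRIOS 3            /* 0 = highest priority */
#define QCAP 16

/* Simplified per-priority FIFO queues holding packet IDs. */
static int q[NUM_PRIOS][QCAP];
static int q_head[NUM_PRIOS];
static int q_len[NUM_PRIOS];

static void enqueue(int prio, int pkt)
{
    q[prio][(q_head[prio] + q_len[prio]) % QCAP] = pkt;
    q_len[prio]++;
}

/* Strict priority dispatch: always serve the highest-priority
 * non-empty queue, regardless of how long the lower queues are. */
static int dispatch(void)
{
    for (int p = 0; p < NUM_PRIOS; p++) {
        if (q_len[p] > 0) {
            int pkt = q[p][q_head[p]];
            q_head[p] = (q_head[p] + 1) % QCAP;
            q_len[p]--;
            return pkt;
        }
    }
    return -1;   /* all queues empty */
}

int main(void)
{
    enqueue(2, 100);           /* low-priority bulk traffic  */
    enqueue(0, 200);           /* high-priority real-time traffic */
    enqueue(2, 101);

    int pkt;
    while ((pkt = dispatch()) != -1)
        printf("sent packet %d\n", pkt);   /* packet 200 goes out first */
    return 0;
}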

Weighted Round-Robin Service Disciplines for Switched Networks:


● Weighted Round-Robin (WRR) is a service discipline used in
switched networks to allocate bandwidth among multiple queues.
● It is a variation of the round-robin discipline, where each queue is
allocated a weight proportional to its priority level or the desired
amount of bandwidth.
● In a WRR service discipline, each queue is assigned a weight that
determines the fraction of bandwidth that should be allocated to
that queue.
● The scheduler then selects packets from each queue in a
round-robin fashion, taking into account the weight assigned to
each queue.

For Example:
● Consider a switched network with two queues.
● Queue 1 is assigned a weight of 3, and Queue 2 is assigned a weight
of 1.
● This means that Queue 1 should receive three times as much
bandwidth as Queue 2.
● The scheduler would select 3 packets from Queue 1 for every 1
packet selected from Queue 2.
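A minimal sketch of this weighted round-robin selection, using the two-queue example above (weights 3 and 1); the number of pending packets is invented for illustration:

#include <stdio.h>

#define NUM_QUEUES 2

/* Weights from the example: Queue 1 gets 3 slots per round, Queue 2 gets 1. */
static const int weight[NUM_QUEUES] = { 3, 1 };
static int pending[NUM_QUEUES] = { 6, 6 };   /* packets waiting in each queue */

int main(void)
{
    int round = 0;
    while (pending[0] > 0 || pending[1] > 0) {
        printf("round %d:\n", ++round);
        for (int q = 0; q < NUM_QUEUES; q++) {
            /* Serve each queue up to its weight before moving on. */
            for (int s = 0; s < weight[q] && pending[q] > 0; s++) {
                pending[q]--;
                printf("  sent one packet from Queue %d\n", q + 1);
            }
        }
    }
    return 0;
}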

WRR is often used in situations where multiple queues need to share a
limited amount of bandwidth, such as in a network with multiple classes
of service or in a network with limited bandwidth. By allocating
bandwidth based on the weight of each queue, WRR ensures that each
queue receives a fair share of the available bandwidth while also allowing
higher-priority queues to receive a larger share if needed.

Overall, the WRR service discipline is an effective way to allocate
bandwidth among multiple queues in a switched network, ensuring that
packets are transmitted in a fair and efficient manner.

Medium Access Control Protocols for Broadcast Networks:


● Medium Access Control (MAC) protocols are used to control access
to the shared communication medium in broadcast networks.
● In broadcast networks, multiple devices share a common
communication medium, and MAC protocols are necessary to avoid
collisions and ensure efficient use of the shared medium.

Here are some examples of MAC protocols commonly used in
broadcast networks:

1. Carrier Sense Multiple Access/Collision Detection (CSMA/CD):
● This is a popular MAC protocol used in wired networks, such as
Ethernet.
● In CSMA/CD, a device listens to the communication medium before
transmitting to ensure that it is not already in use.
● If the medium is idle, the device transmits its packet.
● If there is a collision with another packet, the devices involved
back off for a random time before attempting to retransmit (a
sketch of this backoff appears after this list).

2. Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA):


● This MAC protocol is commonly used in wireless networks, such as
Wi-Fi.
● In CSMA/CA, devices use a random backoff time before attempting
to transmit to avoid collisions.


● Devices also use a request-to-send (RTS)/clear-to-send (CTS)
mechanism to reserve the communication medium before
transmitting, reducing the chance of collisions.

3. Token Passing:
● This MAC protocol uses a token that is passed from device to
device in a predetermined order.
● The device holding the token is allowed to transmit its packet.
● Once the transmission is complete, the token is passed to the next
device in the predetermined order.

4. Reservation-Based MAC protocols:
● These MAC protocols allow devices to reserve the communication
medium in advance, reducing the chance of collisions.
● In reservation-based protocols, devices send a request to reserve
the communication medium before transmitting.
● The reservation is granted by the network, and the device can then
transmit its packet.
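As a rough sketch of the random backoff mentioned for CSMA/CD in item 1 above: classic Ethernet uses binary exponential backoff, where after the n-th consecutive collision a station waits a random number of slot times chosen from 0 to 2^n - 1 (with the window capped after 10 collisions). The code below only illustrates that calculation, not the full protocol:

#include <stdlib.h>
#include <stdio.h>

/* Binary exponential backoff as used by classic Ethernet CSMA/CD.
 * Returns how many slot times to wait after `collisions` consecutive
 * collisions on the same frame. */
static int backoff_slots(int collisions)
{
    int k = collisions < 10 ? collisions : 10;   /* cap the window at 2^10 - 1 */
    int window = 1 << k;                          /* 2^k possible slot counts   */
    return rand() % window;                       /* uniform in [0, 2^k - 1]    */
}

int main(void)
{
    srand(1);   /* fixed seed so the demo output is repeatable */
    for (int c = 1; c <= 5; c++)
        printf("after collision %d: wait %d slot times\n", c, backoff_slots(c));
    return 0;
}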

Internet Reservation Protocols:


● Internet Reservation Protocols (IRPs) are a family of protocols that
are used to reserve resources in computer networks.
● The goal of IRPs is to provide efficient and reliable resource
reservation mechanisms that can be used in various applications,
such as multimedia streaming, real-time communications, and
cloud computing.
There are several types of IRPs, including:
1. Resource Reservation Protocol (RSVP):
● RSVP is a signalling protocol used to reserve network resources for
specific flows.
● It is commonly used in real-time multimedia applications and is
designed to provide QoS guarantees for such applications.

2. Common Open Policy Service (COPS):


● COPS is a protocol that allows network devices to exchange
information about resource availability and to make resource
reservations.
● COPS can be used to provide QoS guarantees for various
applications, including multimedia streaming and real-time
communications.

3. Next Steps in Signaling (NSIS):


● NSIS is a protocol suite that provides a framework for signalling
and resource reservation in IP networks.
● NSIS includes several protocols, including the NSIS signalling
protocol (NSLP) and the NSIS transport layer protocol (NTLP).
● NSIS is designed to provide QoS guarantees for various
applications, including multimedia streaming and real-time
communications.

4. Differentiated Services (DiffServ):
● DiffServ is a QoS mechanism that allows network devices to
prioritize traffic based on predefined service levels.
● DiffServ is often used in conjunction with other IRPs, such as RSVP,
to provide QoS guarantees for specific applications.

Overall, IRPs are an essential part of modern computer networks,
providing efficient and reliable mechanisms for resource reservation and
QoS guarantees. The choice of IRP depends on the specific application
requirements and the network topology.

Resource Reservation Protocols:
● Resource Reservation Protocols (RRPs) play a critical role in
real-time systems, as they provide mechanisms to guarantee the
availability of system resources during the execution of real-time
tasks.
● RRPs are used to reserve system resources, such as CPU time,
memory, and network bandwidth, to ensure that real-time tasks
meet their timing and performance requirements.
Here are some examples of RRPs used in real-time systems:

1. Time Division Multiple Access (TDMA):


● TDMA is an RRP used in communication networks to allocate time
slots to different devices to transmit their data.
● TDMA ensures that each device has a reserved time slot during
which it can transmit its data without interference from other
devices.

2. Resource Reservation Protocol (RSVP):


● RSVP is an RRP used in IP networks to reserve network resources
for specific flows.
● RSVP is designed to provide Quality of Service (QoS) guarantees for
real-time multimedia applications by ensuring that network
resources are reserved for these applications.

3. Priority Inheritance Protocol (PIP):


● PIP is an RRP used in real-time operating systems to prevent
priority inversion, a situation where a high-priority task is blocked
by a lower-priority task holding a shared resource.
● PIP ensures that a task inherits the priority of the highest-priority
task waiting for the resource it holds.

4. Earliest Deadline First (EDF):


● EDF is a scheduling algorithm used in real-time systems to
schedule tasks based on their deadline.
● EDF ensures that the task with the earliest deadline is scheduled
first; on a single processor this meets all deadlines whenever the
task set is feasible (schedulable).
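A minimal sketch of the EDF selection rule: at each scheduling point the ready task with the earliest absolute deadline is dispatched. The task names and deadlines here are invented for illustration:

#include <stdio.h>

struct task {
    const char *name;
    long deadline_ms;   /* absolute deadline, e.g. milliseconds from system start */
    int  ready;
};

/* EDF rule: among the ready tasks, run the one with the earliest deadline. */
static int pick_edf(const struct task *t, int n)
{
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (!t[i].ready)
            continue;
        if (best == -1 || t[i].deadline_ms < t[best].deadline_ms)
            best = i;
    }
    return best;   /* -1 if nothing is ready */
}

int main(void)
{
    struct task tasks[] = {
        { "logging",   500, 1 },
        { "control",   120, 1 },   /* earliest deadline -> chosen first */
        { "telemetry", 300, 1 },
    };
    int k = pick_edf(tasks, 3);
    if (k >= 0)
        printf("dispatch: %s (deadline %ld ms)\n", tasks[k].name, tasks[k].deadline_ms);
    return 0;
}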

Overall, RRPs are essential for ensuring the timely and reliable execution
of real-time tasks in real-time systems. The choice of RRP depends on the
specific requirements of the system, including the types of resources
being reserved and the timing and performance requirements of the
real-time tasks.

Unit-5
Real-Time Operating Systems and Databases

Real-Time Operating System:


● A Real-Time Operating System (RTOS) is a specialized operating
system that is designed for real-time applications with specific
timing and reliability requirements.
● An RTOS provides deterministic behavior, which means that it
guarantees that a specific task will be executed within a
predictable time frame.

● This predictability is essential for systems that require fast and
reliable responses to external events, such as industrial control
systems, medical devices, and automotive systems.


Features of RTOS: RTOS, or Real-Time Operating System, is an operating
system that is designed to run real-time applications with specific timing
and reliability requirements. Here are some features of RTOS:

1. Deterministic: RTOS is deterministic, which means it is designed to
provide a predictable and consistent response to various events and
inputs.

2. Real-time scheduling:
● RTOS uses real-time scheduling algorithms to ensure that critical
tasks get executed on time.

● Tasks are scheduled based on their priority, and preemption is used
to ensure that higher-priority tasks are executed before
lower-priority ones.

3. Multitasking:
● RTOS supports multitasking, which means that it can run multiple
tasks simultaneously.
● This allows the system to perform multiple functions at the same
time, without compromising the performance of any one task.

4. Interrupt handling:
● RTOS provides efficient interrupt handling, which is critical for
real-time systems.
● Interrupts are used to handle external events, and RTOS ensures
that the system can respond to these events quickly and accurately.

5. Memory management:
● RTOS provides memory management features that are optimized
for real-time systems.
● It ensures that memory is allocated and released in a timely
manner, without causing delays or blocking other tasks.
6. Communication mechanisms:
● RTOS provides communication mechanisms that allow tasks to
communicate with each other efficiently.
● This includes message queues, semaphores, and shared memory (a
small message-queue sketch appears after this feature list).

7. Small footprint: RTOS is designed to have a small memory footprint,
which is important for embedded systems and other systems with limited
resources.

8. Scalability:
● RTOS is scalable, which means it can be used in systems of
different sizes and complexity.
● It can be used in simple embedded systems as well as complex
systems with multiple processors.

9. Portability: RTOS is designed to be portable, which means it can be
easily adapted to different hardware platforms and architectures.
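As a minimal sketch of the communication-mechanisms feature (item 6 above), assuming an RTOS that supports POSIX message queues; the queue name, message contents, and sizes are invented for the example (on Linux this would typically be linked with -lrt):

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>

/* One task posts a command, another task receives it. */
int main(void)
{
    struct mq_attr attr = { 0 };
    attr.mq_maxmsg  = 8;     /* queue depth */
    attr.mq_msgsize = 32;    /* max bytes per message */

    mqd_t mq = mq_open("/cmd_queue", O_CREAT | O_RDWR, 0600, &attr);
    if (mq == (mqd_t)-1) { perror("mq_open"); return 1; }

    const char *cmd = "START_MOTOR";
    mq_send(mq, cmd, strlen(cmd) + 1, 1);   /* last argument is the message priority */

    char buf[32];
    unsigned prio;
    ssize_t n = mq_receive(mq, buf, sizeof(buf), &prio);
    if (n >= 0)
        printf("received command: %s (priority %u)\n", buf, prio);

    mq_close(mq);
    mq_unlink("/cmd_queue");
    return 0;
}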

Time Services: Time services are an essential feature of Real-Time
Operating Systems (RTOS) that provide accurate and reliable timing
services to real-time applications.

Here are some time services that RTOS typically provide:

1. Tick timer:
● A tick timer is a periodic interrupt generated by the RTOS, which
allows the operating system to keep track of time.
● The tick timer is typically generated at a fixed interval, such as
every millisecond, and is used by the RTOS scheduler to determine
when to switch tasks.

2. Clocks:
● An RTOS typically provides a clock function that returns the
current system time in a specific format, such as hours, minutes,
and seconds.
● The system time is often used by applications to timestamp events
or to schedule tasks.
3. Timers:
● Timers are used to schedule tasks or events to occur at a specific
time or after a specific period.
● The RTOS provides a timer function that allows an application to
set up a timer and specify when it should expire.

4. Delays:
● An RTOS provides a delay function that allows an application to
pause execution for a specified period.
● This is useful for applications that need to perform a periodic task,
such as sampling a sensor every second (a sketch of such a periodic
task appears after this list).

5. Time synchronization:
● In distributed systems, it is essential to synchronize the clocks of
different nodes to ensure that they agree on the current time.
● RTOS often provides time synchronization services that allow
nodes to synchronize their clocks with a master clock.
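A minimal sketch of how the tick/clock/delay services above are typically used, assuming a POSIX-style system that provides clock_gettime and clock_nanosleep; the one-second period and the sample_sensor placeholder are invented for illustration. Sleeping until an absolute time avoids the drift that accumulates with relative delays:

#include <stdio.h>
#include <time.h>

#define PERIOD_NS 1000000000L   /* 1 second period */

static void sample_sensor(void)   /* placeholder for the real periodic work */
{
    printf("sampling sensor...\n");
}

int main(void)
{
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);   /* first release time = now */

    for (int i = 0; i < 5; i++) {            /* 5 iterations for the demo */
        sample_sensor();

        /* Compute the next absolute release time, then sleep until it. */
        next.tv_nsec += PERIOD_NS;
        while (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec++;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
    return 0;
}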

UNIX as RTOS: UNIX was not designed as a Real-Time Operating System
(RTOS) initially, but it has been used for real-time applications in some
cases. However, using UNIX as an RTOS has some limitations and
challenges.

Here are some factors to consider:

1. Non-deterministic behaviour:
● UNIX was not designed with real-time requirements in mind, and it
does not provide deterministic behaviour.
● UNIX uses a time-sharing model, where processes are scheduled
based on their priority and how long they have been waiting.

● This makes it difficult to guarantee that a task will be executed
within a specific time frame.

2. Interrupt handling:
● UNIX's interrupt handling mechanisms are not optimized for
real-time applications.
● UNIX uses interrupt handlers, which can take a significant amount
of time to execute, and this can affect the response time of the
system.
3. Memory management:
● UNIX's memory management system is not optimized for real-time
applications.

● UNIX uses virtual memory, which can lead to page faults and delays
in memory access.

4. Large footprint: UNIX is a complex operating system with a large
memory footprint, which can be a challenge for embedded systems with
limited resources.

5. Limited real-time support:


● While some real-time extensions are available for UNIX, they may
not provide the same level of support as a dedicated RTOS.
● Also, not all UNIX variants support real-time extensions.

POSIX (Portable Operating System Interface) Issues:
● The POSIX (Portable Operating System Interface) standard defines
a set of APIs for Unix-like operating systems, which includes
real-time extensions (known as POSIX.1b).
● While POSIX aims to provide portability and compatibility across
different operating systems, there are some issues with using
POSIX in RTOS environments.

Here are some of the main issues:

1. Compliance:
● While most modern RTOSes have some level of POSIX compliance,
not all of them support the full POSIX standard.
● This can be an issue if an application relies on specific POSIX
features that are not supported by the RTOS.

2. Overhead:
● The POSIX standard is designed to be portable and generic, which
means that it can introduce overhead in RTOS environments.
● This can be problematic for real-time applications that require low
latency and fast response times.
3. Real-time scheduling:
● The POSIX standard defines a set of scheduling policies, such as
SCHED_RR (round-robin), SCHED_FIFO, and priority-based scheduling,
which may not be optimized for real-time applications.
● RTOSes may have their own scheduling algorithms that are better
suited to real-time systems (a small example of requesting a POSIX
real-time policy appears after this list).

4. File I/O:
● The POSIX standard defines a set of APIs for file I/O, but these APIs
may not be suitable for real-time systems.
For example, file I/O operations can introduce unpredictable delays in
real-time systems.

5. Memory allocation:
● The POSIX standard defines a set of memory allocation functions,
but these functions may not be optimized for real-time systems.
● In real-time systems, memory allocation can introduce delays and
may need to be managed more efficiently.
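A minimal sketch, assuming a system that implements the POSIX.1b scheduling interfaces, of how a process can request the SCHED_FIFO real-time policy mentioned in item 3 above; the priority value 50 is illustrative and the call usually requires elevated privileges:

#include <sched.h>
#include <stdio.h>

int main(void)
{
    struct sched_param sp;
    sp.sched_priority = 50;   /* illustrative; the valid range is system-dependent */

    /* Ask the kernel to run this process under the fixed-priority
     * FIFO policy defined by the POSIX.1b real-time extensions. */
    if (sched_setscheduler(0, SCHED_FIFO, &sp) == -1) {
        perror("sched_setscheduler");   /* usually needs root or CAP_SYS_NICE */
        return 1;
    }

    printf("now running under SCHED_FIFO, priority %d\n", sp.sched_priority);
    /* ... time-critical work would go here ... */
    return 0;
}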

Remark:
● POSIX can be useful for ensuring portability between different
operating systems, but it can also present challenges when
implementing it in an RTOS.
● RTOS vendors need to carefully consider the overhead and
suitability of the POSIX standard for real-time systems and may
need to provide their own optimized APIs and scheduling
algorithms to meet the specific requirements of real-time
applications.

Characteristic of Temporal data:
● Temporal data refers to data that is time-sensitive, such as data
that needs to be processed and responded to within a specific time
frame.
● Real-Time Operating Systems (RTOS) are designed to handle
temporal data efficiently, and they have some specific
characteristics that make them well-suited to this task.

Here are some of the key characteristics of RTOS that enable efficient
processing of temporal data.

1. Deterministic scheduling:
● RTOS provides deterministic scheduling, which means that tasks
are scheduled to run at specific times, and their execution time is
predictable.
● This allows applications to meet strict timing requirements.

2. Low latency:
● RTOS has low latency, which means that it can respond quickly to
external events.
● This is important for applications that need to process data in real
time, such as control systems or data acquisition systems.

3. Real-time event handling:


● RTOS can handle real-time events, such as interrupts or sensor
data, with high accuracy and precision.
● This is important for applications that need to respond to external
events quickly and accurately.

4. Priority-based scheduling:
● RTOS uses priority-based scheduling, which allows applications to
assign different priorities to different tasks.
● This enables applications to manage critical tasks efficiently and
ensure that they are executed on time.

5. Preemption:
● RTOS supports preemption, which means that higher-priority tasks
can interrupt lower-priority tasks if necessary.
● This is important for ensuring that critical tasks are executed on
time, even if non-critical tasks are currently running.

Temporal Consistency:

● Temporal consistency refers to the property of data being
consistent over time.
● In other words, if data is consistent at one point in time, it should
remain consistent at all subsequent points in time.
● Temporal consistency is particularly important in real-time
systems, where data needs to be processed and responded to
quickly and accurately.
● To ensure temporal consistency, real-time systems use techniques
such as time-stamping and synchronization.
● Time-stamping involves adding a timestamp to each piece of data,
which allows the system to track when the data was generated and
when it needs to be processed.
● Synchronization involves coordinating the timing of different tasks
to ensure that they are executed at the appropriate times.
Real-time systems also use mechanisms such as locks, semaphores, and
message passing to ensure that data is accessed and updated in a
consistent manner.
For example, locks can be used to ensure that only one task can access a
particular resource at a time, preventing inconsistencies that could arise
if multiple tasks accessed the resource simultaneously.
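A small illustrative sketch of the time-stamping idea above; the 200 ms validity interval and the sample structure are invented for the example. A task checks the age of a time-stamped value before acting on it:

#include <stdio.h>
#include <time.h>

#define VALIDITY_NS 200000000L   /* assume a sample stays valid for 200 ms */

struct sample {
    double value;
    struct timespec stamp;       /* when the value was produced */
};

static long elapsed_ns(const struct timespec *a, const struct timespec *b)
{
    return (b->tv_sec - a->tv_sec) * 1000000000L + (b->tv_nsec - a->tv_nsec);
}

/* Returns 1 if the sample is still fresh enough to be used. */
static int temporally_valid(const struct sample *s)
{
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);
    return elapsed_ns(&s->stamp, &now) <= VALIDITY_NS;
}

int main(void)
{
    struct sample s = { .value = 21.5 };
    clock_gettime(CLOCK_MONOTONIC, &s.stamp);   /* time-stamp at production */

    if (temporally_valid(&s))
        printf("use sample %.1f\n", s.value);
    else
        printf("sample too old, re-read the sensor\n");
    return 0;
}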

Concurrency Control:
● Concurrency control is the process of managing access to shared
resources in a concurrent system, such as a database or a Real-Time
Operating System (RTOS).

● Concurrency occurs when multiple processes or threads access the
same shared resource simultaneously, which can lead to data
inconsistencies or conflicts.
● The goal of concurrency control is to ensure that multiple
processes or threads can access shared resources in a controlled
and coordinated manner, without causing conflicts or
inconsistencies.

Concurrency control mechanisms can include:


1. Locking:
● Locking is a technique that prevents multiple processes or threads
from accessing a shared resource simultaneously.

● A lock is placed on the resource, and other processes or threads
must wait until the lock is released before accessing the resource.

2. Transactions:
● Transactions are a sequence of operations that are executed as a
single, indivisible unit.
● Transactions provide atomicity, consistency, isolation, and
durability (ACID) properties, which ensure that multiple processes
or threads can access shared resources in a coordinated manner.
3. Semaphores:
● Semaphores are a mechanism for controlling access to shared
resources.
● They maintain a count of the number of processes or threads that
are currently accessing the resource, and allow access to the
resource only when the count is below a certain threshold.

4. Read-Write Locks:
● Read-Write Locks are a type of lock that allows multiple threads to
read a shared resource simultaneously, but only one thread to write
to the resource at a time.
● This can improve performance by allowing multiple readers, but
still ensuring data consistency.

Here's an example of concurrency control in a simple banking
system:
Assume that there are two users, User A and User B, who have separate
accounts in a bank. Both users want to transfer some money from their
accounts to a third-party account at the same time. Here's how
concurrency control would be applied in this scenario:

Locking:
● When a user initiates a transaction, the system should first acquire
a lock on the user's account to prevent other users from modifying
it concurrently.
● In this case, User A's account and User B's account would be locked
before the transactions.
Execution:
● The transactions are executed in sequence to avoid conflicts.
● In this case, the transaction initiated by User A would be executed
first, followed by the transaction initiated by User B.
Unlocking:

● After the transactions are completed, the locks on the user
accounts are released to allow other users to access them.
● In this case, both User A's account and User B's account would be
unlocked after the transactions.

By using concurrency control, the bank ensures that the two users can
transfer money from their accounts to the third-party account without
any conflicts. If concurrency control was not used, the two transactions
might occur simultaneously and could potentially conflict with each
other, resulting in inconsistent data and potentially compromising the
integrity of the bank's data.
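A minimal sketch of the locking steps above, assuming pthread mutexes and in-memory balances; the account IDs and amounts are invented. Taking the two locks in a fixed order ensures that two concurrent transfers cannot deadlock:

#include <pthread.h>
#include <stdio.h>

struct account {
    int id;
    long balance;
    pthread_mutex_t lock;
};

/* Lock both accounts in a fixed (id) order, move the money, then
 * release the locks so other transfers can proceed. */
static void transfer(struct account *from, struct account *to, long amount)
{
    struct account *first  = (from->id < to->id) ? from : to;
    struct account *second = (from->id < to->id) ? to   : from;

    pthread_mutex_lock(&first->lock);
    pthread_mutex_lock(&second->lock);

    if (from->balance >= amount) {
        from->balance -= amount;
        to->balance   += amount;
    }

    pthread_mutex_unlock(&second->lock);
    pthread_mutex_unlock(&first->lock);
}

int main(void)
{
    struct account a = { 1, 1000, PTHREAD_MUTEX_INITIALIZER };   /* User A      */
    struct account b = { 2, 1000, PTHREAD_MUTEX_INITIALIZER };   /* User B      */
    struct account t = { 3,    0, PTHREAD_MUTEX_INITIALIZER };   /* third party */

    transfer(&a, &t, 300);   /* in a real system these would run in separate tasks */
    transfer(&b, &t, 500);

    printf("A=%ld B=%ld third-party=%ld\n", a.balance, b.balance, t.balance);
    return 0;
}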

Overview of Commercial Real-Time Databases:


● Commercial real-time databases are designed to handle large
volumes of data with high throughput and low latency.
● They are used in applications such as finance, telecommunications,
and manufacturing, where real-time data processing is critical.

Here's an overview of some popular commercial real-time databases.

1. Oracle TimesTen:
● Oracle TimesTen is a relational in-memory database that provides
sub-millisecond response times and high availability.
● It supports SQL and PL/SQL programming languages and can be
used with Oracle databases to provide real-time data analysis and
reporting.

2. IBM Db2:
● IBM Db2 is a relational database management system that provides
real-time data analysis and high availability.


● It supports SQL and other programming languages and provides
built-in analytics and machine learning capabilities.

3. Microsoft SQL Server:


● Microsoft SQL Server is a relational database management system
that provides real-time data processing and analysis.
● It supports SQL and other programming languages and includes
features such as high availability, security, and scalability.

4. SAP HANA:
● SAP HANA is an in-memory database management system that
provides real-time data processing and analytics.
● It supports SQL and other programming languages and includes
features such as high availability, security, and scalability.

5. VoltDB:
● VoltDB is an in-memory relational database management system
that provides real-time data processing and analysis.
● It supports SQL and other programming languages and includes
features such as high availability, security, and scalability.

These commercial real-time databases provide various features and
capabilities to meet the requirements of different applications. They
offer high-performance, real-time data processing, high availability,
security, scalability, and analytics capabilities to support various
business needs.

Previous Year Questions For 2 Marks

1. What do you mean by a real-time system?


2. Discuss issues in real-time system scenarios.
3. What is an Embedded system? Differentiate between
embedded systems and real-time systems.
4. Define TargetOS.
5. Compare an open system with a closed system.
6. What is the difference between hard and soft real-time
communication supported by a network?
7. Distinguish traffic shaping and policing.
8. What is meant by QoS routing?

9. Are all hard real-time systems usually safety-critical in
nature?

10. Scheduling decisions are made only at the arrival and
completion of tasks in a non-preemptive event-driven task
scheduler. Justify your answer.
11. Define the term Real-Time database.
12. What do you understand by a real-time operating system?
13. What do you understand by real-time Communication?
14. What do you understand by Multiple unit resources?
15. Discuss the real-time Application.

Edu Desire
Computer And Technology

The More You Practice, The Better You Get.

Thank You!
Follow me

YouTube Facebook Instagram Telegram

