Real-Time System
This PDF is designed for B.Tech students of all engineering colleges
affiliated with Dr APJ Abdul Kalam Technical University.
It provides help at exam time for a quick revision in a short time.
Compiled by
Sanjeev Yadav
Edu Desire
Computer & Technology
Unit Topic
1 Introduction
Definition, Typical Real-Time Applications: Digital Control, High-Level
Controls, Signal Processing etc., Release Times, Deadlines, and Timing
Constraints, Hard Real-Time Systems and Soft Real-Time Systems,
Reference Models for Real-Time Systems: Processors and Resources,
Temporal Parameters of Real-Time Workload, Periodic Task Model,
Precedence Constraints and Data Dependency.
2 Real-Time Scheduling
Common Approaches to Real-Time Scheduling: Clock Driven Approach,
Weighted Round Robin Approach, Priority Driven Approach, Dynamic
Versus Static Systems, Optimality of Earliest-Deadline-First (EDF) and
Least-Slack-Time-First (LST) Algorithms, Rate Monotonic Algorithm,
Offline Versus Online Scheduling, Scheduling Aperiodic and Sporadic
jobs in Priority Driven and Clock Driven Systems.
3 Resources Sharing
Effect of Resource Contention and Resource Access Control (RAC),
Non-preemptive Critical Sections, Basic Priority-Inheritance and
Priority-Ceiling Protocols, Stack Based Priority-Ceiling Protocol, Use of
Priority-Ceiling Protocol in Dynamic Priority Systems, Preemption
Ceiling Protocol, Access Control in Multiple-Unit Resources, Controlling
Concurrent Accesses to Data Objects.
4 Real-Time Communication
Basic Concepts in Real-time Communication, Soft and Hard RT
Definition:
● A real-time system is a type of computer system that is designed to
respond to input, process it, and provide output within a specified
time constraint.
● In other words, it is a system that must react to events and produce
results in real-time or near-real-time.
● These systems are often used in mission-critical applications, such
as aerospace, defence, and medical equipment, where a delay or
failure in processing could have severe consequences.
1. Hard real-time systems have strict timing constraints, and a
failure to meet them is treated as a system failure with potentially
severe consequences.
2. Soft real-time systems have timing constraints, but they are not
as strict, and a failure to meet them may not have severe
consequences.
2. Power systems: Real-time systems are used to monitor and control
power grids, ensuring that electricity is delivered to homes and
businesses in a reliable and efficient manner.
High-Level Control:
● Real-time systems are commonly used in high-level control
applications where a timely and accurate response to events is
critical. Here are some examples of high-level control applications
that utilize real-time systems:
1. Power grid control: Real-time systems are used in power grid control
for monitoring and controlling the flow of electricity in the grid, ensuring
that demand is met and that there is no disruption to the power supply.
2. Traffic management
3. Building automation
6. Environmental monitoring
● Here are some examples of typical real-time applications in signal
processing and control:
1. Digital Signal Processing (DSP): Real-time systems are used in DSP
applications for audio and video processing, image and speech
recognition, and data compression.
9. Soft deadlines: Soft deadlines are less critical, and a missed deadline
may not have significant consequences.
For example, in an audio and video processing system, a missed deadline
may result in a momentary glitch in the output, which may be
acceptable.
Difference between Hard and Soft Real-Time Systems:
● Data size: In a hard real-time system the size of a data file is
medium or small; in a soft real-time system the size is large.
● Flexibility: Hard real-time systems are not flexible and commonly
require full-deadline submissions; soft real-time systems are more
flexible than hard real-time systems and can manage if the deadline is
missed.
Reference Models: Reference models for real-time systems are
important to provide a framework for designing, analyzing, and
implementing real-time systems.
Processor model:
● The processor model specifies the characteristics of the processor
used in the real-time system, such as its clock speed, memory size,
and instruction set architecture.
● The processor model is critical because it determines the system's
processing power and affects the system's ability to meet its
real-time requirements.
Resource model:
● The resource model specifies the resources used by the real-time
system, such as memory, I/O devices, and network interfaces.
Task model:
● The task model specifies the tasks to be performed by the real-time
system, such as data acquisition, processing, and communication.
● The task model is critical because it determines the system's
workload and affects the system's ability to meet its real-time
requirements.
Communication model:
● The communication model specifies the communication protocols
and mechanisms used by the real-time system, such as message
passing, shared memory, and sockets.
● The communication model is essential because it affects the
system's ability to handle and process data and affects the system's
ability to meet its real-time requirements.
● The periodic task model specifies the following parameters.
Period: The period is the time interval between the start of two
consecutive instances of a periodic task. The period is critical because
it determines the system's workload and the timing of the tasks.
Deadline: The deadline is the time by which a periodic task must be
completed.
Release time: The release time is the time when a periodic task is first
available for execution.
For example: Consider a task Ti with period = 5 and execution time = 3.
The phase is not given, so assume the release time of the first job is
zero. The first job of this task is released at t = 0 and executes for 3
time units; the next job is released at t = 5 and executes for 3 time
units; the next job is released at t = 10, and so on. In general, jobs
are released at t = 5k for k = 0, 1, 2, ..., and the pattern repeats.
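The release pattern in this example can be reproduced with a short sketch (the helper below is illustrative, not part of the notes):

```python
def release_times(period, phase=0, count=5):
    """Release times of the first `count` jobs of a periodic task."""
    return [phase + period * k for k in range(count)]

# Task Ti with period = 5 and phase = 0: jobs are released at t = 5k.
print(release_times(5))  # [0, 5, 10, 15, 20]
```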
Precedence Constraints:
● Precedence constraints in a real-time system are used to specify
the order in which tasks must be executed to meet the system's
timing requirements.
● Precedence constraints are typically represented as directed
acyclic graphs (DAGs), where nodes represent tasks, and edges
represent the order in which tasks must be executed.
Precedence constraints can be classified as hard or soft constraints.
For example: Consider task T having 5 jobs J1, J2, J3, J4, and J5, such that
J2 and J5 cannot begin their execution until J1 completes and there are no
other constraints. The precedence constraints for this example are:
J1 < J2
J1 < J5
Consider another example where a precedence graph is given, and
you have to find precedence constraints.
J1 < J2
J2 < J3
J2 < J4
J3 < J4
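These constraints form a DAG, so any valid execution order can be obtained with a topological sort; a minimal sketch using Python's standard library:

```python
from graphlib import TopologicalSorter

# Predecessors of each job, from the constraints J1 < J2, J2 < J3,
# J2 < J4, J3 < J4.
constraints = {"J2": {"J1"}, "J3": {"J2"}, "J4": {"J2", "J3"}}

order = list(TopologicalSorter(constraints).static_order())
print(order)  # a valid execution order, e.g. ['J1', 'J2', 'J3', 'J4']
```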
Data Dependency:
● Data dependency refers to the relationship between different tasks
or processes in a real-time system that depends on the availability
of data.
● In a real-time system, data dependency is critical because the
availability of data affects the timing and completion of tasks.
Types of data dependency:
● Where the output of one task or process is used as the input for
another task or process, the dependency can be either read-only or
read-write.
● Read-only data dependency means that the output of one task or
process is read-only, and it does not affect the input of another
task or process.
● Read-write data dependency means that the output of one task or
process is written to memory and used as input for another task or
process.
Example: Consider a real-time system that controls the temperature of a
chemical reactor. The system consists of two tasks: a
temperature-sensing task and a temperature-control task; the control
task depends on the temperature data produced by the sensing task.
1. Hard real-time scheduling requires that tasks are completed
within a specific deadline, and any delay can result in system
failure or loss of data.
2. Soft real-time scheduling, on the other hand, allows for some
degree of delay but still requires that tasks are completed within a
reasonable timeframe.
1. Clock-Driven Approach:
● The Clock Driven approach is a real-time scheduling technique that
uses a fixed clock to divide time into equal intervals and assigns
tasks to each interval based on their deadline and priority.
● In this approach, tasks are assigned to a fixed time slot, and the
scheduler ensures that each task is executed within its allocated
time slot.
● If a task misses its deadline, it is either rescheduled or dropped,
depending on the criticality of the task.
● The Clock Driven approach is helpful in systems that have a high
degree of predictability and where tasks have fixed execution
times.
● It is commonly used in embedded systems, where tasks are
executed in a deterministic and predictable manner.
Time      Running task
0 ms      (idle)
10 ms     A
20 ms     B
30 ms     C
40 ms     A
50 ms     B
60 ms     C
70 ms     A
80 ms     B
90 ms     C
100 ms    A
In this example,
● The system clock is divided into time slots of 10 ms, and three
tasks, A, B, and C, are scheduled to run.
● Task A is assigned the first time slot, task B is assigned the second
time slot, and task C is assigned the third time slot.
● At time 0 ms, no tasks are running.
● At time 10 ms, task A starts running and continues until the end of
its time slot at 20 ms.
● At time 20 ms, task B starts running and continues until the end of
its time slot at 30 ms.
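The table above can be generated mechanically; a sketch (the function name is mine) that assigns each 10 ms slot cyclically to tasks A, B, and C:

```python
def clock_driven_slots(tasks, slot_ms, end_ms):
    """Assign each slot boundary (starting at slot_ms) to the tasks in
    round-robin order; the slot at t = 0 is left idle as in the example."""
    return [(t, tasks[(t // slot_ms - 1) % len(tasks)])
            for t in range(slot_ms, end_ms + 1, slot_ms)]

schedule = clock_driven_slots(["A", "B", "C"], 10, 100)
print(schedule[:3])  # [(10, 'A'), (20, 'B'), (30, 'C')]
print(schedule[-1])  # (100, 'A')
```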
Here is an example diagram of the weighted round-robin approach:
Task A: weight 3, requires 15 ms to complete
Task B: weight 2, requires 10 ms to complete
Task C: weight 1, requires 5 ms to complete
Time quantum: 5 ms
Time        Running task
0 - 5 ms    A
5 - 10 ms   B
10 - 15 ms  C
15 - 20 ms  A
20 - 25 ms  B
25 - 30 ms  A
● At time 0 ms, the scheduler starts with Task A since it has the
highest weight.
● Task A is executed for the first time quantum of 5 ms, until time 5
ms.
● Since Task A has not been completed, it is moved to the end of the
queue, and the scheduler switches to Task B, which is executed for
the next time quantum of 5 ms, until time 10 ms.
● Since Task B has not been completed, it is moved to the end of the
queue, and the scheduler switches to Task C, which is executed for
the next time quantum of 5 ms, until 15 ms.
● Since Task C has been completed, it is removed from the schedule
and the scheduler switches back to Task A, which is executed for
the next time quantum of 5 ms, until time 20 ms.
● Since Task A has not been completed, it is moved to the end of the
queue, and the scheduler switches back to Task B, which is
executed for the next time quantum of 5 ms, until time 25 ms.
● Since Task B has been completed, it is removed from the schedule,
and the scheduler switches back to Task A, which is executed for
the final time quantum of 5 ms, until time 30 ms.
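The walk-through above (one 5 ms quantum per turn, with unfinished tasks requeued and finished tasks removed) can be simulated directly. Note that, exactly as in the narrative, this sketch uses the weights only to order the initial queue:

```python
from collections import deque

def weighted_rr(tasks, quantum):
    """tasks: list of (name, remaining_ms), ordered by weight (highest
    first). Runs the head task for one quantum, requeues it if it is not
    finished, and returns the sequence of (start_ms, name) slices."""
    queue = deque(tasks)
    timeline, now = [], 0
    while queue:
        name, remaining = queue.popleft()
        timeline.append((now, name))
        now += quantum
        remaining -= quantum
        if remaining > 0:
            queue.append((name, remaining))
    return timeline

print(weighted_rr([("A", 15), ("B", 10), ("C", 5)], 5))
# [(0, 'A'), (5, 'B'), (10, 'C'), (15, 'A'), (20, 'B'), (25, 'A')]
```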
3. Priority-Driven Approach:
● The priority-driven approach is a real-time scheduling algorithm
that assigns a priority to each task, based on its importance or
urgency.
● The scheduler then schedules tasks based on their priority, with
higher-priority tasks being executed before lower-priority tasks.
Here is an example of the priority-driven approach:
Task A: highest priority, requires 20 ms to complete
Task B: medium priority, requires 15 ms to complete
Task C: lowest priority, requires 10 ms to complete
● At time 0 ms, the scheduler starts with Task A since it has the
highest priority.
● Task A is executed until it completes at 20 ms.
● Since Task A has been completed, the scheduler switches to Task B,
which is the next highest priority task.
● Task B is executed until it completes at 35 ms.
● Finally, the scheduler switches to Task C, which is the
lowest-priority task.
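The run-to-completion behaviour described above can be sketched with a priority queue (a smaller number meaning a higher priority is my convention here):

```python
import heapq

def priority_schedule(tasks):
    """tasks: list of (priority, name, exec_ms); lower number = higher
    priority. Runs each task to completion in priority order and
    returns (name, finish_ms) pairs."""
    heap = list(tasks)
    heapq.heapify(heap)
    finish, now = [], 0
    while heap:
        _, name, exec_ms = heapq.heappop(heap)
        now += exec_ms
        finish.append((name, now))
    return finish

print(priority_schedule([(1, "A", 20), (2, "B", 15), (3, "C", 10)]))
# [('A', 20), ('B', 35), ('C', 45)]
```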
Comparison of online (dynamic) and offline (static) scheduling:
● Scheduling complexity: Online scheduling has high complexity, due to
the need to respond to runtime changes; offline scheduling has low
complexity, since the schedule is fixed.
● Resource allocation: Online systems allocate resources dynamically
based on the current workload or demand; offline systems can
pre-allocate resources based on known requirements.
● Predictability: Online scheduling is less predictable, due to runtime
changes in task arrival and resource demands; offline scheduling is more
predictable, since all requirements are known in advance.
Task B: deadline 3 ms, requires 5 ms to complete
Task C: deadline 7 ms, requires 15 ms to complete

Time        Running task
0 - 5 ms    B
5 - 15 ms   A
15 - 30 ms  C
● At time 0 ms, the scheduler starts with Task B since it has the
earliest deadline.
● Task B is executed until it completes at 5 ms.
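A non-preemptive sketch of this schedule follows. Task A's parameters were lost from the notes, so a deadline of 5 ms and an execution time of 10 ms are assumed here purely to reproduce the B, A, C order shown above:

```python
def edf_schedule(tasks):
    """tasks: list of (name, deadline_ms, exec_ms), all released at t = 0.
    Non-preemptive EDF sketch: run tasks in order of earliest deadline."""
    timeline, now = [], 0
    for name, deadline, exec_ms in sorted(tasks, key=lambda t: t[1]):
        timeline.append((now, now + exec_ms, name))
        now += exec_ms
    return timeline

# Task A's deadline (5 ms) and execution time (10 ms) are assumptions.
print(edf_schedule([("A", 5, 10), ("B", 3, 5), ("C", 7, 15)]))
# [(0, 5, 'B'), (5, 15, 'A'), (15, 30, 'C')]
```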
Remark: If a task's slack time is negative, the task can no longer meet
its deadline. In this case, LST gives it the highest priority (the task
with the smallest, i.e. most negative, slack time is scheduled first) so
that it finishes as early as possible.
Here's an example of how the LST algorithm works:

Time        Running task
0 - 15 ms   B
15 - 25 ms  C
25 - 45 ms  A
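The slack computation behind LST is simple to state in code; the task parameters below are hypothetical, chosen only to illustrate the selection rule:

```python
def slack(deadline, now, remaining_exec):
    """Slack time = deadline - current time - remaining execution time."""
    return deadline - now - remaining_exec

def lst_pick(tasks, now):
    """Pick the ready task with the least slack; a negative slack
    (deadline no longer reachable) sorts first automatically."""
    return min(tasks, key=lambda t: slack(t[1], now, t[2]))

# (name, deadline_ms, remaining_exec_ms): hypothetical values.
tasks = [("A", 50, 20), ("B", 20, 15), ("C", 45, 10)]
print(lst_pick(tasks, now=0))  # ('B', 20, 15): slack 5 is the smallest
```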
Difference between EDF and LST:
● Selection criterion: EDF gives the highest priority to the task with
the earliest deadline; LST gives the highest priority to the task with
the least slack time remaining until its deadline.
● Type of scheduling: Both are preemptive.
● Algorithm complexity: Both are of moderate complexity.
Here is an example of the Rate Monotonic algorithm:

Time        Running task
0 - 15 ms   B
15 - 25 ms  C
25 - 45 ms  A
● At time 0 ms, all tasks are ready to be executed, so the scheduler
selects the task with the highest priority, which is Task B.
● Task B is executed until it completes at 15 ms.
● At 15 ms, the scheduler checks which task is ready to be executed
next.
● Since Task C has a period of 40 ms, it is prepared to be executed
next.
● Task C is executed until it completes at 25 ms.
● At 25 ms, the scheduler checks which task is prepared to be
executed next.
● Since Task A has a period of 50 ms, it is prepared to be executed
next.
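Rate monotonic priorities follow one rule: the shorter the period, the higher the priority. A sketch using the periods mentioned above (Task B's period was lost from the notes and is assumed to be 30 ms, since B runs first):

```python
def rate_monotonic_priorities(tasks):
    """tasks: list of (name, period_ms). Rate monotonic assigns higher
    priority to shorter periods; here rank 1 is the highest priority."""
    ordered = sorted(tasks, key=lambda t: t[1])
    return {name: rank + 1 for rank, (name, _) in enumerate(ordered)}

# Periods: C = 40 ms and A = 50 ms are from the example; B = 30 ms is
# an assumption consistent with B having the highest priority.
print(rate_monotonic_priorities([("A", 50), ("B", 30), ("C", 40)]))
# {'B': 1, 'C': 2, 'A': 3}
```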
● Offline scheduling cannot handle unexpected events or changes in
task requirements that occur during execution.
● Online scheduling is flexible and adaptable to changes, but it may
not be able to optimize the schedule as effectively as offline
scheduling.
Difference between Sporadic and Aperiodic Real-time Tasks:
● Sporadic task: released at arbitrary instants, but with a known
minimum separation between consecutive releases; it usually has a hard
deadline.
● Aperiodic task: released at arbitrary instants with no minimum
separation between releases; it usually has a soft deadline or no
deadline.
Scheduling Aperiodic and Sporadic jobs in Priority Driven:
● In priority-driven scheduling, tasks are assigned priorities based
on their importance and urgency.
● The scheduler then selects the task with the highest priority to
execute next.
● Aperiodic and sporadic jobs are two types of tasks that can be
scheduled in a priority-driven system.
6. Once task B is complete, the scheduler will return to executing task
A.
ir
7. If task C arrives at a time of 200 ms, the scheduler will execute it
since it is the highest-priority task at that time.
8. If task C is not complete before its deadline, it will be considered a
missed deadline.
In this way, priority-driven scheduling can handle aperiodic and
sporadic tasks by assigning priorities based on their timing requirements
and ensuring they meet their deadlines.
Scheduling Aperiodic and Sporadic jobs in Clock-Driven Systems:
● Clock-driven scheduling systems are used in real-time systems to
execute tasks at predetermined times according to a precomputed,
time-triggered schedule.
● Aperiodic and sporadic jobs are typically executed in the idle
intervals (slack) of this schedule, or in time slots reserved for them.
Effect of Resource Contention:
● Resource contention occurs when multiple processes or threads
compete for the same limited resources, such as CPU time, memory,
disk I/O, or network bandwidth.
For Example:
● Imagine a computer program that needs to read and write data to a
shared file on a disk.
● If multiple instances of the program are running simultaneously,
they may all try to access the file at the same time, causing resource
contention.
● This could lead to delays in reading or writing the file, or even
data corruption if the program does not handle concurrent access
correctly.
For Example:
● In a real-time operating system, RAC can be used to control access
to the CPU by assigning different priorities to processes based on
their criticality.
● This ensures that processes with higher priority are given access to
the CPU first and that they are not delayed by lower-priority
processes.
● In a non-preemptive critical section, once a process has entered the
section, it cannot be preempted by another process until it has
completed the critical section and released the shared resources.
Here's a simple example to illustrate non-preemptive critical
sections:
● Consider a real-time system with two processes, A and B, that both
need to access a shared resource.
● If process A enters the critical section first, it will hold the resource
until it completes the section and releases the resource.
● During this time, process B cannot enter the critical section, and
must wait until process A has released the resource.
● If the system is non-preemptive, then even if process B has a higher
priority than process A, it cannot preempt process A and enter the
critical section until it has been released by process A.
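The scenario above can be demonstrated with an ordinary (non-preemptible) lock. In this sketch the main thread plays process A and a second thread plays the higher-priority process B, which must still wait:

```python
import threading
import time

lock = threading.Lock()   # guards the shared resource
events = []

def process_b():
    with lock:            # B blocks here until A releases the lock
        events.append("B enters critical section")

# Process A (even if lower priority) enters the critical section first.
lock.acquire()
events.append("A enters critical section")

b = threading.Thread(target=process_b)
b.start()
time.sleep(0.1)           # A keeps working; B waits, it cannot preempt A
events.append("A leaves critical section")
lock.release()
b.join()
print(events)
# ['A enters critical section', 'A leaves critical section',
#  'B enters critical section']
```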
Basic Priority-Inheritance:
● In a real-time system, priority inheritance is a synchronization
mechanism that helps to prevent priority inversion and ensure that
critical sections are executed in a timely and efficient manner.
● Priority inversion occurs when a high-priority task is blocked by a
lower-priority task that is holding a shared resource, such as a
semaphore or a mutex.
● This can lead to delays or missed deadlines and can have serious
consequences in safety-critical systems.
● However, if task C needs to access the semaphore while task A is
holding it, it will be blocked and unable to proceed.
● This can cause a priority inversion: task A is holding the
semaphore even though task C has a higher priority, so task C cannot
proceed until the semaphore is released.
However, if task B has a higher priority than task A while A holds the
resource, B can preempt task A and block it, causing a priority
inversion.
● When task A enters the critical section and acquires the resource, it
raises its priority to the priority ceiling of the resource.
● This ensures that task B cannot preempt task A while it is holding
the resource, preventing priority inversion.
Here's a diagram that shows how the PCP works:

Time   High-Priority Task           Low-Priority Task
0                                   Acquires resource
1      Tries to acquire resource
2      Blocks (priority ceiling)
3                                   Uses resource
4                                   Releases resource
5      Acquires resource
6                                   Blocks (priority ceiling)
7      Uses resource
8      Releases resource
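The rule driving this behaviour can be stated in one line: a resource's priority ceiling is the highest priority of any task that may use it. A sketch with hypothetical task priorities (a larger number meaning a higher priority here):

```python
def priority_ceiling(resource_users, priorities):
    """Priority ceiling of a resource = highest priority among the
    tasks that may use it (larger number = higher priority)."""
    return max(priorities[task] for task in resource_users)

# Hypothetical setup: tasks A (low) and B (high) both use resource R.
priorities = {"A": 1, "B": 2}
ceiling = priority_ceiling({"A", "B"}, priorities)
print(ceiling)  # 2: while A holds R it runs at priority 2, so B cannot preempt it
```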
● As a result, task C will block until task B releases resource 2.
When a task releases a shared resource, it pops the priority ceiling of the
resource from the stack, and its priority is lowered to the new top of the
stack. This ensures that the priority of the task is lowered to the highest
priority ceiling of any remaining shared resources it holds.
Dynamic Priority Systems: In real-time systems, tasks are executed
with strict timing requirements. Dynamic priority systems are a common
scheduling technique used in real-time systems to manage the execution
of tasks.
● Tasks with higher priority values are executed before tasks with
lower priority values.
● This means that the system will always prioritize the most critical
tasks first, ensuring that they meet their timing requirements.
1. Task deadline - tasks with closer deadlines are given higher priority.
2. Task importance - tasks that are more critical to the system's
operation are given higher priority.
● When a task requests a shared resource, its priority is temporarily
raised to the priority ceiling of the resource.
● This ensures that if a lower-priority task holds the resource, it
cannot block a higher-priority task that requires the resource.
For Example:
● Suppose there are two tasks, T1 and T2, with dynamic priorities,
and a shared resource R.
● Task T1 has a higher priority than Task T2.
● Without the Priority Ceiling Protocol, T2 could potentially hold the
resource R, preventing T1 from executing and meeting its timing
requirements.
However, with the Priority Ceiling Protocol, the priority ceiling of the
resource R is set to the priority of T1 (the highest-priority task that
may use R), so T2 cannot block T1 for longer than one critical section.
For Example:
● Consider a real-time system with two tasks, T1 and T2.
● Task T1 has a higher priority than Task T2, and they both require
access to a shared resource R, which has a preemption ceiling
priority equal to the priority of T1.
● If T1 attempts to access resource R while T2 is currently holding it,
T2's priority is raised to that of T1.
● This means that T2 cannot be preempted by intermediate-priority tasks
while it finishes using R, ensuring that the system meets its timing
requirements.
Access Control in Multiple-Unit Resources: Access control in
multiple-unit resources refers to the management of shared resources
that have more than one unit or instance.
Examples of such resources include printers, disk drives, and
communication channels.
Locking: In locking, a task can lock one or more units of the resource
before accessing them. Once a unit is locked, no other task can access
it until it is released; other tasks wait until they are granted access
to the resource units.
Access control for multiple-unit resources often involves the use of
specialized algorithms and protocols to address these issues.
For example, the Spooling protocol is commonly used for print spooling,
where multiple print jobs are queued and allocated to available printers.
1. Mutual Exclusion:
● Mutual exclusion is a technique that ensures only one task or
process can access a data object at a time.
● The most common way to implement mutual exclusion is through
the use of locks.
2. Semaphores:
● A semaphore is a synchronization technique that allows multiple
tasks to access a data object simultaneously up to a certain limit.
● A semaphore maintains a count of the number of tasks currently
accessing the object and restricts access to a specified number of
tasks at a time.
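A counting semaphore enforcing "at most two tasks in the data object at once" can be sketched with Python's threading module (task count and timing here are illustrative):

```python
import threading
import time

sem = threading.Semaphore(2)   # at most 2 tasks may access the object
guard = threading.Lock()       # protects the counters below
inside = 0
max_inside = 0

def task():
    global inside, max_inside
    with sem:                  # blocks while 2 tasks are already inside
        with guard:
            inside += 1
            max_inside = max(max_inside, inside)
        time.sleep(0.02)       # simulate work on the shared data object
        with guard:
            inside -= 1

threads = [threading.Thread(target=task) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(max_inside)  # never exceeds 2
```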
3. Monitors:
● A monitor is an abstract data type that encapsulates data and
methods that manipulate the data.
● A monitor ensures that only one task at a time can execute the
methods of the monitor, thereby ensuring mutual exclusion.
4. Read-Write Locks:
● Read-Write Locks are a synchronization technique that allows
multiple tasks to read the data object simultaneously while
preventing simultaneous writes.
● When a task wants to read the data object, it acquires a read lock
on the object, which allows multiple readers to access the data.
● However, when a task wants to write to the data object, it acquires
a write lock, which prevents other tasks from reading or writing to
the data object.
● Latency refers to the delay between when data is sent and when it
is received.
● In real-time communication, low latency is crucial to ensure that
communication is smooth and uninterrupted.
2. Bandwidth:
● Bandwidth refers to the amount of data that can be transmitted
over a network at any given time.
● The higher the bandwidth, the more data can be transmitted, which
is important in real-time communication for maintaining a good
quality of service.
3. Packet loss:
● Packet loss refers to the loss of data packets during transmission.
● In real-time communication, packet loss can degrade quality and make
audio or video playback less smooth.
4. Jitter:
● Jitter is the variation in latency over time.
● In real-time communication, jitter can cause delays and
interruptions, which can lead to a poor user experience.
● Soft real-time communication systems are those that have timing
requirements that are not as strict as hard real-time systems.
● They allow some flexibility in meeting timing deadlines, and
missing a deadline may not result in catastrophic consequences.
● Here are some basic concepts of soft real-time communication
systems.
1. Timing Constraints:
● Soft real-time communication systems have timing constraints that
must be met to ensure optimal performance.
● However, there is some flexibility in meeting these timing
constraints, and a missed deadline is not critical.
● This is in contrast to hard real-time systems, where a missed
deadline can have serious consequences.
3. Data Compression:
● Soft real-time communication systems often use data compression
techniques to minimize the amount of data that needs to be
transmitted.
● This helps to reduce bandwidth requirements and improve the
overall performance of the system.
5. Network Protocols:
● Soft real-time communication systems rely on network protocols to
ensure that data is transmitted reliably and efficiently over the
network.
● Some common protocols used in soft real-time communication
systems include RTP (Real-time Transport Protocol) for audio and
video streaming and SIP (Session Initiation Protocol) for voice and
video calls.
6. Resource Allocation:
● Soft real-time communication systems must allocate resources,
such as CPU time and memory, efficiently to ensure that the system
meets its timing and QoS requirements.
● This may involve using specialized hardware or software to
optimize performance.
Basic concepts of Hard Real-time communication (RT) systems:
● Hard real-time communication (RT) systems are those that have
strict timing requirements that must always be met.
● Here are some basic concepts of hard real-time communication
systems:
1. Timing Constraints:
● Hard real-time communication systems have strict timing
constraints that must be met to ensure optimal performance.
● A missed deadline can result in serious consequences such as
system crashes, data loss, or physical harm to users.
2. Predictability:
● Hard real-time communication systems are designed to be
predictable and deterministic, with known worst-case execution
times for all operations.
3. Resource Reservation:
● In hard real-time communication systems, resources such as CPU
time, memory, and network bandwidth are often reserved in
advance to ensure that they are available when needed.
● This may involve using specialized hardware or software to allocate
resources efficiently.
4. Fault Tolerance:
● Hard real-time communication systems often incorporate
fault-tolerance mechanisms to ensure that the system continues to
operate correctly in the event of hardware or software failures.
● This may involve redundancy, error detection and correction, or
failover mechanisms.
5. Priority-Based Scheduling:
● Priority-based scheduling is often used in hard real-time
communication systems to ensure that tasks are executed in the
correct order and with the necessary timing constraints.
● Higher-priority tasks are executed before lower-priority tasks, and
tasks with critical timing requirements are given the highest
priority.
1. Sender:
● The sender is the entity that initiates the communication by
sending data to the receiver.
● In RTC, the sender may be a person using a device such as a phone,
computer, or tablet.
2. Network:
● The network refers to the infrastructure that carries the data
between the sender and the receiver.
● This may include wired or wireless networks, such as the internet,
cellular networks, or local area networks.
3. Protocol:
● The protocol is the set of rules that governs how the data is
transmitted between the sender and the receiver.
● In RTC, the protocol must be designed to handle real-time
constraints, such as low latency and low packet loss, to ensure that
the communication is timely and efficient.
4. Receiver:
● The receiver is the entity that receives the data transmitted by the
sender and presents it to the user.
5. Media:
● The media refers to the type of data being transmitted between the
sender and receiver.
● This may include audio, video, or other types of data.
6. Codec:
● The codec is the software or hardware that compresses and
decompresses the media to reduce the amount of data that needs to
be transmitted.
● In RTC, codecs are designed to be efficient and fast to minimise
latency and ensure real-time performance.
7. Application:
● The application is the software or system that enables real-time
communication between the sender and receiver.
● This may include communication platforms, video conferencing
systems, or messaging applications.
Together, these components deliver real-time performance, with the
receiver receiving the data and using an application to enable
communication. This model is essential for many applications, such as
video conferencing, online gaming, and real-time streaming.
● Weighted Round Robin (WRR) is a service discipline used in
switched networks to allocate bandwidth among multiple queues.
● It is a variation of the round-robin discipline, where each queue is
allocated a weight proportional to its priority level or the desired
amount of bandwidth.
● In a WRR service discipline, each queue is assigned a weight that
determines the fraction of bandwidth that should be allocated to
that queue.
● The scheduler then selects packets from each queue in a
round-robin fashion, taking into account the weight assigned to
each queue.
For Example:
● Consider a switched network with two queues.
● Queue 1 is assigned a weight of 3, and Queue 2 is assigned a weight
of 1.
● This means that Queue 1 should receive three times as much
bandwidth as Queue 2.
● The scheduler would select 3 packets from Queue 1 for every 1
packet selected from Queue 2.
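The 3-to-1 service pattern described above can be sketched directly: per round, each queue is allowed to send up to its weight in packets (packet names are hypothetical):

```python
def wrr_order(queues, weights):
    """Weighted round robin over packet queues: per cycle, take up to
    `weight` packets from each queue, skipping queues that are empty."""
    queues = {name: list(pkts) for name, pkts in queues.items()}
    order = []
    while any(queues.values()):
        for name, weight in weights.items():
            order.extend(queues[name][:weight])
            del queues[name][:weight]
    return order

# Queue 1 (weight 3) vs Queue 2 (weight 1): 3 packets of Q1 per packet of Q2.
queues = {"Q1": ["a1", "a2", "a3", "a4", "a5", "a6"], "Q2": ["b1", "b2"]}
print(wrr_order(queues, {"Q1": 3, "Q2": 1}))
# ['a1', 'a2', 'a3', 'b1', 'a4', 'a5', 'a6', 'b2']
```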
Here are some examples of MAC protocols commonly used in
broadcast networks:
1. Carrier Sense Multiple Access/Collision Detection (CSMA/CD):
● This is a popular MAC protocol used in wired networks, such as
Ethernet.
● In CSMA/CD, a device listens to the communication medium before
transmitting to ensure that it is not already in use.
● If the medium is idle, the device transmits its packet.
● If there is a collision with another packet, the devices involved
back off for a random time before attempting to retransmit.
3. Token Passing:
● This MAC protocol uses a token that is passed from device to
device in a predetermined order.
● The device holding the token is allowed to transmit its packet.
● Once the transmission is complete, the token is passed to the next
device in the predetermined order.
● IRPs provide resource reservation mechanisms that can be used in
various applications, such as multimedia streaming, real-time
communications, and cloud computing.
There are several types of IRPs, including:
1. Resource Reservation Protocol (RSVP):
● RSVP is a signalling protocol used to reserve network resources for
specific flows.
● It is commonly used in real-time multimedia applications and is
designed to provide QoS guarantees for such applications.
Resource Reservation Protocols:
● Resource Reservation Protocols (RRPs) play a critical role in
real-time systems, as they provide mechanisms to guarantee the
availability of system resources during the execution of real-time
tasks.
● RRPs are used to reserve system resources, such as CPU time,
memory, and network bandwidth, to ensure that real-time tasks
meet their timing and performance requirements.
An example of an RRP used in real-time systems is RSVP (described
above), which reserves network bandwidth for specific flows between
communicating devices.
Overall, RRPs are essential for ensuring the timely and reliable execution
of real-time tasks in real-time systems. The choice of RRP depends on the
specific resource and timing requirements of the real-time tasks.
● This predictability is essential for systems that require fast and
reliable responses to external events, such as industrial control
systems, medical devices, and automotive systems.
2. Real-time scheduling:
● RTOS uses real-time scheduling algorithms to ensure that critical
tasks get executed on time.
3. Multitasking:
● RTOS supports multitasking, which means that it can run multiple
tasks simultaneously.
● This allows the system to perform multiple functions at the same
time, without compromising the performance of any one task.
4. Interrupt handling:
● RTOS provides efficient interrupt handling, which is critical for
real-time systems.
● Interrupts are used to handle external events, and RTOS ensures
that the system can respond to these events quickly and accurately.
5. Memory management:
● RTOS provides memory management features that are optimized
for real-time systems.
● It ensures that memory is allocated and released in a timely
manner, without causing delays or blocking other tasks.
6. Communication mechanisms:
● RTOS provides communication mechanisms that allow tasks to
communicate with each other efficiently.
7. Small footprint:
● RTOS typically has a small memory footprint, which is important for
embedded systems and other systems with limited resources.
8. Scalability:
● RTOS is scalable, which means it can be used in systems of
different sizes and complexity.
● It can be used in simple embedded systems as well as complex
systems with multiple processors.
Time Services in an RTOS:
1. Tick timer:
● A tick timer is a periodic interrupt generated by the RTOS, which
allows the operating system to keep track of time.
● The tick timer is typically generated at a fixed interval, such as every millisecond, and is used by the RTOS scheduler to determine when to switch tasks.
2. Clocks:
● An RTOS typically provides a clock function that returns the
current system time in a specific format, such as hours, minutes,
and seconds.
● The system time is often used by applications to timestamp events
or to schedule tasks.
3. Timers:
● Timers are used to schedule tasks or events to occur at a specific
time or after a specific period.
● The RTOS provides a timer function that allows an application to create, start, and stop such timers and to be notified when a timer expires.
4. Delays:
● An RTOS provides delay functions that suspend the calling task for a specified period, during which other tasks can run.
5. Time synchronization:
● In distributed systems, it is essential to synchronize the clocks of
different nodes to ensure that they agree on the current time.
● RTOS often provides time synchronization services that allow
nodes to synchronize their clocks with a master clock.
Why UNIX Is Unsuitable for Real-Time Applications:
1. Non-deterministic behaviour:
● UNIX was not designed with real-time requirements in mind, and it
does not provide deterministic behaviour.
● UNIX uses a time-sharing model, where processes are scheduled
based on their priority and how long they have been waiting.
● This makes it difficult to guarantee that a task will be executed
within a specific time frame.
2. Interrupt handling:
● UNIX's interrupt handling mechanisms are not optimized for
real-time applications.
● UNIX uses interrupt handlers, which can take a significant amount
of time to execute, and this can affect the response time of the
system.
3. Memory management:
● UNIX's memory management system is not optimized for real-time
applications.
● UNIX uses virtual memory, which can lead to page faults and delays
in memory access.
Limitations of POSIX for Real-Time Systems:
1. Compliance:
● While most modern RTOSes have some level of POSIX compliance,
not all of them support the full POSIX standard.
● This can be an issue if an application relies on specific POSIX
features that are not supported by the RTOS.
2. Overhead:
● The POSIX standard is designed to be portable and generic, which
means that it can introduce overhead in RTOS environments.
● This can be problematic for real-time applications that require low
latency and fast response times.
3. Real-time scheduling:
● The POSIX standard defines a set of scheduling policies, such as Round-Robin, FIFO, and Priority-Based, which may not be sufficient for every real-time application's scheduling needs.
4. File I/O:
● The POSIX standard defines a set of APIs for file I/O, but these APIs
may not be suitable for real-time systems.
● For example, file I/O operations can introduce unpredictable delays in real-time systems.
5. Memory allocation:
● The POSIX standard defines a set of memory allocation functions,
but these functions may not be optimized for real-time systems.
● In real-time systems, memory allocation can introduce delays and
may need to be managed more efficiently.
Characteristics of Temporal Data:
● Temporal data refers to data that is time-sensitive, such as data that needs to be processed and responded to within a specific time frame.
● Real-Time Operating Systems (RTOS) are designed to handle temporal data efficiently, and they have some specific characteristics that make them well-suited to this task.
Here are some of the key characteristics of RTOS that enable efficient processing of temporal data.
1. Deterministic scheduling:
● RTOS provides deterministic scheduling, which means that tasks are executed in a predictable order and complete within known time bounds.
2. Low latency:
● RTOS has low latency, which means that it can respond quickly to
external events.
● This is important for applications that need to process data in real
time, such as control systems or data acquisition systems.
3. Preemption:
● RTOS supports preemption, which means that higher-priority tasks
can interrupt lower-priority tasks if necessary.
● This is important for ensuring that critical tasks are executed on
time, even if non-critical tasks are currently running.
Temporal Consistency:
● Temporal consistency refers to the property of data being
consistent over time.
● In other words, if data is consistent at one point in time, it should
remain consistent at all subsequent points in time.
● Temporal consistency is particularly important in real-time
systems, where data needs to be processed and responded to
quickly and accurately.
● To ensure temporal consistency, real-time systems use techniques
such as time-stamping and synchronization.
● Time-stamping involves adding a timestamp to each piece of data,
which allows the system to track when the data was generated and
when it needs to be processed.
Concurrency Control:
● Concurrency control is the process of managing access to shared
resources in a concurrent system, such as a database or a Real-Time
Operating System (RTOS).
Several mechanisms are used for concurrency control:
1. Locking:
● A lock is placed on the resource, and other processes or threads must wait until the lock is released before accessing the resource.
2. Transactions:
● Transactions are a sequence of operations that are executed as a
single, indivisible unit.
● Transactions provide atomicity, consistency, isolation, and
durability (ACID) properties, which ensure that multiple processes
or threads can access shared resources in a coordinated manner.
3. Semaphores:
● Semaphores are a mechanism for controlling access to shared
resources.
4. Read-Write Locks:
● Read-Write Locks are a type of lock that allows multiple threads to
read a shared resource simultaneously, but only one thread to write
to the resource at a time.
● This can improve performance by allowing multiple readers, but
still ensuring data consistency.
Example: Suppose User A and User B each initiate a transfer from their own account to the same third-party account. The bank's concurrency control works in three steps.
Locking:
● When a user initiates a transaction, the system should first acquire
a lock on the user's account to prevent other users from modifying
it concurrently.
● In this case, User A's account and User B's account would be locked
before the transactions.
Execution:
● The transactions are executed in sequence to avoid conflicts.
● In this case, the transaction initiated by User A would be executed
first, followed by the transaction initiated by User B.
Unlocking:
● After the transactions are completed, the locks on the user
accounts are released to allow other users to access them.
● In this case, both User A's account and User B's account would be
unlocked after the transactions.
By using concurrency control, the bank ensures that the two users can
transfer money from their accounts to the third-party account without
any conflicts. If concurrency control was not used, the two transactions
might occur simultaneously and could potentially conflict with each
other, resulting in inconsistent data and potentially compromising the
integrity of the bank's data.
Examples of Real-Time Database Systems:
1. Oracle TimesTen:
● Oracle TimesTen is a relational in-memory database that provides
sub-millisecond response times and high availability.
● It supports SQL and PL/SQL programming languages and can be
used with Oracle databases to provide real-time data analysis and
reporting.
2. IBM Db2:
● IBM Db2 is a relational database management system that provides high performance, scalability, and support for both transactional and analytical workloads.
3. SAP HANA:
● SAP HANA is an in-memory database management system that
provides real-time data processing and analytics.
● It supports SQL and other programming languages and includes
features such as high availability, security, and scalability.
9. Are all hard real-time systems usually safety‐critical in
nature?
10. Scheduling decisions are made only at the arrival and completion of tasks in a non-preemptive event-driven task scheduler. Justify your answer.
11. Define the term real-time database.
12. What do you understand by a real-time operating system?
13. What do you understand by real-time communication?
14. What do you understand by multiple-unit resources?
15. Discuss real-time applications.