RTES - Ch1-5 - Handout

Hawassa University - IOT Faculty of Informatics Department of Computer Science

Chapter 1: Introduction to Real Time and Embedded Systems


1. Overview of Embedded and Real time systems
1.1.Overview of Embedded system
An embedded system is a specialized computer system that is part of a larger system or machine.
Embedded systems can also be thought of as information processing subsystems integrated in a larger
system. As part of a larger system, it largely determines that system's functionality. An embedded system usually
contains an embedded processor. Many appliances that have a digital interface -- microwaves, VCRs,
cars -- utilize embedded systems. Some embedded systems include an operating system. Others are very
specialized resulting in the entire logic being implemented as a single program. These systems are
embedded into some device for some specific purpose other than to provide general purpose computing.
A typical embedded system is shown in Figure 1 below.
A general definition of embedded systems is: embedded systems are computing systems with tightly
coupled hardware and software integration that are designed to perform a dedicated function. The word
embedded reflects the fact that these systems are usually an integral part of a larger system, known as the
embedding system. Multiple embedded systems can coexist in an embedding system.

Figure 1: A Typical Embedded System

1||Lecture Handout: (RTES) Chapter 1 – Prepared by Ayele S


There are over 3 billion embedded CPUs sold each year. Embedded CPUs are growing at a faster rate
than desktop processors (Figure 2). A large part of this growth is in smaller (4-, 8-, and 16-bit) CPUs and
DSPs.

Figure 2: Embedded systems dominate the microprocessor landscape


Embedded systems provide several functions (Figure 3):
 Monitor the environment: embedded systems read data from input sensors. This data is then
processed and the results displayed in some format to a user or users.
 Control the environment: embedded systems generate and transmit commands for actuators.
 Transform the information: embedded systems transform the data collected in some meaningful way,
such as data compression/decompression.
Although interaction with the external world via sensors and actuators is an important aspect of
embedded systems, these systems also provide functionality specific to their applications. Embedded
systems typically execute applications such as control laws, finite state machines, and signal processing
algorithms. These systems must also detect and react to faults in both the internal computing
environment as well as the surrounding electromechanical systems.

Figure 3: Sensors and Actuators in an Embedded System
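The monitor/transform/control cycle above can be sketched as a simple polling loop. This is an illustrative sketch only; `read_sensor`, `compute_command`, and the list of simulated readings are hypothetical stand-ins for platform-specific sensor and actuator I/O.

```python
# Minimal sketch of an embedded control cycle: monitor the environment via
# a sensor, transform the reading, and control the environment via an actuator.
# read_sensor and the actuator output are simulated with plain Python values.

def read_sensor(samples):
    """Return the next raw sensor reading (simulated here by a list)."""
    return samples.pop(0)

def compute_command(reading, setpoint):
    """Transform the reading into an actuator command (simple proportional control)."""
    error = setpoint - reading
    return 0.5 * error          # proportional gain of 0.5

def control_loop(samples, setpoint):
    commands = []
    while samples:
        reading = read_sensor(samples)                 # monitor
        command = compute_command(reading, setpoint)   # transform
        commands.append(command)                       # control (would drive an actuator)
    return commands

commands = control_loop([20.0, 22.0, 25.0], setpoint=25.0)
print(commands)  # [2.5, 1.5, 0.0] - commands shrink as the reading nears the setpoint
```

A real system would replace the list of samples with device-register reads and the returned commands with actuator writes, typically inside a timed loop.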

There are many categories of embedded systems, from communication devices to home
appliances to control systems. Examples include;
• Communication devices: modems, cellular phones, … etc.
• Home Appliances: CD player, VCR, microwave oven, … etc.
• Control Systems: Automobile anti-lock braking systems, robotics, satellite control …etc.

1.1.1.Characteristics of Embedded Systems


Embedded systems share a unique set of characteristics, each of which imposes a specific set of design
constraints on embedded systems designers. The challenge in designing embedded systems is to conform
to the specific set of constraints for the application.

Integrated: A common characteristic of an embedded system is that it consists of communicating
processes executing on several CPUs or ASICs connected by communication links. The reason for this is
economy: several inexpensive 4- or 8-bit microcontrollers may be cheaper than one 32-bit processor, and
even after adding the cost of the communication links, this approach may be preferable. Multiple
processors are usually required to handle multiple time-critical tasks, and the devices under the control
of the embedded system may themselves be physically distributed.

Application-dependent processor: Embedded systems are not general-purpose computers; each is built for
a specific application. Many of the job characteristics are known before the software and hardware are
designed, which allows the designer to focus on the specific design constraints of a well-defined
application. As such, there is limited user reprogrammability. Some embedded systems, however, require
the flexibility of reprogrammability; programmable DSPs* are common for such applications.

Sophisticated functionality: The hardware and software co-design model re-emphasizes the
fundamental characteristic of embedded systems: they are function-specific. An embedded system is
usually built on custom hardware and software, so using this development model is both permissible
and beneficial.

Another typical characteristic of embedded systems is their method of software development, called
cross-platform development, for both system and application software. Software for an embedded
system is developed on one platform but runs on another. In this context, the platform is the combination
of hardware (such as particular type of processor), operating system, and software development tools
used for further development.

[Key: *DSP - digital signal processor*]

Control of physical systems: One of the main reasons for embedding a computer is to interact with
the environment. This is often done by monitoring and controlling external machinery. Embedded
computers transform the analog signals from sensors into digital form for processing. Outputs must be
transformed back to analog signal levels. When controlling physical equipment, large current loads may
need to be switched in order to operate motors and other actuators. To meet these needs, embedded
systems may need large computer circuit boards with many non-digital components. Embedded system
designers must carefully balance system tradeoffs among analog components, power, mechanical,
network, and digital hardware with corresponding software.
Small and low weight: Many embedded computers are physically located within some larger system, so
the form factor of the embedded system may be dictated by aesthetics or by the physical constraints of
the enclosing system. For example, the electronics for a missile may have to fit inside the nose of the
missile. One of the challenges for embedded systems designers is to develop non-rectangular geometries
for certain solutions. Weight can also be a critical constraint: embedded automobile control systems, for
example, must be lightweight for fuel economy, and portable CD players must be lightweight for portability.
Cost sensitivity: Cost is an issue in most systems, but the sensitivity to cost changes can vary
dramatically in embedded systems. This is mainly due to the effect computer costs have on
profitability, and is more a function of the size of a cost change relative to the total system cost.
Power management: Embedded systems have strict constraints on power. Given the portability
requirements of many embedded systems, the need to conserve power is important to maintain battery
life as long as possible. Minimization of heat production is another obvious concern for embedded
systems.

1.1.2.Modeling of embedded systems


As mentioned earlier, a typical embedded system responds to the environment via sensors and
controls the environment using actuators. This requires embedded systems to run at the speed of the
environment. This characteristic of embedded systems is called "reactive". Reactive computation means
that the system (primarily the software component) executes in response to external events. External
events can be either periodic or aperiodic. Periodic events make it easier to schedule processing to
guarantee performance. Aperiodic events are harder to schedule. The maximum event arrival rate must
be estimated in order to accommodate worst case situations. Most embedded systems have a significant
reactive component. One of the biggest challenges for embedded system designers is performing an
accurate worst case design analysis on systems with statistical performance characteristics (e.g., cache

memory on a DSP or other embedded processor). Many embedded systems have a significant
requirement for real time operation in order to meet external I/O and control stability requirements.
Many real-time systems are also reactive systems.

1.1.3. Requirements for Embedded Systems


Embedded systems are unique in several ways, as described above. When designing embedded systems,
there are several categories of requirements that should be considered;
• Functional Requirements
• Temporal Requirements (Timeliness)
• Dependability Requirements
Functional Requirements: Functional requirements describe the type of processing the system will
perform. This processing varies, based on the application. Functional requirements include the following:
• Data collection requirements
• Sensing requirements
• Signal conditioning requirements
• Alarm monitoring requirements
• Direct digital control requirements
• Actuator control requirements
Another requirement is man-machine interaction: informing the operator of the current state of the
controlled object. These interfaces can be as simple as a flashing LED or as complex as a GUI-based
system. They include the ways that embedded systems assist the operator in controlling the
object or system.
Temporal Requirements: Embedded systems have many tasks to perform, each with its own deadline.
Temporal requirements define how strictly these time-based tasks must complete.
Examples include minimal latency jitter and minimal error-detection latency.
Temporal requirements can be very tight (for example, control loops) or less stringent (for example,
response time in a user interface).
Dependability Requirements: Most embedded systems also have a set of dependability
requirements. Examples of dependability requirements include:
• Reliability; this is a complex concept that should always be considered at the system rather than the
individual component level. There are three dimensions to consider when specifying system
reliability;
 Hardware reliability; probability of a hardware component failing
 Software reliability; probability that a software component will produce an incorrect result.

 Operator reliability; the probability that the operator of a system will make an error.
There are several metrics used to determine system reliability:
 Probability of failure on demand; the likelihood that the system will fail when a service request is
made.
 Rate of failure occurrence; the frequency with which unexpected behavior is likely to occur.
 Mean Time to Failure (MTTF); the average time between observed system failures.
• Safety; describes the critical failure modes and what types of certification are required for the
system.
• Maintainability; describes constraints on the system, such as the Mean Time to Repair (MTTR).
• Availability; the probability that the system is available for use at a given time, measured as
Availability = MTTF / (MTTF + MTTR).
• Security; these requirements are often specified as “shall not” requirements that define
unacceptable system behavior rather than required system functionality.
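The availability formula above can be checked with a short computation; the MTTF and MTTR figures below are made-up numbers chosen only for illustration.

```python
# Availability = MTTF / (MTTF + MTTR), using made-up figures for illustration.

def availability(mttf_hours, mttr_hours):
    """Fraction of time the system is usable, given mean time to failure and repair."""
    return mttf_hours / (mttf_hours + mttr_hours)

# A system that runs 999 hours between failures and takes 1 hour to repair
# is available 99.9% of the time.
a = availability(999.0, 1.0)
print(a)  # 0.999
```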

1.1.4.Types of Embedded Systems


Embedded systems can be classified into different types based on their performance and functional
requirements, and on the performance of the microcontroller.
1. Based on performance and functional requirements, they are classified into four:
 Standalone embedded systems
 Real time embedded systems
 Networked embedded systems
 Mobile embedded systems (see diagram below)

2. Embedded Systems are classified into three types based on the performance of the
microcontroller such as:
 Small scale embedded systems
 Medium scale embedded systems
 Sophisticated embedded systems
Standalone Embedded Systems: these do not require a host system such as a computer; they work by
themselves. A standalone system takes input from its ports, either analog or digital; processes,
calculates, and converts the data; and presents the result through a connected device, which it either
controls, drives, or displays. Examples of standalone embedded systems are mp3 players, digital
cameras, video game consoles, microwave ovens, and temperature measurement systems.
Real-Time Embedded Systems: as defined in the section above, a real-time system gives the required
output within a particular time. These embedded systems follow time deadlines for the completion of a
task. Real-time embedded systems are classified into two types: soft and hard real-time systems.
Networked Embedded Systems: these embedded systems are connected to a network to access
resources. The network can be a LAN, a WAN, or the internet, and the connection can be wired or
wireless. This type of embedded system is the fastest-growing area of embedded system
applications. The embedded web server is a type of system wherein all embedded devices are
connected to a web server and accessed and controlled through a web browser. An example of a LAN-
networked embedded system is a home security system wherein all sensors are connected and run on
the TCP/IP protocol.
Mobile Embedded Systems: these are used in portable embedded devices like cell phones, digital
cameras, mp3 players, and personal digital assistants. The basic limitation of these devices is their
constrained memory and other resources.
Small Scale Embedded Systems: these are designed with a single 8- or 16-bit microcontroller and may
even be battery powered. For developing embedded software for small scale embedded systems, the
main programming tools are an editor, an assembler, a cross assembler, and an integrated development
environment (IDE).
Medium Scale Embedded Systems: these are designed with a single 16- or 32-bit microcontroller, RISC
processor, or DSP. These embedded systems have both hardware and software complexities. For
developing embedded software for medium scale embedded systems, the main programming tools are
C, C++, Java, Visual C++, an RTOS, a debugger, a source code engineering

7|Lecture Handout: (RTES) Chapter 1 – Prepared by Ayele S.


Hawassa University - IOT Faculty of Informatics Department of Computer Science
tool, simulator and IDE.
Sophisticated Embedded Systems: these have enormous hardware and software complexities and may
need ASIPs, IPs, PLAs, or scalable or configurable processors. They are used for cutting-edge
applications that need hardware and software co-design, with components that have to be assembled
into the final system.

1.1.5.Applications of Embedded Systems


Embedded systems are used in different applications like automobiles, telecommunications,
smart cards, missiles, satellites, computer networking and digital consumer electronics.

Embedded System Initialization


It takes just minutes for a developer to compile and run a "Hello World" application on a non-
embedded system. For an embedded developer, on the other hand, the task is not so trivial: it might
take days before seeing a successful result. This process can be a frustrating experience for a
developer new to embedded system development. Booting the target system, whether a third-party
evaluation board or a custom design, can be a mystery to many newcomers. Indeed, it is daunting to
pick up a programmer's reference manual for the target board and pore over tables of memory
addresses and registers, or to review the hardware component interconnection diagrams, wondering
what it all means, what to do with the information (some of which makes little sense), and how to
relate the information to running an image on the target system.
Questions to resolve at this stage are:
• how to load the image onto the target system,
• where in memory to load the image,
• how to initiate program execution, and
• how the program produces recognizable output.
Answering these questions when designing an embedded system demystifies the booting and
initialization process and, hopefully, reduces frustration.

1.1.6.The Future of Embedded Systems


Until the early 1990s, embedded systems were generally simple, autonomous devices with long
product lifecycles. In recent years, however, the embedded industry has experienced dramatic
transformation, as reported by the Gartner Group, an independent research and advisory firm, as well
as by other sources:
 Product market windows now dictate feverish six- to nine-month turnaround cycles.
 Globalization is redefining market opportunities and expanding application space.
 Connectivity is now a requirement rather than a bonus in both wired and emerging wireless
technologies.
 Electronics-based products are more complex.
 Interconnecting embedded systems are yielding new applications that are dependent on
networking infrastructures.
 The processing power of microprocessors is increasing at the rate predicted by Moore's Law,
which states that the number of transistors per integrated circuit doubles every 18 months.
If past trends give any indication of the future, then as technology evolves, embedded software will
continue to proliferate into new applications and lead to smarter classes of products. With an ever-
expanding marketplace fortified by growing consumer demand for devices that can virtually run
themselves as well as the seemingly limitless opportunities created by the Internet, embedded systems
will continue to reshape the world for years to come.

1.2.Overview of Real-Time systems


A real-time system is a system that is required to react to stimuli from the environment (including the
passage of physical time) within time intervals dictated by the environment. The Oxford dictionary
defines a real-time system as “Any system in which the time at which output is produced is
significant”. This is usually because the input corresponds to some movement in the physical world,
and the output has to relate to that same movement. The lag from input time to output time must be
sufficiently small for acceptable timeliness. Another way of thinking of real-time systems is any

information processing activity or system which has to respond to externally generated input stimuli
within a finite and specified period. Generally, a real-time system is one that maintains a
continuous, timely interaction with its environment (Figure 4).
Correctness of a computation depends not only upon its results but also upon the time at which its
outputs are generated. A real-time system must satisfy bounded response time constraints or suffer
severe consequences.
If the consequences consist of a degradation of performance but not failure, the system is referred to
as a soft real-time system (e.g., the time-adjusting system on networked computers), whereas if the
consequences are system failure, the system is referred to as a hard real-time system (e.g., an
emergency patient management system in a hospital).
There are two types of real-time systems: reactive and embedded. Reactive real-time system
involves a system that has constant interaction with its environment; (E.g. a pilot controlling aircraft).
An embedded real-time system is used to control specialized hardware that is installed within a larger
system; (E.g. a microprocessor that controls the fuel-to-air mixture for automobiles).

Figure 4: A real-time system interacts with the environment


Real time is a level of computer responsiveness that a user senses as sufficiently immediate or that
enables the computer to keep up with some external process (for example, to present visualizations of
the weather as it constantly changes). Real-time is an adjective pertaining to computers or processes
that operate in real time. Real time describes a human rather than a machine sense of time.
Examples of real-time systems include:
• Software for cruise missiles
• Heads-up cockpit displays
• Airline reservation systems
• Industrial process control
• Banking ATMs
Real-time systems can also be found in many industries:
• Defense systems
• Telecommunication systems
• Automotive control
• Signal processing systems
• Radar systems
• Automated manufacturing systems
• Air traffic control
• Satellite systems
• Electrical utilities

Real-Time Event Characteristics
Real-time events fall into one of three categories: asynchronous, synchronous, or isochronous.
• Asynchronous events: these are entirely unpredictable. For example, consider a user placing a telephone
call: as far as the telephone company is concerned, the action of making a phone call cannot be predicted.
• Synchronous events: are predictable and occur with precise regularity if they are to occur. For example, the
audio and video in a movie take place in synchronous fashion.
• Isochronous events: occur with regularity within a given window of time. For example, audio bytes in a
distributed multimedia application must appear within a window of time when the corresponding video stream
arrives. Isochronous is a sub-class of asynchronous.
Real-time systems are different from time-shared systems in several ways (Table 1):
 predictably fast response to urgent events;
 a high degree of schedulability; the timing requirements of the system must be satisfied at high degrees of
resource usage;
 stability under transient overload; when the system is overloaded by events and it is impossible to meet all
deadlines, the deadlines of selected critical tasks must still be guaranteed.
Table 1: Real-time systems are fundamentally different from time-shared systems

Metric         | Time-shared systems   | Real-time systems
Capacity       | High throughput       | Schedulability; the ability of system tasks to meet all deadlines
Responsiveness | Fast average response | Ensured worst-case latency; latency is the worst-case response time to events
Overload       | Fairness              | Stability; under overload conditions, the system can meet its important deadlines even if other deadlines cannot be met

1.2.1. Characteristics of Real-time Systems
Real-time systems have many special characteristics which are inherent or imposed. This section will discuss
some of these important characteristics.
1. Real-time systems must produce correct computational results, called logical or functional correctness, and
these computations must conclude within a predefined period, called timing correctness.
2. Real-time systems also have substantial knowledge of the environment of the controlled system and of the
applications running on it. This is one reason why many real-time systems are said to be deterministic: in
those systems, the response time to a detected event is bounded, and the action (or actions) taken in
response to an event is known a priori. A deterministic real-time system implies that each component of the
system must have deterministic behavior that contributes to the overall determinism of the system. As can
be seen, a deterministic real-time system can be less adaptable to a changing environment, and this lack of
adaptability can result in a less robust system. The levels of determinism and robustness must be balanced;
the method of balancing the two is system and application specific.
3. Real-time systems undergo constant maintenance and enhancement during their lifetimes. They must
therefore be extensible.

1.2.2.Types of Real time Tasks


A real-time system must guarantee a response within a specified timing constraint, i.e., the system should
meet the specified deadline. Examples include flight control systems, real-time monitors, etc.
There are two types of tasks in real-time systems:

1. Periodic tasks
2. Dynamic tasks
Periodic Tasks: In periodic tasks, jobs are released at regular intervals. A periodic task is one that repeats
itself after a fixed time interval. A periodic task is denoted by a four-tuple:
Ti = < Φi, Pi, ei, Di >
Where,
 Φi – is the phase of the task. Phase is the release time of the first job in the task. If the phase is not
mentioned then the release time of the first job is assumed to be zero.
 Pi – is the period of the task i.e. the time interval between the release times of two consecutive jobs.
 ei – is the execution time of the task.
 Di – is the relative deadline of the task.
For example: Consider the task Ti with period = 5 and execution time = 3; Phase is not given so, assume the
release time of the first job as zero. So the job of this task is first released at t = 0 then it executes for 3s and then
the next job is released at t = 5 which executes for 3s and then the next job is released at
t = 10. So jobs are released at t = 5k where k = 0, 1, . . ., n

Hyper period of a set of periodic tasks is the least common multiple of periods of all the tasks in that set. For
example, two tasks T1 and T2 having period 4 and 5 respectively will have a hyper period,
H = lcm(p1, p2) = lcm(4, 5) = 20. The hyper period is the time after which the pattern of job release times
starts to repeat.
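The release-time pattern and the hyper period above can be reproduced in a few lines. This is a sketch that follows the four-tuple notation Ti = <Φi, Pi, ei, Di> used here; only the phase and period matter for release times.

```python
# Release times of a periodic task and the hyper period of a task set.
from functools import reduce
from math import lcm

def release_times(phase, period, upto):
    """Release times of a periodic task: phase + k*period, for k = 0, 1, ..."""
    return [phase + k * period for k in range((upto - phase) // period + 1)]

def hyper_period(periods):
    """Least common multiple of all task periods in the set."""
    return reduce(lcm, periods)

# Task Ti with period 5 and phase 0: jobs are released at t = 0, 5, 10, ...
releases = release_times(phase=0, period=5, upto=10)
print(releases)  # [0, 5, 10]

# Tasks T1 and T2 with periods 4 and 5: H = lcm(4, 5) = 20
H = hyper_period([4, 5])
print(H)  # 20
```

After H = 20 time units, the joint release pattern of T1 and T2 repeats, which is why schedulability analyses often only examine one hyper period.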

Dynamic Tasks: a dynamic task is a sequential program that is invoked by the occurrence of an event. An
event may be generated by processes external to the system or by processes internal to the system.
Dynamically arriving tasks can be categorized based on their criticality and on knowledge about their
occurrence times. These categories are aperiodic and sporadic tasks.
Aperiodic Tasks - In this type of task, jobs are released at arbitrary time intervals i.e. randomly. Aperiodic tasks
have soft deadlines or no deadlines.
Sporadic Tasks - They are similar to aperiodic tasks, i.e., they repeat at random instants. The only difference is
that sporadic tasks have hard deadlines. A sporadic task is denoted by a three-tuple:
Ti =(ei, gi, Di)
Where
ei – the execution time of the task.
gi – the minimum separation between the occurrence of two consecutive instances of the task.
Di – the relative deadline of the task

Jitter: Sometimes the actual release time of a job is not known; it is only known that ri lies in a range
[ ri-, ri+ ]. This range is known as the release-time jitter. Here ri- is how early a job can be released and ri+
is how late a job can be released. Similarly, only the range [ ei-, ei+ ] of the execution time of a job may be
known, where ei- is the minimum amount of time required by a job to complete its execution and ei+ is the
maximum amount of time required.
Precedence Constraint of Jobs: Jobs in a task are independent if they can be executed in any order. If there is a
specific order in which jobs in a task have to be executed then jobs are said to have precedence constraints. For
representing precedence constraints of jobs a partial order relation < is used. This is called precedence relation. A
job Ji is a predecessor of job Jj if Ji < Jj i.e. Jj cannot begin its execution until Ji completes. Ji is an immediate
predecessor of Jj if Ji < Jj and there is no other job Jk such that
Ji < Jk < Jj. Ji and Jj are independent if neither Ji < Jj nor Jj < Ji is true.
An efficient way to represent precedence constraints is by using a directed graph G = (J, <) where J is the set of
jobs. This graph is known as the precedence graph. Jobs are represented by vertices of the graph and precedence
constraints are represented using directed edges. If there is a directed edge from Ji to Jj then it means that Ji is
the immediate predecessor of Jj. For example: Consider a task T having 5 jobs J1, J2, J3, J4, and J5 such that
J2 and J5 cannot begin their execution until J1 completes and there are no other constraints.
The precedence constraints for this example are: J1 < J2 and J1 < J5


This is known as a precedence graph.


Set representation of precedence graph:
1. < (1) = { }
2. < (2) = {1}
3. < (3) = { }
4. < (4) = { }
5. < (5) = {1}
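The predecessor sets above can be derived programmatically from the graph's edge list. This is a sketch using the five-job example from the text (edges J1→J2 and J1→J5); the function and variable names are illustrative.

```python
# Predecessor sets of a precedence graph, derived from its directed edges.
# Edges follow the example above: J1 -> J2 and J1 -> J5; J3 and J4 are unconstrained.

def predecessor_sets(jobs, edges):
    """Map each job to the set of its immediate predecessors."""
    preds = {j: set() for j in jobs}
    for a, b in edges:   # edge (a, b) means a is an immediate predecessor of b
        preds[b].add(a)
    return preds

jobs = [1, 2, 3, 4, 5]
edges = [(1, 2), (1, 5)]
preds = predecessor_sets(jobs, edges)
print(preds)  # {1: set(), 2: {1}, 3: set(), 4: set(), 5: {1}}
```

The output matches the set representation above: J2 and J5 each have {1} as their predecessor set, so neither can begin until J1 completes, while J1, J3, and J4 have empty sets and are free to start.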

Consider another example where a precedence graph is given and you have to find the precedence constraints:

Exercise: derive the precedence constraints from this graph.

1.2.3. Modeling timing constraints of real time systems
Timing constraints are a vital attribute of real-time systems: they determine the total correctness of the
results. The correctness of results in a real-time system depends not only on logical correctness but also on
the result being obtained within the time constraint. There may be several events happening in a real-time
system, and these events are scheduled by schedulers using timing constraints.
Classification of Timing Constraints
Timing constraints associated with a real-time system are classified to identify the different types of timing
constraints present. Timing constraints are broadly classified into two categories:
1. Performance Constraints
The constraints enforced on the response of the system are known as Performance Constraints. This basically
describes the overall performance of the system. This shows how quickly and accurately the system is
responding. It ensures that the real-time system performs satisfactorily.
2. Behavioral Constraint
The constraint enforced on the stimuli generated by the environment is known as Behavioral Constraints. This
basically describes the behavior of the environment. It ensures that the environment of a system is well behaved.
Further, both performance and behavioral constraints are classified into three categories: Delay Constraint,
Deadline Constraint, and Duration Constraint. These are explained as follows.
1. Delay Constraint – describes the minimum time interval between occurrences of two consecutive events in
the real-time system. If an event occurs before the delay constraint has elapsed, it is called a delay violation.
The time interval between occurrences of two events should be greater than or equal to the delay constraint:
if D is the actual time interval between the occurrences of two events and d is the delay constraint, then D >= d.

2. Deadline Constraint – a deadline constraint describes the maximum time interval between occurrences of
two consecutive events in the real-time system. If an event occurs after the deadline constraint, the result
of the event is considered incorrect. The time interval between the occurrences of two events should be less
than or equal to the deadline constraint: if D is the actual time interval between the occurrences of two events
and d is the deadline constraint, then D <= d.
3. Duration Constraint – describes the duration of an event in the real-time system, that is, the minimum and
maximum time period of the event. On this basis it is further classified into two types:
Minimum Duration Constraint: after the initiation of an event, it cannot stop before a certain
minimum duration has passed.
Maximum Duration Constraint: after the start of an event, it must end before a certain
maximum duration elapses.

Summary Points
 Real-time systems are characterized by the fact that timing correctness is just as important as functional or
logical correctness.
 The severity of the penalty incurred for not satisfying timing constraints differentiates hard real-time
systems from soft real-time systems.
 Real-time systems have a significant amount of application awareness similar to embedded systems.
 Real-time embedded systems are those embedded systems with real-time behaviors.
 An embedded system is built for a specific application. As such, the hardware and software components
are highly integrated, and the development model is the hardware and software co-design model.
 Embedded systems are generally built using embedded processors.
 An embedded processor is a specialized processor, such as a DSP, that is cheaper to design and produce,
can have built-in integrated devices, is limited in functionality, produces low heat, consumes low power,
and does not necessarily have the fastest clock speed but meets the requirements of the specific
applications for which it is designed.
Chapter 2: Embedded System Architecture


2.1. Hardware architecture for Embedded system
A typical embedded system has two main parts, embedded hardware and embedded software.
Embedded hardware is based around microprocessors and microcontrollers, and also includes memory, buses,
input/output, and controllers, whereas embedded software includes embedded operating systems, different
applications, and device drivers. Basically, two types of architecture are used in embedded systems: Harvard
architecture and Von Neumann architecture. When data and code lie in different memory blocks, the
architecture is referred to as Harvard architecture; when data and code lie in the same memory block, the
architecture is referred to as Von Neumann architecture. (Read for more detail about the two architectures.)
The architecture of an embedded system includes sensors, analog-to-digital converters, memory, a processor,
digital-to-analog converters, actuators, etc.
The figure below illustrates the basic architecture of embedded systems:

Figure 2.1: Typical Architecture of an Embedded System

1. Peripheral Devices in Embedded System: an embedded system communicates with the outside world via its
peripherals, such as the following:
 Serial Communication Interfaces (SCI) like RS-232, RS-422, RS-485, etc.
 Synchronous Serial Communication Interfaces like I2C, SPI, SSC, and ESSI (Enhanced Synchronous Serial Interface)
 Universal Serial Bus (USB)
 Multi Media Cards (SD Cards, Compact Flash, etc.)
 Networks like Ethernet, LonWorks, etc.
 Timers like PLL(s), Capture/Compare and Time Processing Units
 Discrete IO aka General Purpose Input/Output (GPIO)
 Analog to Digital/Digital to Analog converters (ADC/DAC), and others
Microprocessors in Embedded Systems: the processor is the heart of an embedded system. It is the basic unit that
takes inputs and produces an output after processing the data. For an embedded system designer, knowledge of
both microprocessors and microcontrollers is necessary.

1) Processors in a System – a processor has two essential units: the Program Flow Control Unit (CU) and the
Execution Unit (EU).

The CU includes a fetch unit for fetching instructions from memory. The EU has circuits that implement the
instructions pertaining to data transfer operations and data conversion from one form to another.

The EU includes the Arithmetic and Logical Unit (ALU) and also the circuits that execute instructions for
program control tasks such as interrupts or jumps to another set of instructions.

A processor runs the fetch-and-execute cycle, executing the instructions in the same sequence as they are fetched from
memory.

2) Types of Processors – processors can be of the following categories:
 General Purpose Processor (GPP)
o Microprocessor
o Microcontroller
o Embedded Processor
o Digital Signal Processor
o Media Processor
 Application Specific System Processor (ASSP)
 Application Specific Instruction-set Processors (ASIPs)
 GPP core(s) or ASIP core(s) on either an Application Specific Integrated Circuit (ASIC) or a Very Large
Scale Integration (VLSI) circuit.
Microprocessor

A microprocessor is a single VLSI chip having a CPU. In addition, it may also have other units, such as caches, a
floating-point arithmetic unit, and pipelining units, that help in faster processing of instructions.

Earlier-generation microprocessors' fetch-and-execute cycle was guided by a clock frequency on the order of ~1
MHz. Processors now operate at clock frequencies of 2 GHz.

Figure 2.2: A simple block diagram of a microprocessor
Microcontroller
A microcontroller is a single-chip VLSI unit (also called a microcomputer) which, although having limited
computational capabilities, possesses enhanced input/output capability and a number of on-chip functional units.
Microcontrollers may be called computers-on-chip. A microcontroller is a combination of a controller, internal ROM,
RAM, and parallel and serial ports. Microcontrollers are dedicated devices embedded within an application,
e.g., as an engine controller in automobiles, or as an exposure and focus controller in cameras. See the figure below.

AVR and ARM both come under the family of micro-controllers, but ARM can be used as both a microcontroller
and a microprocessor. ARM and AVR micro-controllers differ from each other in terms of
architecture, instruction set, speed, cost, memory, power consumption, bus width, etc. Now
let's understand in detail how they differ from each other.
1. AVR micro-controller: manufactured by Atmel Corporation since 1996. It is based on the RISC
Instruction Set Architecture (ISA) and is also called Advanced Virtual RISC. The AT90S8515 was the initial micro-
controller belonging to the AVR family. The AVR micro-controller is the most popular category of controller, and it is cheap. It
is used in many robotic applications.

2. ARM micro-controller: introduced by Acorn Computers and manufactured by Apple,
Nvidia, Qualcomm, Motorola, ST Microelectronics, Samsung Electronics, TI, etc. It is based on the RISC
Instruction Set Architecture (ISA) and is also called Advanced RISC Machine. It is the most popular
micro-controller, and most industries use it for embedded systems, as it provides a large set of features and is well
suited to producing devices with an excellent appearance.
Detailed differences between AVR and ARM:

1. AVR refers to Advanced Virtual RISC; ARM refers to Advanced RISC Machine.
2. AVR has a bus width of 8 or 32 bits; ARM has a bus width of 32 bits and is also available in 64 bits.
3. AVR uses the UART, USART, SPI, and I2C communication protocols; ARM uses the SPI, CAN, Ethernet, I2S, DSP, SAI, UART, and USART communication protocols.
4. AVR's speed is 1 clock per instruction cycle; ARM's speed is also 1 clock per instruction cycle.
5. AVR is manufactured by Atmel; ARM is manufactured by Apple, Nvidia, Qualcomm, Samsung Electronics, TI, etc.
6. AVR uses Flash, SRAM, and EEPROM memory; ARM uses Flash, SDRAM, and EEPROM memory.
7. The AVR family includes Tiny, Atmega, Xmega, and special-purpose AVR; the ARM family includes the ARMv4, 5, 6, and 7 series.
8. AVR is cheap and effective; ARM provides high-speed operation.
9. Popular AVR micro-controllers include the Atmega8, 16, and 32 (Arduino community); popular ARM micro-controllers include the LPC2148 and ARM Cortex-M0 to ARM Cortex-M7, etc.
2.2. ARM Cortex M0+ Hardware Overview
ARM Cortex-M0+ is a low-cost and low-power processor which lays a foundation for the Internet of
Things. On March 13, 2012, in Cambridge, UK, ARM announced the ARM® Cortex™-M0+ processor, the world's most
energy-efficient microprocessor. The Cortex-M0+ processor has been optimized to deliver ultra-low-power, low-
cost MCUs for intelligent sensors and smart control systems, including home appliances, medical monitoring,
metering, lighting, and power and motor control devices. The Cortex-M0+ processor is 32-bit, the latest
addition to the ARM Cortex processor family, and consumes just 9 µA/MHz on a low-cost 90 nm low-power process,
around one third of the energy of any 8- or 16-bit processor available today, while delivering significantly higher
performance.
This industry-leading combination of low power and high performance provides users of legacy 8- and 16-bit
architectures with an ideal opportunity to migrate to 32-bit devices, thereby delivering increased intelligence to
everyday devices, without sacrificing power consumption or area.
The Cortex-M0+ processor features enable the creation of smart, low-power, microcontrollers to provide
efficient communication, management and maintenance across a multitude of wirelessly connected devices, a
concept known as the 'Internet of Things'.
"The Internet of Things will change the world as we know it, improving energy efficiency, safety, and
convenience," said Tom R. Halfhill, a senior analyst with the Linley Group and senior editor of Microprocessor
Report. "Ubiquitous network connectivity is useful for almost everything - from adaptive room lighting and
online video gaming to smart sensors and motor control. But it requires extremely low-cost, low-power
processors that can still deliver good performance. The ARM Cortex-M0+ processor brings 32-bit horsepower to
flyweight chips, and it will be suitable for a broad range of industrial and consumer applications."
The new processor builds on the successful low-power and silicon-proven Cortex-M0 processor, which has been
licensed more than 50 times by leading silicon vendors such as NXP, and has been redesigned from the ground up to
add a number of significant new features. These include single-cycle I/O to speed access to GPIO and
peripherals, improved debug and trace capability, and a 2-stage pipeline to reduce the number of cycles per
instruction (CPI) and improve Flash accesses, further reducing power consumption.
The Cortex-M0+ processor takes advantage of the same easy-to-use, C-friendly programmer's model, and is
binary compatible with existing Cortex-M0 processor tools and RTOSs. Along with all Cortex-M series
processors, it enjoys full support from the ARM Cortex-M ecosystem, and software compatibility enables simple
migration to the higher-performance Cortex-M3 and Cortex-M4 processors.
Early licensees of the Cortex-M0+ processor include Freescale and NXP Semiconductors.
"We're excited to further strengthen our relationship with ARM as a lead partner in the definition, and first
licensee of the smallest, lowest-power ARM Cortex-M series processor yet," said Dr. Reza Kazerounian, senior
vice president and general manager of Freescale's Automotive, Industrial & Multi-Market Solutions group. "The
addition of products built on the Cortex M0+ processor will make our fast-growing Kinetis MCU line one of the
industry's most scalable portfolios based on the ARM Cortex architecture. With the ability to reuse code, higher
performance and improved energy efficiency, the Cortex M0+ processor will enable designers to transition from
legacy 8-bit and 16-bit proprietary architectures to our new Kinetis devices, without sacrificing cost and ease of
use benefits."
"NXP is the only MCU vendor to have adopted the complete ARM Cortex-M processor series, and we're excited
to be able to add the Cortex-M0+ processor to our portfolio," said Alexander Everke, Executive Vice President
and General Manager of High-Performance Mixed-Signal businesses, NXP Semiconductors. "We have already
proven the success of our Cortex-M0 processor portfolio, with over 70 part types shipping in high volume today;
this new Cortex-M0+ processor further accelerates our momentum into the 8/16-bit market."
"The Cortex-M0+ processor is yet another demonstration of ARM's low-power leadership and its commitment to
drive the industry forward towards ever lower power consumption," said Mike Inglis, EVP and GM, Processor
Division, ARM. "With our expertise in low-power technology, we have worked closely with our Partners on the
definition of the new processor to ensure that it can enable the low-cost devices of today, while also unlocking
the potential benefits delivered by the Internet of Things."
 The Arm® Cortex®-M0+ is the most energy-efficient Arm® processor available for embedded
applications with design constraints. It features one of the smallest silicon footprints and minimal code
size, allowing developers to achieve 32-bit performance at 16- and 8-bit price points. The low gate count of
the processor enables deployment in applications where simple functions are required.
 The Cortex®-M0+ brings additional features to the Arm® Cortex®-M0, as well as performance
improvements in the CPU (2.46 CoreMark®/MHz compared to 2.33 CoreMark®/MHz for the M0 core).
 The Cortex®-M0+ integrates a Memory Protection Unit (MPU), a fast single-cycle I/O interface, and a
Micro Trace Buffer (MTB), in addition to the features existing in the Arm® Cortex®-M0.

Figure 2.3: Comparative view of features in the ARM Cortex-M0 and -M0+ series

Ports, Registers, GPIO, Analog I/O, ADC/DAC in ARM Cortex-M0+ hardware
Ports – are gates from the central processing unit to internal and external hardware and software components. A port
is the point where internal data from the MCU chip comes out or external data goes in. Ports are present in the form of
pins of the IC. Most of the pins are dedicated to this function, and the other pins are used for the power supply, clock
source, etc. The CPU communicates with these components, reading from them or writing to them, e.g., to the timers
or the parallel ports. Ports have a fixed address over which the CPU communicates. The ARM Cortex-M0+
processor (revision r0p1) runs at frequencies of up to 30 MHz with a single-cycle multiplier and a fast single-
cycle I/O port.

Registers – are a type of computer memory used to quickly accept, store, and transfer data and instructions that
are being used immediately by the CPU. A processor register may hold an instruction, a storage address, or
data (such as a bit sequence or individual characters). ARM processors provide general-purpose and special-
purpose registers, and some additional registers are available in privileged execution modes. As an example of a
general-purpose register file, the AVR's fast-access register file contains 32 x 8-bit general-purpose working
registers with a single clock cycle access time. This allows single-cycle Arithmetic Logic Unit (ALU) operation.
Six of the 32 registers can be used as three 16-bit indirect address register pointers for data space addressing,
enabling efficient address calculations. One of these address pointers can also be used as an address pointer for
look-up tables in Flash program memory.

In all ARM processors, the following registers are available and accessible in any processor mode:
 Thirteen general-purpose registers, R0-R12.
 One Stack Pointer (SP).
 One Link Register (LR).
 One Program Counter (PC).
 One Application Program Status Register (APSR).
Understanding GPIO
General-purpose input/output (GPIO) is the name of a microcontroller peripheral that provides the
functionality to source many signals at once (that is, in parallel). GPIO ports are designed to be very flexible, so
configuring them can be rather confusing, but using the RTE manager makes this process much simpler.
A high-speed GPIO interface connected to the ARM Cortex-M0+ I/O bus provides up to 29 General-Purpose I/O
(GPIO) pins with configurable pull-up/pull-down resistors, programmable open-drain mode, an input inverter, and a
digital filter. GPIO direction control supports independent set/clear/toggle of individual bits. For example, in the function
helloBlinky_c1v0, making an LED blink involves connecting it to a signal that alternately switches ON and
OFF. We will modify our helloBlinky_c1v0 recipe to simultaneously make all the LEDs blink rather than just
one. Each LED on the evaluation board is connected to a pin on the microcontroller, so to illuminate an LED the
microcontroller needs to provide a voltage and current similar to that of a torch battery. To source this
current, the corresponding GPIO port bit connected to the pin must be configured as an output that is switched
ON and OFF by statements in our program that write to the port output data register.
Sample code

/*------------------------------------------
 * Name:    helloBlinky.c
 * Purpose: Simultaneous MCBSTM32F400 LED Flasher
 *------------------------------------------*/
#include "stm32F4xx_hal.h"
#include "Board_LED.h"

int main (void) {
  const unsigned int Off_Code = 0x0000;
  const unsigned int On_Code  = 0x00FF;
  unsigned int i;

  LED_Initialize();                 /* LED Init */
  for (;;) {                        /* Loop forever */
    LED_SetOut (On_Code);           /* Turn LEDs on */
    for (i = 0; i < 1000000; i++)
      /* empty statement */ ;      /* Wait */
    LED_SetOut (Off_Code);          /* Turn LEDs off */
    for (i = 0; i < 1000000; i++)
      /* empty statement */ ;      /* Wait */
  }                                 /* end for */
}

The GPIO interface is a particularly important feature of a microcontroller because it is designed to be easily
integrated within user systems to drive light-emitting diodes, read the state of switches, or connect to other
peripheral interface circuits. Early I/O ports were prewired to provide either output or input interfaces, but soon
they evolved into general-purpose interfaces that could be programmed to provide either output or input
connections. Later devices included more programmable features. As GPIO is so important for microcontroller
applications, designers strive to specify as many I/O pins as possible on their devices. However, increasing
the device pin-out adds cost because the device becomes physically larger to accommodate the pins. This
motivates manufacturers to develop devices that have pins that are configured by software. As you can imagine,
configuring such a device is quite a challenge, so we're lucky that Keil's developers have provided library
functions that make this task more manageable.

GPIO ports also provide an I/O path for other peripheral functions, such as timers and digital-to-analogue
converters.
Analog I/O: Microcontrollers are often required to interface with analog signals. They must be able to convert
input analog signals, for example from a microphone or temperature sensor, into digital data. They must also be able
to convert digital signals to analog form, for example when driving a loudspeaker or DC motor.
ADC/DAC
An analog-to-digital converter (ADC) is an electronic circuit whose digital output is proportional to its analog
input. Effectively it "measures" the input voltage and gives a binary output number proportional to its size. The
input range of the ADC is usually determined by the value of a voltage reference.
The conversion is started by a digital input, here called SC; it takes a finite time, and the ADC signals on the EOC
line when the conversion is complete. The resulting data can be enabled onto a data bus using the OE line.

Figure 2.4: A simulation diagram for Analog to Digital Conversion


We can represent the digital-to-analog converter (DAC) as a block diagram with a digital input, D, and an analog
output, Vo. The output range of the DAC, Vr, is the difference between the maximum and minimum output
voltages, i.e. Vr = Vmax - Vmin.
The particular output range is usually defined by a fixed voltage reference supplied to the DAC. Digital control
lines allow a microcontroller to set up and communicate with the DAC.

Figure 2.5: A simulation diagram for Digital to Analog Conversion
2.3. Communication in embedded system: Parallel, USB/Serial, USART, SPI, TWI, Ethernet
i. Parallel Communication
Parallel communication is the process of sending/receiving multiple data bits at a time through parallel ports. A
parallel port is an interface through which a computer communicates with peripherals in a parallel manner. Data are transferred in
or out in parallel, that is, on more than one wire. A parallel port carries 1 bit on each wire, thus multiplying the
transfer rate obtainable over a single wire. There will usually be some control signals on the port that indicate
when data are ready to be sent or received. The most common type of parallel port is a printer port (e.g., a
Centronics port that transfers 8 bits at a time).

Figure 2.6 Simulation of Parallel Communication ports (Centronics)


In parallel communication, all the bits of data are transmitted simultaneously on separate communication lines, and it
is used for shorter distances. In order to transmit n bits, n wires or lines are used, so it is more costly to implement,
but it is faster than serial transmission: data can be transmitted in less time, since all bits in a word are
transmitted at the same time on different wires.

ii. USB/Serial Communication


This is the most popular of all and is used for virtually all types of connections. The bus has 4 lines: VCC, Ground,
Data+, and Data-. Universal Serial Bus (USB) is a set of interface specifications for high-speed wired
communication between electronic system peripherals and devices, with or without a PC/computer. The USB
was originally developed in 1995 by many of the industry leading companies like Intel, Compaq, Microsoft,
Digital, IBM, and Northern Telecom.
The major goal of USB was to define an external expansion bus to add peripherals to a PC in easy and simple
manner. USB offers users simple connectivity. It eliminates the mix of different connectors for different devices
like printers, keyboards, mice, and other peripherals. That means USB-bus allows many peripherals to be
connected using a single standardized interface socket. It supports all kinds of data, from slow mouse inputs to
digitized audio and compressed video.
USB sends data in serial mode, i.e. the parallel data is serialized before sending and de-serialized after receiving.
The benefits of USB are low cost, expandability, auto-configuration, hot-plugging and outstanding performance.
It also provides power to the bus, enabling many peripherals to operate without the added need for an AC power
adapter.
Various versions of USB:
USB1.0: - is the original release of USB having the capability of transferring 12Mbps, supporting up to 127
devices. This USB 1.0 specification model was introduced in January 1996.
USB1.1: - came out in September 1998. USB 1.1 is also known as full-speed USB. This version is similar to the
original release of USB; however, there are minor modifications for the hardware and the specifications. USB
version 1.1 supported two speeds, a full speed mode of 12Mbits/s and a low speed mode of 1.5Mbits/s.
USB2.0: - also known as hi-speed USB. This hi-speed USB is capable of supporting a transfer rate of up to 480
Mbps, compared to 12 Mbps of USB 1.1. That's about 40 times as fast! Wow!
USB3.0: - It is also called as Super-Speed USB having a data transfer rate of 5Gbps. That means it can deliver
over 10x the speed of today's Hi-Speed USB connections.
USB3.1: - It is also called as Super-Speed USB+ having a data transfer rate of 10Gbps.
All USB data is sent serially. USB data transfer is essentially in the form of packets of data, sent back and forth
between the host and peripheral devices. Initially all packets are sent from the host, via the root hub and possibly
more hubs, to devices. In serial communication, the data bits are transmitted serially one by one, i.e. bit by bit, on a
single communication line. It requires only one communication line, rather than n lines, to transmit data from
sender to receiver. Thus all the bits of data are transmitted on a single line in serial fashion, at lower cost, and it
suits long-distance transmission. Example: the telephone.
The data transfer rate in serial communication is measured in bits per second (bps). This is also called the Baud
Rate; baud rate and bps can be used interchangeably with respect to UART. Example: consider the total number
of bits transferred for 10 pages of text, each with 100 × 25 characters, with 8 bits per character and 1 start
& 1 stop bit. For each character the total number of bits is 10, so the total number of bits per page is 100 × 25 × 10 =
25,000. For 10 pages of data it is required to transmit 250,000 bits. Generally, baud rates of SCI are
1200, 2400, 4800, 9600, 19,200, etc. To transfer 250,000 bits at a baud rate of 9600, we need: 250,000/9,600 =
26.04 seconds (about 27 seconds).
iii. USART Communication
Universal Synchronous Asynchronous Receiver Transmitter (USART) is a kind of serial communication
protocol, categorized into synchronous and asynchronous protocols.
In synchronous communication, data is transmitted and received as a continuous stream at a constant rate.
Synchronous communication requires the clocks of the transmitting and receiving devices to be synchronized. In
many systems, such as ADCs, audio codecs, and potentiometers, transmission and reception of data occur at the
same frequency. Examples of synchronous communication are: I2C, SPI, etc.
In the case of asynchronous communication, the transmission of data requires no clock signal, and data transfer
occurs intermittently rather than in a steady stream. Handshake signals between the transmitter and receiver are
important in asynchronous communication. Examples of asynchronous communication are the Universal
Asynchronous Receiver Transmitter (UART), USB, CAN (Controller Area Network), etc.
Synchronous and asynchronous communication protocols are well-defined standards and can be implemented in
either hardware or software. In the early days of embedded systems, software implementation of I2C and SPI
was common; it was tedious work and used to require long programs. Gradually, most microcontrollers
started incorporating the standard communication protocols as hardware cores. This development in the early 90's
made the job of embedded software development easier for communication protocols.
The microcontroller of our interest, the TM4C123, supports the UART, CAN, SPI, I2C, and USB protocols. These
five communication protocols are available in most modern microcontrollers.
iv. SPI Communication
The Serial Peripheral Interface (SPI) communication protocol is a common communication protocol used by many
different devices. For example, SD card modules, RFID card reader modules, and 2.4 GHz wireless
transmitters/receivers all use SPI to communicate with microcontrollers. One unique benefit of SPI is that
data can be transferred without interruption: any number of bits can be sent or received in a continuous stream.
With I2C and UART, data is sent in packets, limited to a specific number of bits; start and stop conditions define
the beginning and end of each packet, so the data is interrupted during transmission. Devices communicating via
SPI are in a master-slave relationship. The master is the controlling device (usually a microcontroller), while the
slave (usually a sensor, display, or memory chip) takes instruction from the master. The simplest configuration
of SPI is a single-master, single-slave system, but one master can control more than one slave.
SPI is a three-wire-based communication system: one wire each for master-to-slave and slave-to-master data, and
one for clock pulses. There is an additional SS (Slave Select) line, which is mostly used when we want to
send/receive data between multiple ICs.
MOSI (Master Output/Slave Input) – line for the master to send data to the slave.
MISO (Master Input/Slave Output) – line for the slave to send data to the master.
SCLK (Clock) – line for the clock signal.
SS/CS (Slave Select/Chip Select) – line for the master to select which slave to send data to.
Figure 2.7: Simulation of SPI communication

The Serial Peripheral Interface (SPI) is a synchronous interface which allows several SPI microcontrollers to be
interconnected. In SPI, separate wires are required for the data and clock lines, and since the clock is not included
in the data stream, it must be furnished as a separate signal. The SPI may be configured either as a master or as a
slave. The four basic SPI signals (MISO, MOSI, SCK, and SS, as seen above), Vcc, and ground form the data
communication interface, so it needs 6 wires to send and receive data from a slave or master. Theoretically, SPI can
have an unlimited number of slaves. The data communication is configured in the SPI registers. SPI can deliver up
to 10 Mbps and is ideal for high-speed data communication.

Figure 2.8 SPI Communication

v. TWI Communication
The I2C bus has two wires – one for clock, and the other is the data line, which is bi-directional – this being the
reason it is also sometimes (not always – there are a few conditions) called Two Wire Interface (TWI). It is a
pretty new and revolutionary technology invented by Philips.
vi. Ethernet Communication: Remote monitoring, control, diagnostics, data collection, and data sharing are all
benefits of using Ethernet in embedded applications. Ethernet has become the networking technology of
choice in a wide range of applications; most modern embedded computing products feature at least one Ethernet
interface. A mature and open standard, Ethernet is implemented in hardware from many vendors, and features
extensive software support.
One area that has been reluctant to adopt Ethernet is real-time systems. In these applications, processing must
keep up with events in the real world – unpredictable delays in computing systems cannot be tolerated. In safety-
critical real-time systems – where errors can lead to a loss of life – the requirement for deterministic real-time
behavior becomes even more essential.
Early Ethernet systems did not deliver the deterministic forwarding required for real-time systems. Various
conditions could cause network traffic to be delayed or even dropped. However, modern Ethernet technology has
reduced or even eliminated many of the sources of unpredictable communications. Switched Ethernet networks
configured for Quality of Service (QoS) can deliver predictable, low-latency forwarding. With careful design,
Ethernet systems can meet the needs of many real-time systems. In addition, several new standards extend
Ethernet to enable deterministic Ethernet for the most demanding systems including safety-certified avionics
applications.
2.4. ATmega32 microcontroller Architecture
The Atmel®AVR® ATmega32 is a low-power CMOS 8-bit microcontroller based on the AVR enhanced RISC
architecture. By executing powerful instructions in a single clock cycle, the ATmega32 achieves throughputs
approaching 1 MIPS per MHz, allowing the system designer to optimize power consumption versus processing
speed.
The AVR core combines a rich instruction set with 32 general purpose working registers. All 32 registers are
directly connected to the Arithmetic Logic Unit (ALU), allowing two independent registers to be accessed in one
single instruction executed in one clock cycle. The resulting architecture is more code efficient while achieving
throughputs up to ten times faster than conventional CISC microcontrollers.
The ATmega32 provides the following features:
 32Kbytes of In-System Programmable Flash Program memory with Read-While-Write capabilities,
 1024bytes EEPROM, 2Kbyte SRAM, 32 general purpose I/O lines & 32 general purpose registers,
 a JTAG interface for Boundary-scan, On-chip Debugging support and programming,
 three flexible Timer/Counters with compare modes, Internal and External Interrupts,
 a serial programmable USART, a byte oriented Two-wire Serial Interface, an 8-channel, 10-bit ADC with optional
differential input stage with programmable gain (TQFP package only),
 a programmable Watchdog Timer with Internal Oscillator,
 An SPI serial port and six software selectable power saving modes.
 The Idle mode stops the CPU while allowing the USART, Two-wire interface, A/D Converter, SRAM,
Timer/Counters, SPI port, and interrupt system to continue functioning.

 The Power-down mode saves the register contents but freezes the Oscillator, disabling all other chip functions until
the next External Interrupt or Hardware Reset.
 In Power-save mode, the Asynchronous Timer continues to run, allowing the user to maintain a timer base while the
rest of the device is sleeping.
 The ADC Noise Reduction mode stops the CPU and all I/O modules except Asynchronous Timer and ADC, to
minimize switching noise during ADC conversions.
 In Standby mode, the crystal/resonator Oscillator is running while the rest of the device is sleeping. This allows very
fast start-up combined with low-power consumption.
 In Extended Standby mode, both the main Oscillator and the Asynchronous Timer continue to run.
The device is manufactured using Atmel's high density nonvolatile memory technology. The On-chip ISP Flash
allows the program memory to be reprogrammed in-system through an SPI serial interface, by a conventional
nonvolatile memory programmer, or by an On-chip Boot program running on the AVR core. The boot program
can use any interface to download the application program in the Application Flash memory. Software in the
Boot Flash section will continue to run while the Application Flash section is updated, providing true Read-
While-Write operation. By combining an 8-bit RISC CPU with In-System Self-Programmable Flash on a
monolithic chip, the Atmel ATmega32 is a powerful microcontroller that provides a highly-flexible and cost-
effective solution to many embedded control applications.
Pin Descriptions
1. VCC: Digital supply voltage.
2. GND: Ground.
3. Port A (PA7...PA0): Port A serves as the analog inputs to the A/D Converter, and as an 8-bit bi-directional
I/O port if the A/D Converter is not used. Port pins can provide internal pull-up resistors (selected for each bit).
The Port A output buffers have symmetrical drive characteristics with both high sink and source capability.
When pins PA0 to PA7 are used as inputs and are externally pulled low, they will source current if the internal
pull-up resistors are activated. The Port A pins are tri-stated when a reset condition becomes active, even if the
clock is not running.
Figure 2.9 Pinout ATmega32


Figure 2.10 Block Diagram (The Atmel®AVR®ATmega32): 32KB Flash, 2KB SRAM, 1KB EEPROM


4. Port B (PB7…PB0): Port B is an 8-bit bi-directional I/O port with internal pull-up resistors (selected for each
bit). The Port B output buffers have symmetrical drive characteristics with both high sink and source capability.

As inputs, Port B pins that are externally pulled low will source current if the pull-up resistors are activated. The
Port B pins are tri-stated when a reset condition becomes active, even if the clock is not running. Port B also
serves the functions of various special features of the ATmega32.

5. Port C (PC7…PC0): Port C is an 8-bit bi-directional I/O port with internal pull-up resistors (selected for
each bit). The Port C output buffers have symmetrical drive characteristics with both high sink and source
capability. As inputs, Port C pins that are externally pulled low will source current if the pull-up resistors are
activated. The Port C pins are tri-stated when a reset condition becomes active, even if the clock is not running.
If the JTAG interface is enabled, the pull-up resistors on pins PC5(TDI), PC3(TMS) and PC2(TCK) will be
activated even if a reset occurs.

The TDO pin is tri-stated unless TAP states that shift out data are entered. Port C also serves the functions of the
JTAG interface and other special features of the ATmega32.

6. Port D (PD7...PD0): Port D is an 8-bit bi-directional I/O port with internal pull-up resistors (selected for each
bit). The Port D output buffers have symmetrical drive characteristics with both high sink and source capability.
As inputs, Port D pins that are externally pulled low will source current if the pull-up resistors are activated. The
Port D pins are tri-stated when a reset condition becomes active, even if the clock is not running. Port D also
serves the functions of various special features of the ATmega32.

7. RESET: Reset input. A low level on this pin for longer than the minimum pulse length will generate a reset,
even if the clock is not running. Shorter pulses are not guaranteed to generate a reset.

8. XTAL1: Input to the inverting Oscillator amplifier and input to the internal clock operating circuit.

9. XTAL2: Output from the inverting Oscillator amplifier.

10. AVCC: Supply voltage pin for Port A and the A/D Converter. It should be externally connected to
VCC, even if the ADC is not used. If the ADC is used, it should be connected to VCC through a low-pass filter.

11. AREF: Analog reference pin for the A/D Converter.

2.4.1. ATmega32 Programmer's Model: Memory


1. 2KB SRAM: used for temporary data storage; volatile memory – contents are lost when power is shut off; fast read/write.

2. 1KB EEPROM: used for persistent data storage; non-volatile memory – contents are retained when power is
off. Fast to read but slow to write; individual bytes can be written.

3. 32KB Flash Program Memory: used to store program code; non-volatile memory – contents are retained when
power is off. Fast to read but slow to write; only entire "blocks" of memory can be written at a time. It is
organized in 16-bit words (16K words).

2.5. Assembly language Programming with ATmega32 Instruction Set


To code in assembler, one should have some idea about the architecture of the target hardware. It is enough to
assume that the AVR micro-controller appears to the programmer as a set of General Purpose Registers (GPRs:

R0 to R31), Special Function Registers (SFRs) that control the peripherals, and some data memory (2 kbytes of
SRAM for the ATmega32). All of them are in the data address space. We also have Program memory and EEPROM
in different address spaces.
Assembly language programming involves moving data between GPRs, SFRs and RAM, and performing
arithmetic and logical operations on the data.
General Purpose Registers
We are already familiar with the Special Function Registers (including DDRB and PORTB) that were used to
configure and control various features of the microcontroller. In addition to these ATmega32 has 32 general
purpose registers (32 here is a coincidence. The 32 in ATmega32 refers to 32 kB of flash memory available for
programming. All AVR micro-controllers, even ATmega8 or ATmega16 have 32 GPRs).
Any numerical value that needs to be used in the program needs to be first loaded into one of the GPRs. So, if
you want to load 0xff into DDRB, you first need to load 0xff into a GPR and then copy the content of the GPR
into DDRB. This might seem like an unnecessary restriction to us who have been used to writing DDRB=0xff in
C, but it is a necessary consequence of the streamlined hardware design of the processor which C hides from us.
Even though the 32 registers R0-31 are called "general purpose", there are special uses for some of them, which
will be discussed later.

Figure 2.11 Execution of "LDS R0, 0x300" and "LDS R1, 0x302" instructions

Instructions
What we could do intuitively with an assignment operator (=) in C requires the use of more than one instruction.
LDI (Load Immediate): used to load a constant value into one of the registers R16–R31 (that's a restriction – load
immediate cannot work with R0 to R15).

OUT (output to any Special Function Register): The SFRs are mapped to I/O locations 0 to 3Fhex; 0x17 and
0x18 are the I/O-mapped addresses of the registers DDRB and PORTB respectively.
The SFRs are also mapped into the memory space at locations 20hex to 5Fhex. For this reason you can use the
STS (Store Direct to SRAM) instruction instead of OUT, but with a different address: OUT 0x17, R16 and STS
0x37, R16 achieve the same result, but the former is more compact.
Adding two numbers
The code listed below (add.S) adds two numbers and displays the result on the LEDs connected to port B.
Instead of remembering the addresses of DDRB and PORTB, we have included the file 'avr/io.h' that contains all
the Register names. Naming the program with .S (capital S instead of small s ) invokes the pre-processor, that
also allows expressions like (1 << PB3) to be used. (add.S)
#include <avr/io.h>
.section .text ; denotes code section
.global main
main:
LDI R16, 255 ; load R16 with 255
STS DDRB, R16 ; set all bits of port B as output
LDI R16, 2 ; load R16 with 2
LDI R17, 4 ; load R17 with 4
ADD R16, R17 ; R16 <- R16 + R17
STS PORTB, R16 ; result to port B
.END
Running this program lights LEDs D2 and D3
The Status Register (SREG)
Arithmetic and logical operations will affect the status flag bits like Carry, Zero, Negative etc. Refer to
Atmega32 databook for complete information through online.

Bit 0: Carry (C)
Bit 1: Zero (Z)
Bit 2: Negative (N)
Bit 3: Two's complement overflow (V)
Bit 4: Sign bit, exclusive OR of N and V (S)
Bit 5: Half Carry (H)
Let us modify the previous program to evaluate 255 + 1. The result will be shown on port B and the status flag
register SREG on port A. (carry.S).

#include <avr/io.h>
.section .text ; denotes code section
.global main
main:
LDI R16, 255
STS DDRB, R16 ; All bits of port B as output
STS DDRA, R16 ; All bits of port A as output
LDI R17, 1 ; load R17 with 1, R16 already has 255
ADD R16, R17 ; R16 <- R16 + R17
STS PORTB, R16 ; Sum to port B
LDS R16, SREG ; Load the Status register
STS PORTA, R16 ; display it on port A
.END
The Carry, Zero and Half Carry bits will be set; since SREG is copied to port A, they appear there, while the sum (0) appears on port B.
Moving Data
To manipulate data, we need to bring them into the GPRs (R0 to R31) and the results should be put back into the
memory locations. There are different modes of transferring data between the GPRs and the memory locations,
as explained below.
Register Direct: MOV R1, R2 ; copies R2 to R1. Two GPRs are involved in this. There are also operations that
involve a single register, like INC R1.
I/O Direct: For moving data between the GPRs and the SFRs, since the SFRs can be accessed as I/O addresses.
OUT 0x17, R1 copies R1 to DDRB. Please note that the I/O address is 20hex less than the memory mapped
address (0x37) of the same SFR. (io-direct.S)
Immediate: This mode can be used for transferring a number to any register from R16 to R31, like : LDI R17,
200. The data is provided as a part of the instruction. (immed.S)
Data Direct: In this mode, the address of the memory location containing the data is specified, instead of the
data itself. LDS R1, 0x60 moves the content of memory location 0x60 to R1. STS 0x61, R1 copies R1 to
location 0x61. (data-direct.s)
Data Indirect: In the previous mode, the address of the memory location is part of the instruction word. Here the
address of the memory location is taken from the contents of the X, Y or Z registers. X, Y and Z are 16-bit
registers made by combining two 8-bit registers (X is R26 and R27; Y is R28 and R29; Z is R30 and R31). This is
required for addressing memory above 255. (data-indirect.s)
LDI R26, 0x60 ; address of location 0x0060 to X
LDI R27, 0x00
LD R16, X; load R16 with the content of memory location pointed to by X
This mode has several variations like pre and post incrementing of the register or adding an offset to it.

Programs having Data:
Programs generally have variables, sometimes with initialized data. The following example shows how to access
a data variable using direct and indirect modes. (data-direct-var.S)
#include <avr/io.h>
.section .data ; data section starts here
var1: .byte 0xEE ; initialized global variable var1
.section .text ; code section
.global __do_copy_data ; initialize global variables
.global __do_clear_bss ; and setup stack pointer
.global main
main:
LDS R1, var1 ; load R1 using data direct mode
STS DDRA, R1 ; display R1 on port A
STS PORTA, R1
LDI R26, lo8(var1) ; load the lower and
LDI R27, hi8(var1) ; higher bytes of the address of var1 to X
LD R16, X ; Load R16 using data-indirect mode, data from where X is pointing to
STS DDRB, R16 ; display R16 on port B
STS PORTB, R16
.END
The lines .global __do_copy_data and .global __do_clear_bss tell the assembler to insert code for initializing
the global variables, which is a must.
Jumps and Calls:
The programs written so far have an execution flow from the beginning to the end, without any branching or
subroutine calls, which are generally required in all practical programs. The execution flow can be controlled by
the CALL and JMP instructions. (call-jump.S)
#include <avr/io.h>
.section .text ; code section starts
disp: ; our subroutine
STS PORTB, R1 ; display R1 on port B
INC R1 ; increments it
RET ; and return
.global main
main:
LDI R16, 255
STS DDRB, R16
MOV R1, R16
loop:
RCALL disp ; relative call
CALL disp ; direct call
RJMP loop
.end
The main program calls the subroutine in a loop; the data is incremented on each call. Use an oscilloscope to
view the voltage waveform at each LED.
The AVR allows direct access to other locations in the data memory.

2.6. Programming in C to Interface I/O, Interrupts, ISR and Timers
I/O programming in C
All port registers of the AVR are both byte accessible and bit accessible. Here we look at both byte and bit I/O
programming.
Byte I/O: to access a PORT register as byte, we use the PORTx label where x indicates the name of the port. We
access the data direction register in the same way, using DDRx to indicate the data direction of port x. To access
a PIN register as a byte, we use the PINx label where x indicates the name of the port. See example below:
Example: Write an AVR C program to get a byte of data from Port C. If it is less than 100, send it to Port B;
otherwise, send it to Port D.
Solution:

Bit size I/O: The I/O ports of the ATmega32 are bit-accessible. But some AVR C compilers do not support this
feature, and the others do not have a standard way of using it. For example, the code line below can be used in
CodeVision to set the first pin of Port B to one: PORTB.0 = 1; but it cannot be used in other compilers such
as WinAVR. To write portable code that can be compiled on different compilers, we must use AND and OR bit-
wise operations to access a single bit of a given register. This way you can access a single bit without disturbing the
rest of the byte.
Example: A door sensor is connected to bit 1 of Port B, and an LED is connected to bit 7 of Port C. Write an
AVR C program to monitor the door sensor and, when it opens, turn on the LED.
Solution:


Interrupts: An interrupt is a signal (an "interrupt request") generated by some event external to the CPU, which
causes the CPU to stop executing the current code and jump to a separate piece of code to deal with the event. The
interrupt handling code is often called an ISR ("Interrupt Service Routine"). When the ISR is finished, execution
returns to the code running prior to the interrupt.
The sequence of an interrupt is: interrupt event, interrupt request, and interrupt service routine. Depending on the
platform, a varying number of ISRs may be supported, typically between 1 and 16; multiple events may be mapped
to a single routine (e.g. any PORTA pin change), and interrupt events may have priorities.
An ISR is implemented as a function with no parameters and no return value (void). Typically, keep interrupt
routines shorter than 15–20 lines of code.
The main sources of interrupts are hardware interrupts and software interrupts. Hardware interrupts are commonly
used to interact with external devices or peripherals; the microcontroller may have peripherals on chip. Software
interrupts are triggered by software commands, usually for special operating system tasks, e.g. switching between
user and kernel space, or handling exceptions.
Common hardware interrupt sources are: input pin change; hardware timer overflow or compare-match;
serial communication peripherals (such as UART, SPI, I2C – RX data ready, TX ready, TX complete); ADC
conversion complete; and watchdog timer timeout.
The advantage of interrupts is that software need not be written so that the processor frequently spends time
polling the status of input pins or registers.
Coding Interrupts in AVR C: the C language itself has no construct for managing interrupts, so
WinAVR adds the following:
1. Interrupt include file – we should include the interrupt header file if we want to use interrupts in our
program. We use the following instruction
#include <avr/interrupt.h>
2. cli() and sei() – In assembly, the CLI and SEI instructions clear and set the I bit of the SREG register,
respectively. In WinAVR, the cli() and sei() macros do the same tasks.
3. Defining ISR: to write an ISR (interrupt service routine) for an interrupt we use the following structure:
ISR (interrupt vector name) { //our program }

For the interrupt vector name we must use the ISR names in the table given below; for example, the following ISR
serves the Timer0 compare match interrupt: ISR (TIMER0_COMP_vect) { }
See Example next to table.
Table 2.1 interrupts Vector Name for the ATmega32/ATmega16 in WinAVR

Example: Using Timer0, generate a square wave on pin PORTB.5 while at the same time transferring
data from PORTC to PORTD.
Solution:


Chapter 3: Software Frameworks for Real-time and Embedded Systems


3.1. Real-time operating system: definitions, Characteristics, functionality,
components and support for applications
Definition: A real-time operating system (RTOS) is an operating system intended to serve real-time
applications that process data as it comes in, mostly without buffering delays. It is an operating system (OS)
with the system software required to synchronize and schedule tasks in a multitasking environment in
real time, and it takes care of the real-time constraints in the system.

In an RTOS, processing time requirements are measured in tenths of seconds or shorter increments of time. It is a
time-bound system with fixed time constraints: processing must be done inside the specified constraints,
otherwise the system will fail.

A real-time operating system is the type of operating system used in real-time systems. These systems are
characterized by having time as a key parameter. For example, in industrial process-control systems, real-time
computers have to collect data about the production process and use it to control machines in the factory.
Often there are hard deadlines that must be met. For example, if a car is moving down an assembly line,
certain actions must take place at certain instants of time. If, for example, a welding robot welds too early or
too late, the car will be ruined. If the action absolutely must occur at a certain moment (or within a certain
range), we have a hard real-time system. Many of these are found in industrial process control, avionics, and
military and similar application areas. These systems must provide absolute guarantees that a certain action
will occur by a certain time.
Therefore, a real-time operating system (RTOS) must be reliable; it must be fast and responsive, manage
limited resources and schedule tasks so they complete on time, and ensure functions are isolated and free of
interference from other functions.
Example of RTOS is eCos (Embedded Configurable Operating System), which is a free and open-source real-
time operating system intended for embedded systems and applications which need only one process with
multiple threads. It is designed to be customizable to precise application requirements of run-time
performance and hardware needs.

Characteristics of RTOS: The following are major characteristics of an RTOS:


 Reliability: an RTOS can work for a long time without any human interference.
 Predictability: its results are predictable because every action is executed within a predefined time frame.
 Performance: it can perform complex tasks without taking on more workload.
 Small footprint: the software and hardware used in an RTOS are small in size, so a technician has less
trouble finding errors in the RTOS.
 Affordable cost.
 Good stability, so the system can be upgraded or downgraded without breaking.
 Low memory consumption.
 Better response time, with high predictability.
 Low resource usage.
 Ability to cope with an unpredictable environment (unpredictable inputs, events, and conditions); these are
handled by the RTOS according to the design of the particular system.
 A kernel that stores the state of interrupted tasks so they can resume in the appropriate time frame.

Functions of RTOS: The primary function of an RTOS is better management of RAM and the processor, as
well as giving access to all system resources.
 Priority-based scheduler: a real-time OS supports many priority levels (commonly in the range 32–256)
for its tasks. The scheduler always activates the highest-priority ready task; when a higher-priority task
becomes ready, it preempts the currently running one.
 System clock interrupt routine: a system-clock interrupt routine lets the RTOS perform highly
time-sensitive work using the system clock, allotting time frames for specific tasks.
 Deterministic behavior: even with a large number of tasks (say 100 to 1000), the RTOS determines the
next highest-priority task and executes it without delay.
 Synchronization and messaging: to let the tasks of one system communicate with each other and with
other systems, the RTOS provides synchronization and messaging – event flags to synchronize internal
activities, and mailboxes, pipes and message queues to send text messages.

Components of RTOS: The following are the main components of an RTOS:
 The Scheduler: this component decides the order in which tasks execute, which is generally based on
priority.

 Symmetric Multiprocessing (SMP): the ability of the RTOS to handle multiple different tasks on
multiple processors so that parallel processing can be done.
 Function Library: an important element of RTOS that acts as an interface connecting kernel and
application code. The application sends requests to the kernel through the function library so that the
application can obtain the desired results.
 Memory Management: this element allocates memory to every program; it is one of the most important
elements of the RTOS.
 Fast dispatch latency: the interval between the OS recognizing that a task has terminated and the
moment the thread at the head of the ready queue actually starts processing.
 User-defined data objects and classes: an RTOS is typically programmed in languages like C or
C++, with data organized according to the operations performed on it.

Figure 3.1: Components of Real Time Operating System


Application areas of RTOS: an RTOS is used wherever real-time systems are applied, such as:

 Airline reservation systems
 Air traffic control systems
 Systems that provide immediate updating
 Systems providing up-to-the-minute information on stock prices
 Defense application systems like RADAR
 Networked multimedia systems
 Command and control systems
 Internet telephony
 Anti-lock brake systems
 Heart pacemakers


Difference between GPOS and RTOS: below are important differences between them.

General-Purpose Operating System (GPOS):
 Used for desktop PCs and laptops.
 Process-based scheduling.
 Interrupt latency is not considered as important as in an RTOS.
 No priority-inversion handling mechanism is present in the system.
 Kernel operation may or may not be preemptible.
 No predictability guarantees; priority inversion remains unnoticed.

Real-Time Operating System (RTOS):
 Applied only to embedded applications.
 Time-based scheduling, such as round-robin scheduling.
 Interrupt latency is minimal, measured in a few microseconds.
 A priority-inversion handling mechanism is present, so priority inversion cannot disrupt the system.
 Kernel operation can be preempted.
 Predictable, deterministic behavior.
Tip 1: Advantages of a Real-Time Operating System


 An RTOS produces accurate results while making maximum use of all available resources, so it has little or no down time.
 An RTOS is designed to be as error-free as possible.
 An RTOS can be used in embedded equipment because of its small size.
 An RTOS is highly optimized, so it can be used in products that are online all the time, such as a refrigerator.
 An RTOS allocates memory systematically to every part of the operating system.
 An RTOS is a multitasking system.
 An RTOS encourages short ISRs (Interrupt Service Routines).
 Time allocation is excellent in an RTOS.
 Well-designed inter-task communication.
 In an RTOS, every task is executed according to priority-based scheduling, meaning every task is performed within a
predefined time frame.
 Due to its modular nature, an RTOS allows modular, task-based testing.
 RTOS code is reusable.
 An RTOS is a scalable OS.
 Due to better idle processing, an RTOS is more reliable.
 An RTOS comes with a bundle of drivers.
 Well-designed power management.
 An RTOS allows excellent protection.
 An RTOS is fast.
 An RTOS allows MCU portability.

Tip 2: Disadvantages of a Real-Time Operating System


 A real-time OS can execute only a limited number of tasks in the same time frame.
 It is not well suited to long-running multi-tasking and multi-threading, so it can execute only a few tasks at a time.
 An RTOS can switch between only a few tasks, and thread priority support is limited.
 The most experienced designers must be hired to write the algorithms, because RTOS algorithms are very complicated.
 Specific device drivers and interrupt signals are required so that requests reach the interrupts rapidly.
 It is more costly, because an RTOS needs many resources to perform its tasks.
 A proficient programmer is needed to write RTOS code, and low-priority tasks may be starved.
 In a real-time OS, error handling is very difficult, as it uses a non-trivial share of processor cycles.
3.2. Inter process communication
Inter-Process Communication (IPC) is a set of techniques for the exchange of data among two or more threads
in one or more processes. Processes may be running on one or more computers connected by a network.

44 | L e c t u r e H a n d o u t: (R T E S) C h a p t e r 3 – Organized by A y e l e S.
Hawassa University - IOT Faculty of Informatics Department of Computer Science
A process is independent if it cannot affect or be affected by the other processes executing in the system. A
process is cooperating if it can affect or be affected by the other processes executing in the system; any process
that shares data with other processes is a cooperating process.
Cooperating processes need inter-process communication (IPC) mechanism that will allow them to exchange
data and information. Two models of IPC:
Shared memory:
 a region of memory that is shared by cooperating processes is established,
 processes can exchange information by reading and writing data to the shared region
Message passing:
 Communication takes place by means of messages exchanged between the cooperating processes.
 It is slower than shared memory.
 Each process should be able to name the other processes.
 The producer typically uses send() system call to send messages, and the consumer uses receive()
system call to receive messages.
 These system calls can be either synchronous or asynchronous, and could either be between processes
running on a single machine, or could be done over the network to coordinate machines in a
distributed system.

Figure 3.2: Message passing and shared memory in inter-process communication

Message passing is useful for exchanging smaller amounts of data and is easier to implement for inter-computer
communication. Message-passing systems are typically implemented using system calls and thus require kernel
intervention; in shared-memory systems, system calls are required only to establish communication.

Once a shared-memory region is established, all accesses are treated as ordinary memory accesses and no
assistance from the kernel is required. Shared memory allows multiple processes to share virtual memory space.
This is the fastest, but not necessarily the easiest, way for processes to communicate with one another.
In general, one process creates or allocates the shared memory segment; the size and access permissions for the
segment are set when it is created. The process then attaches the shared segment, causing it to be mapped into its
current data space. If needed, the creating process then initializes the shared memory. Once created, and if
permissions permit, other processes can gain access to the shared memory segment and map it into their data
space. Each process accesses the shared memory relative to its own attachment address; the data the processes
reference is common, but each process may use different attachment address values. For each process involved,
the mapped memory appears no different from any of its other memory addresses.

Figure 3.3: Processes accessing shared memory


Message Passing provides a mechanism for processes to communicate and to synchronize their actions without
sharing the same address space.
If P and Q wish to communicate, they need to: establish a communication link between them exchange
messages via send/receive. Each process that wants to communicate must explicitly name the recipient or sender
of the communication:
 send (P, message) – send a message to process P
 receive(Q, message) – receive a message from process Q
Properties of communication link:
 links are established automatically
 A link is associated with exactly one pair of communicating processes
 The link may be unidirectional, but it is usually bi-directional

3.3. Real time task scheduling


In a multitasking system, there should be some mechanism in place to share the CPU among the different
tasks and to decide which process/task is to be executed at a given point of time. Determining which
task/process is to be executed at a given point of time is known as task/process scheduling.
Multitasking is the process of scheduling and switching the CPU between several tasks.

A real-time system is typically developed following a concurrent programming approach in which a system
may be divided into several parts, called tasks, and each task, which is a sequence of operations, executes in
parallel with other tasks. A task may issue an infinite number of instances called jobs during run-time.

Scheduling policies form the guidelines for determining which task is to be executed when. The scheduling
policies are implemented in an algorithm that is run by the kernel as a service. The kernel service/
application that implements the scheduling algorithm is known as the 'scheduler'.
The kernel is the part of a multitasking system responsible for the management of tasks and
communication between tasks.

The task scheduling policy can be pre-emptive, non-preemptive or cooperative; depending on the scheduling
policy, the process scheduling decision may take place when a process switches its state to:
 'Ready' state from 'Running' state
 'Blocked/Wait' state from 'Running' state
 'Ready' state from 'Blocked/Wait' state
 'Completed' state

Task Scheduling - Scheduler Selection: The selection of a scheduling criterion/algorithm should consider
 CPU Utilization: The scheduling algorithm should always keep CPU utilization high. CPU
utilization is a direct measure of what percentage of the CPU is being utilized.
 Throughput: This gives an indication of the number of processes executed per unit of time. The
throughput of a good scheduler should always be high.
 Turnaround Time: The amount of time taken by a process to complete its execution. It includes
the time the process spends waiting for main memory, time spent in the ready queue, time spent
completing I/O operations, and time spent in execution. The turnaround time should be a
minimum for a good scheduling algorithm.
 Waiting Time: The amount of time a process spends in the 'Ready' queue waiting to get CPU
time for execution. The waiting time should be minimal for a good scheduling algorithm.
 Response Time: The time elapsed between the submission of a process and the first response. For a
good scheduling algorithm, the response time should be as low as possible.

Task Scheduling – Queues: The various queues maintained by OS in association with CPU scheduling are
 Job Queue: Job queue contains all the processes in the system.
 Ready Queue: Contains all the processes, which are ready for execution and waiting for CPU to get their
turn for execution. The Ready queue is empty when there is no process ready for running.
 Device Queue: Contains the set of processes, which are waiting for an I/O device.

3.4. Scheduling
The main goal of RTOS scheduling is meeting deadlines. For example, if you have five homework
assignments and only one is due in an hour, you work on that one first. Similarly, in an RTOS the scheduler uses the
priority to know which thread of execution to run next. In RTOS, a thread of execution is called a task.
The scheduler in a Real Time Operating System (RTOS) is designed to provide a predictable (normally
described as deterministic) execution pattern. This is particularly of interest to embedded systems as embedded
systems often have real time requirements. A real time requirement is one that specifies that the embedded
system must respond to a certain event within a strictly defined time (the deadline).
The scheduler, also called the dispatcher, is the part of the kernel responsible for determining which task
will run next.
3.4.1. Cyclic scheduling
Cyclic scheduling is simple. Each task is allowed to run to completion before it hands over to the next; a task
cannot be discontinued while it runs. Cyclic scheduling is used for tasks that execute in turn, known as cyclic
executive tasks. It carries the disadvantages of sequential programming in a loop, but it is an
important way to sequence tasks in a real-time system. It is static: computed offline and stored in a table.
Cyclic Schedule Table:

The table executes completely in one hyperperiod H and then repeats. H is the least common multiple of all the
task periods, so there are H/f quanta (frames) per hyperperiod for a given time slice f.
The cyclic executive is one of the major software architectures for embedded systems. Historically, cyclic
executives dominate safety-critical systems, where simplicity and predictability win. However, there are
significant drawbacks: finding a schedule might require significant offline computation.
Multiple tables can support multiple system modes; e.g., an aircraft might support takeoff, cruising, landing, and
taxiing modes. Mode switches are permitted only at hyperperiod boundaries; otherwise it is hard to meet deadlines.
Example: Consider a system with four tasks T1 = (1, 4), T2 = (1.8, 5), T3 = (1, 20), T4 = (2, 20), given as
(execution time, period). First we determine the size of the major cycle based on the LCM of the task periods,
and then select the frame size.

 The major cycle is LCM(4, 5, 20, 20) = 20.
 Frame size is selected based on three constraints: it should divide the major cycle evenly, it must be greater
than or equal to the largest task execution time, and there must be at least one full frame between the release
of a job and its deadline. The divisors of 20 give candidates {1, 2, 4, 5, 10, 20}; the second constraint
(f ≥ 2) rules out 1, leaving {2, 4, 5, 10, 20}. The third constraint requires 2f – gcd(pi, f) ≤ Di to hold for
each task Ti with period pi and deadline Di. Only f = 2 satisfies it (for example, f = 4 fails for T2 because
2×4 – gcd(5, 4) = 7 > 5), so frame size = 2 is selected.
Then, the possible schedule is as follows:

Table starts out with: (0, T1), (1, T3), (2, T2), (3.8, I), (4, T1), …
 We divide hyperperiods into frames, so timing is enforced only at frame boundaries. Each task is
executed as a function call and must fit within a single frame; multiple tasks may be executed in a frame.
If the frame size is f, the number of frames per hyperperiod is F = H/f.
Selecting an appropriate frame size (f): -
- To minimize scheduling overhead and the chance of inconsistency, the frame size f should be at least as
large as each task's computation time; this sets a lower bound of the maximum task computation time.
- To minimize the table size, f should evenly divide the major cycle (hyperperiod), which allows only a
few discrete frame sizes.
- To satisfy task deadlines, at least one full frame must exist between the arrival of a task and its deadline;
this sets an upper bound.
- If there is not even a single full frame, the task would miss its deadline, because by the time it could be
taken up for scheduling the deadline would be imminent.
- To minimize inconsistency: unless a job runs to completion, its partial results might be used by other jobs,
leading to inconsistency.

Task Slices: What should be done if the frame-size constraints cannot be met? Answer: apply task slicing.

Example: T = { T1 = (1, 4), T2 = (2, 5), T3 = (5, 20) }, given as (execution time, period), where T2 has a
deadline D2 = 7 and the other deadlines equal the periods.

 By constraint 1: f ≥ 5
 By constraint 3: f ≤ 4, since 2f – gcd(pi, f) ≤ Di must hold for every task
 The two constraints conflict, so the solution is to slice a task into smaller sub-tasks: T3 = (5, 20)
becomes (1, 20), (3, 20), and (1, 20).
Now by constraint 1: f ≥ 3; thus f = 4 works (for T2, 2×4 – gcd(5, 4) = 7 ≤ D2 = 7).
Below is the cyclic executive pseudocode:

// L is the stored schedule
current time t = 0;
current frame k = 0;
do forever
    accept clock interrupt;
    currentBlock = L(k);
    t++;
    k = t mod F;
    if last task not completed, take appropriate action;
    execute slices in currentBlock;
    sleep until next clock interrupt;
Computing a Static Schedule

Problem: Derive a frame size and schedule meeting all constraints.

Solution: Reduce it to a network-flow problem. Use the constraints to compute all possible frame sizes.
For each possible size, try to find a schedule using a network-flow algorithm. If the flow has a certain value,
a schedule is found and we are done; otherwise no schedule is found, so look at the next frame size. If no
frame size works, the system is not schedulable using a cyclic executive.

 Read about network flow algorithm from any source.

3.4.2. Priority-based scheduling

Typical RTOS based on fixed-priority preemptive scheduler assigning each process with a priority
(numbering). At any time, scheduler runs highest priority process ready to run to completion, unless
preempted.
Most real-time systems are built with the aid of priority-based scheduling algorithms. If tasks are not
scheduled accurately, the percentage of tasks missing their deadlines will grow. In an RTOS, a high-priority
task is allowed to preempt a low-priority task, which results in context switching.
Priority based scheduling is inherently a best effort approach that enables us to give better service to certain
processes. In multi-queue scheduling, priority was adjusted based on whether a task was more interactive or
compute intensive. If our task is competing with other high priority tasks, it may not get as much time as it
requires.
In priority-based preemptive scheduling, no schedule (predetermined activation times) is generated off-line.
Tasks are activated in response to events (e.g., arrival of a signal, message, etc.). At any given time, the
highest-priority ready task is running; if several tasks are ready on a processor, the highest-priority one is
executed. Tasks can be preempted at any moment: if a task becomes ready to execute (the respective event
has occurred) and it has a higher priority than the running task, then the running task is preempted and the
new one executes.

Figure 3.4 Priority-based Preemptive Scheduling

3.5. Multi-tasking and Concurrency issue


The multitasking and inter-task communications features of the operating system allow the complex
application to be partitioned into a set of smaller and more manageable tasks. The partitioning can result in
easier software testing, work breakdown within teams, and code reuse. Complex timing and sequencing
details can be removed from the application code and become the responsibility of the operating system.
A conventional processor can only execute a single task at a time, but by rapidly switching between tasks a
multitasking operating system can make it appear as if each task is executing concurrently. This is depicted
in the diagram below, which shows the execution pattern of three tasks with respect to time. The task names
are color coded and written down the left-hand side. Time moves from left to right, with the colored lines
showing which task is executing at any particular time. The upper diagram shows the perceived
concurrent execution pattern, and the lower one the actual multitasking execution pattern.


Figure 3.5: Concurrent computations of multiple tasks


How does the RTOS achieve multitasking? Each thread in an RTOS has a dedicated private context in
RAM, consisting of a private stack area and a thread-control-block (TCB). The context must be that big
because, in sequential code, it must remember the whole nested function-call tree and the exact place in the
code, that is, the program counter. For example, in the Blink thread, the contexts of the two calls to
RTOS_delay() will have identical call stacks but will differ in the values of the program counter.
Every time a thread makes a blocking call, such as RTOS_delay(), the RTOS saves the CPU registers on that
thread's stack and updates its TCB. The RTOS then finds the next thread to run, in a process called
scheduling. Finally, the RTOS restores the CPU registers from that next thread's stack. At this point the next
thread resumes execution and becomes the current thread.
The whole context-switch process is typically coded in CPU-specific assembly language and takes a few
microseconds to complete. For example, a call to RTOS_delay() from Thread-A results in a context switch
to Thread-B. Thread-A, switched "away" in this process, stops consuming CPU cycles, so it is effectively
blocked. Instead, the CPU cycles that a primitive superloop would waste in a polling loop go to
Thread-B, which has something useful to do.
Please note that in a single CPU system, for any given thread to run, all other threads must be blocked. This
means that blocking is quite fundamental to multitasking.
Finally, note that a context switch can be also triggered by an interrupt, which is asynchronous to the
execution of threads. For example, unblocking of Thread-A and blocking of Thread-B, can be triggered by

the system clock tick interrupt. An RTOS kernel in which interrupts can trigger a context switch is called a
preemptive RTOS.

Tip3: What is superloop?


Smaller embedded systems are typically designed as a "superloop" that runs on a bare-metal CPU, without any
underlying operating system. This is also the most basic structure that all embedded programmers learn at the
beginning of their careers. For example, in the basic Arduino Blink tutorial, the code is structured as an endless
"while (1)" loop, which turns an LED on, waits for 1000 ms, turns the LED off, and waits for another 1000 ms; all of
this results in blinking the LED. The main characteristic of this approach is that the code often waits in-line for various
conditions, for example a time delay. "In-line" means that the code won't proceed until the specified condition is met.
Programming this way is called sequential programming. The main problem with the sequential approach is that
while waiting for one kind of event, the "superloop" is unresponsive to any other events, so it is difficult to add new
events to the loop. Of course, the loop can be modified to wait for ever shorter periods of time so as to check for
various conditions more often, but adding new events to the loop becomes increasingly difficult and often causes an
upheaval to the whole structure and timing of the entire loop.

Bare-metal: Common name for an application that does not use an RTOS.

3.6. Handling resource sharing and dependencies


In many applications, real-time tasks need to share some resources among themselves. Often these shared
resources need to be used by the individual tasks in exclusive mode. This means that a task using a resource
cannot immediately hand it over to another task that requests it at an arbitrary point in
time; it can do so only after it completes its use of that resource. If a task is preempted before it
completes using the resource, the resource can become corrupted. Examples of shared resources: data
structures, variables, main memory areas, files, sets of registers, I/O units, etc. Many shared resources do not
allow simultaneous access but require mutual exclusion. These resources are called exclusive resources.
In this case, no two threads are allowed to operate on the resource at the same time.

There are several methods available to protect exclusive resources, for example disabling interrupts and
preemption or using concepts like semaphores and mutex that put threads into the blocked state if necessary.
Each exclusive resource Ri must be protected by a different semaphore Si. Each critical section operating on
a resource must begin with a wait(Si) primitive and end with a signal(Si) primitive.

All tasks blocked on the same resource are kept in a queue associated with the semaphore. When a running
task executes a wait on a locked semaphore, it enters a blocked state until another task executes a signal
primitive that unlocks the semaphore. To ensure data consistency is maintained at all times, access to a
resource shared between tasks, or between tasks and interrupts, must be managed using a 'mutual
exclusion' technique.

One possibility is to disable all interrupts:

This kind of critical section must be kept very short; otherwise it will adversely affect interrupt response
times. Another possibility is to use mutual exclusion: in FreeRTOS, a mutex is a special type of semaphore
that is used to control access to a resource shared between two or more tasks. A semaphore that is
used for mutual exclusion must always be returned:

When used in a mutual exclusion scenario, the mutex can be thought of as a token that is associated
with the resource being shared.
For a task to access the resource legitimately, it must first successfully 'take' the token (be the token
holder). When the token holder has finished with the resource, it must 'give' the token back.
Only when the token has been returned can another task successfully take the token, and then safely
access the same shared resource.

Figure 3.6: Processes in critical-section

Example (in algorithm)


3.6.1. Priority inversion


The use of shared memory for implementing communication between tasks may cause priority inversion
and blocking. Therefore, either the implementation of the shared medium must be "thread safe" or the data
exchange must be protected by critical sections.
Whenever two tasks want to communicate, they must be synchronized for a message transfer to take place;
this is called a rendezvous. They have to wait for each other, i.e., both must be ready at the same time to do
the data exchange. In dynamic real-time systems, estimating the maximum blocking time for a
process rendezvous is difficult. Communication always needs synchronization; therefore, the timing of the
communication partners is closely linked.
Look at the diagram below:

A simple solution to priority inversion is to disallow preemption during the execution of all critical sections.
This approach is simple, but it creates unnecessary blocking, as unrelated tasks may be blocked. See the
diagram below, which shows this solution applied to the scenario above.

3.6.2. Resource sharing protocols

Basic idea: Modify the priority of those tasks that cause blocking. When a task Ji blocks one or more of
higher priority tasks, it temporarily assumes a higher priority.
Specific Methods:
Priority Inheritance Protocol (PIP), for static priorities
Priority Ceiling Protocol (PCP), for static priorities
Stack Resource Policy (SRP), for static and dynamic priorities and others …
Assumptions: n tasks cooperate through m shared resources; priorities are fixed; all critical sections on a
resource begin with a wait(Si) and end with a signal(Si) operation. When a task Ji blocks one or more
higher-priority tasks, it temporarily assumes (inherits) the highest priority of the blocked tasks. We
distinguish a fixed nominal priority Pi and an active priority pi greater than or equal to Pi. Jobs J1, …, Jn are
ordered by nominal priority, with J1 having the highest priority. Jobs do not suspend themselves.

Difference between PIP and PCP

1. Priority Inheritance Protocol (PIP): a resource-sharing protocol used for sharing critical
resources among different tasks. It allows the sharing of critical resources among different tasks
without the occurrence of unbounded priority inversion. When a task undergoes priority inversion, the
priority of the lower-priority task holding the critical resource is raised by the priority-inheritance
mechanism.

2. Priority Ceiling Protocol (PCP): an extension of the Priority Inheritance Protocol (PIP) and the Highest
Locker Protocol (HLP). It solves the unbounded priority inversion of PIP and the deadlock and chain
blocking of the Highest Locker Protocol, and it also minimizes the inheritance-related inversion that was a
limitation of HLP. It is not a greedy approach like PIP.

Algorithm:
 Jobs are scheduled based on their active priorities. Jobs with the same priority are executed in
FCFS order.
 When a job Ji tries to enter a critical section and the resource is held by a lower-priority job, Ji is
blocked; otherwise it enters the critical section.
 When a job Ji is blocked, it transmits its active priority to the job Jk that holds the semaphore. Jk
resumes and executes the rest of its critical section with priority pk = pi (it inherits the highest
priority of the jobs blocked by it).
 When Jk exits a critical section, it unlocks the semaphore and the highest-priority job blocked on that
semaphore is awakened. If no other jobs are blocked by Jk, then pk is set back to Pk; otherwise it is set
to the highest priority of the jobs blocked by Jk.
 Priority inheritance is transitive: if J1 is blocked by J2 and J2 is blocked by J3, then J3 inherits the
priority of J1 via J2.
Example: have a look at diagram below

Direct Blocking: higher-priority job tries to acquire a resource held by a lower-priority job
Push-through Blocking: medium-priority job is blocked by a lower-priority job that has inherited a higher
priority from a job it directly blocks


3.7. Fault- tolerance


A fault is an erroneous state of software or hardware resulting from failures of its components. Fault
tolerance is the ability of a computer system to survive in the presence of faults. It aims at guaranteeing the
services delivered by the system despite the presence or appearance of faults.
The major sources of fault are listed below:
Design errors,
Manufacturing Problems,
External disturbances (as harsh environmental conditions),
System misuse,
Mechanical -- “wears out”: deterioration (wear, fatigue, corrosion) and shock (fractures, overload,
etc.)
Electronic Hardware -- “bad fabrication; wears out”: latent manufacturing defects, operating
environment (noise, heat, ESD, electro-migration) and design defects.
Software -- “bad design”: Design defects and “Code rot” -- accumulated run-time faults
People: Can take a whole lecture content...
The main aspects of fault-tolerant computing (FTC) are: fault detection, fault isolation and containment, system
recovery, and fault diagnosis and repair. Look at the diagram below:

There is a four-fold categorization of methods to deal with system faults and increase system reliability
and/or availability, known as fault toleration:

1. Fault Avoidance: How to prevent fault occurrence. Increase reliability through conservative design
and the use of high-reliability components.
2. Fault Tolerance: How to provide the service complying with the specification in spite of faults
having occurred or occurring.
3. Fault Removal: How to minimize the presence of faults.
4. Fault Forecasting: How to estimate the presence, occurrence, and the consequences of faults.

Fault-tolerance techniques: there are two categories, hardware fault tolerance and software fault
tolerance.

Hardware Fault-Tolerance Techniques: as listed below


 Fault Detection,
 Redundancy (masking, dynamic):
o Use of extra components to mask the effect of a faulty component (static and dynamic).
o Redundancy alone does not guarantee fault tolerance; in fact, the extra hardware brings
higher fault arrival rates.
o Redundancy management is important; a fault-tolerant computer can end up spending as
much as 50% of its throughput on managing redundancy.
 Detection of a failure is a challenge:
o Many faults are latent and show up later.
 Fault detection gives a warning when a fault occurs:
o Duplication: two identical copies of hardware run the same computation and compare each
other's results. When the results do not match, a fault is declared.
o Error-detecting codes: these utilize information redundancy.
 Static and Dynamic Redundancy: extra components mask the effect of a faulty component.
 Masking redundancy is static redundancy: once the redundant copies of an element are installed,
their interconnection remains fixed, e.g., N-tuple modular redundancy (NMR). In TMR (Triple Modular
Redundancy), three identical copies of a module provide separate results to a voter that produces a
majority vote at its output.
 In dynamic redundancy, the system configuration is changed in response to faults; its success largely
depends upon fault-detection ability.

TMR Configuration: processors PR1, PR2 and PR3 execute identical copies of the code for the same
application. The voter compares the results and forwards the majority vote (two out of three).


Figure 3.7: Hardware Fault-Tolerance Technique by TMR

Software Fault-Tolerance: hardware-based fault tolerance provides tolerance against physical, i.e.
hardware, faults. But how can design/software faults be tolerated? It is virtually impossible to produce fully
correct software.

We need something to prevent software bugs from causing system disasters and to mask out software bugs.
Tolerating unanticipated design faults is much more difficult than tolerating anticipated physical faults.

Software fault tolerance is needed because software bugs will occur no matter what we do; there is no fully
dependable way of eliminating these bugs, so they have to be tolerated.

Software fault tolerance uses design redundancy to mask residual design faults in software programs.
One software fault-tolerance strategy is defensive programming: if you cannot be sure that what you are doing
is correct, do it in many ways - review and test the software, verify the software, execute the specifications,
and produce programs automatically.

Software fault tolerance builds on hardware fault tolerance. Software fault detection is a bigger challenge
because, as mentioned above, many software faults are latent and show up later. A system can use a
watchdog to detect that the program has crashed and change the specification to provide a lower level of
service. Common strategies are: write new versions of the software and throw the original version away;
use all the versions and vote on the results, i.e., N-version programming (NVP); or use an on-line
acceptance test to determine which version to believe, i.e., the recovery block scheme.

3.8. Synchronization techniques

Introduction: Most RTOS provide many types of mechanisms for safe communication and
synchronization between tasks and between interrupt routines and tasks. Although tasks in an embedded
application have a degree of independence, it does not mean that they have no “awareness” of one another.
While some tasks are truly isolated from others, the requirement for communication and

synchronization between tasks is very common. This represents a key part of the functionality provided by
an RTOS known as synchronization.

There are three broad paradigms for inter-task communications and synchronization:
1. Task-owned facilities – attributes that an RTOS imparts to tasks that provide communication
(input) facilities. For example signals.
2. Kernel objects – facilities provided by the RTOS which represent stand-alone communication or
synchronization facilities. Examples include: event flags, mailboxes, queues/pipes, semaphores
and mutexes.
3. Message passing – a rationalized scheme where an RTOS allows the creation of message objects,
which may be sent from one task to another or to several others. This is fundamental to the kernel
design and leads to the description of such a product as being a “message passing RTOS”.

3.8.1. Clock Synchronization Algorithms


Clock synchronization is a method of synchronizing the clock values of any two nodes in a distributed
system using an external reference clock or the internal clock value of a node. During synchronization,
many factors on the network affect the clock values; these factors need to be considered before correcting
the actual clock value.

Issues in Clock Synchronization: A simple method of clock synchronization is that each node sends a
request message "time=?" to the real-time server and gets a reply message "time=t". This method has the
following two issues:

a) The ability of each node to read another node's clock value. This can introduce errors due to delays in
message communication between nodes. The delay can be computed as the time needed to prepare, transmit
and receive an empty message in the absence of transmission errors and system load.

b) Time must never run backward, since that may lead to the repetition of events or transactions, creating
disorder in the system. Time appearing to run backward is only a perception caused by resetting a clock;
time itself never goes backward.

Based on the above approach, clock synchronization algorithms are divided into Centralized Clock
Synchronization and Distributed Clock Synchronization algorithms.
i. Centralized Clock Synchronization Algorithms
1. Passive Time Server: Each node periodically sends a request message "time=?" to the time-server to get
the accurate time, and the time-server responds with "time=t". Assume that the client node sends the
message at time t0 and gets the reply at time t1; then the message propagation time from server to client is
(t1-t0)/2. The client node

receives the reply and adjusts its clock time to t + (t1-t0)/2. For a more accurate time, the message
propagation time must be estimated more precisely. Two methods have been proposed to improve the
estimate.
i) Additional information available: Assume that, to handle the interrupt and to process the request message
sent by the client, the time server takes time ti. Hence, a better estimate of the propagation time is
(t1-t0-ti)/2, and the clock is readjusted to t + (t1-t0-ti)/2.
ii) No additional information available (Cristian's Method): This method uses a time-server connected to a
device that receives a signal from a source of UTC (Coordinated Universal Time) to synchronize nodes
externally. The simple estimate t + (t1-t0)/2 is accurate if the nodes are on the same network. For nodes on
different networks, the minimum transmission time tmin is used: tmin is the minimum time needed for the
request message to reach the server, and equally the minimum time needed for the reply message to reach
the client, as shown in the figure below.

Figure 3.8: Cristian's Method

When the reply message arrives, the server's clock value lies in an interval of width (t1-t0) - 2tmin.
Therefore, the accuracy of the estimated clock time is ±((t1-t0)/2 - tmin). There are some limitations:

i. There is a single time server, which might fail, making synchronization temporarily unavailable.
ii. A faulty time-server may reply with spurious time, or an imposter time-server may reply with
incorrect time.

UTC (Coordinated Universal Time) is the primary time standard on which all time zones are based
and by which the world regulates clocks and time; it is kept within about 1 second of mean solar time
at 0° longitude and is the successor of GMT.
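The adjustment and accuracy formulas above can be checked with a small sketch (illustrative Python; the function names are hypothetical):

```python
def cristian_adjust(t0, t1, t_server):
    """Client sent the request at local time t0 and received the reply at t1
    carrying server time t_server; assume a symmetric propagation delay."""
    return t_server + (t1 - t0) / 2

def cristian_accuracy(t0, t1, tmin):
    """Half-width of the interval in which the true server time must lie,
    given the minimum one-way transmission time tmin."""
    return (t1 - t0) / 2 - tmin
```

For a request sent at t0 = 100, answered with t = 505 and received at t1 = 110, the client sets its clock to 510; with tmin = 2 it knows the true time to within ±3.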

2. Active Time Server: The time server periodically broadcasts its clock time as "time=t". Other nodes
receive the message and readjust their local clocks accordingly. Each node assumes a message
propagation time ta and readjusts its clock to t + ta. There are some limitations:
i. Due to communication link failure the message may be delayed, and the clock readjusted to an
incorrect time.

ii. Network should support broadcast facility.
3. Berkeley Algorithm: This algorithm overcomes the faulty-clock and malicious-interference limitations
of the passive time server and also overcomes the limitations of the active time server algorithm. The
time server periodically sends a request message "time=?" to all nodes in the system, and each node
sends back its clock value. The time server has an estimate of the message propagation time to each
node and adjusts the clock values in the reply messages accordingly. It then takes the average of the
other computers' clock values together with its own, avoiding readings from unreliable clocks, and
readjusts its own clock. For readjustment, the time server sends each node the factor by which it must
adjust its clock; this value can be either positive or negative. There are also the following limitations:
i. Being a centralized scheme, it has a single point of failure.
ii. A single time-server may not be capable of serving all time requests, so it does not scale.
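The averaging and readjustment step of the Berkeley algorithm can be sketched as follows (illustrative Python; the threshold-based rejection of unreliable clocks is one common formulation, and the function name is hypothetical):

```python
def berkeley_round(server_time, node_times, threshold):
    """The time server averages its own clock with the nodes' clocks,
    ignoring values that differ from its own by more than `threshold`,
    and returns the signed (+ve or -ve) adjustment for every node."""
    usable = [t for t in node_times if abs(t - server_time) <= threshold]
    average = (server_time + sum(usable)) / (1 + len(usable))
    return [average - t for t in node_times]
```

With the server at 100, nodes at [98, 102, 250] and threshold 10, the unreliable clock at 250 is excluded from the average (100), yet it still receives a correction of -150.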

ii. Distributed Clock Synchronization Algorithms


There is no centralized or reference time-server. Clock synchronization is performed using the internal
clock values of each node, while keeping the clock skew among the clocks of different nodes in the system
to a minimum. Distributed clock synchronization overcomes the issues of scalability and single point of
failure, as no common or global clock is required: processes make decisions based on local information
and relevant information distributed across machines.
Global Averaging Algorithm: Each node in the system broadcasts its local clock time in the form of a
special "resync" message when its local time equals T0 + iR, where i is an integer and R is a parameter
that depends on the number of nodes in the system, the maximum drift rate, etc. After broadcasting, the
clock process of the node waits for a time period T. During this period it collects "resync" messages from
other nodes and records each message's receiving time according to its local clock. At the end of the
waiting period, it estimates the skew of its own clock with respect to each of the other nodes.
To correct the time, the estimated skews are used in one of two ways:

a) Take the average of the estimated skews and use it as the correction for the local clock. To limit the
impact of faulty clocks, skews greater than a threshold are set to zero before computing the average.
b) Each node limits the impact of faulty clocks by discarding the highest and lowest estimated skews
and then averaging the rest.
There are some limitations: i) this method does not scale well; ii) the network should support a broadcast
facility; iii) the large number of messages increases network traffic. Therefore this algorithm may be
useful for small networks only.
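Option (b) above, discarding the highest and lowest estimated skews before averaging, can be sketched as (illustrative Python; the function name is hypothetical):

```python
def trimmed_skew_correction(skews):
    """Discard the single highest and lowest estimated skews, then use the
    average of the remaining skews as the local clock correction."""
    if len(skews) <= 2:
        return 0.0          # nothing left to average after trimming
    trimmed = sorted(skews)[1:-1]
    return sum(trimmed) / len(trimmed)
```

A wildly faulty clock contributes an extreme skew, which the trimming removes: with skews [-40, 1, 2, 3, 50] the correction is the mean of [1, 2, 3].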
Localized Averaging Algorithm: This algorithm overcomes the scalability limitation of the global averaging
algorithm. The nodes of the system are arranged logically in a specific pattern such as a ring or grid.
Periodically, each node exchanges its local clock time with its neighbors and then sets its clock time to the
average of its own clock time and the clock times of its neighbors. The physical clock synchronization
algorithms discussed above can be implemented with the use of a time protocol.
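One round of the localized averaging step can be sketched as (illustrative Python; the function name is hypothetical):

```python
def local_average_step(own_time, neighbour_times):
    """Set the node's clock to the average of its own clock time and the
    clock times exchanged with its logical neighbours (ring or grid)."""
    times = [own_time] + list(neighbour_times)
    return sum(times) / len(times)
```

Repeated rounds across all nodes gradually pull neighbouring clocks together without any global broadcast.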

RTOS support for semaphores, queues, and events

1. Semaphore
A semaphore is a signal between tasks or interrupts that does not carry any additional data. The meaning of
the signal is implied by the semaphore object, so one semaphore is needed for each purpose. The typical
design pattern is that a task contains a main loop with an RTOS call to "take" the semaphore. If the
semaphore is not yet signaled, the RTOS blocks the task from executing further until some task or interrupt
routine "gives" the semaphore, i.e., signals it. A semaphore therefore allows resource management: a task
can block on a sem_wait() until a resource becomes available (via a sem_post()).
A binary semaphore used for mutual exclusion between tasks is called a mutex and is used to protect a
critical section. Internally it works much the same way as a binary semaphore, but it is used differently: it
is "taken" just before the critical section and "given" right after, in the same task. A mutex typically stores
the current "owner" task and may boost its scheduling priority to avoid a problem called "priority
inversion".

Semaphores are independent kernel objects, which provide a flagging mechanism that is generally used to
control access to a resource. There are broadly two types: binary semaphores (that just have two states)
and counting semaphores (that have an arbitrary number of states). Some processors support (atomic)
instructions that facilitate the easy implementation of binary semaphores. Binary semaphores may also be
viewed as counting semaphores with a count limit of 1.

Any task may attempt to obtain a semaphore in order to gain access to a resource. If the current semaphore
value is greater than 0, the attempt succeeds and the semaphore value is decremented. In many OS, it is
possible to make a blocking call to obtain a semaphore; this means that a task may be suspended until the
semaphore is released by another task. Any task may release a semaphore, which increments its value.
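The take/give pattern can be illustrated with a small sketch, with Python's standard `threading.Semaphore` standing in for an RTOS semaphore and a thread playing the role of a task (illustrative only, not RTOS code):

```python
import threading

sem = threading.Semaphore(0)   # initially not signalled
log = []

def task():
    sem.acquire()              # "take": the task blocks here until signalled
    log.append("resource granted")

t = threading.Thread(target=task)
t.start()
log.append("signalling")
sem.release()                  # "give": signal from another task (or an ISR)
t.join()
```

Because the semaphore starts at 0, the task is guaranteed to block until the release, so "signalling" is always logged before "resource granted".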
2. Queues

Queues are independent kernel objects that provide a means for tasks to transfer messages. They are a little
more flexible and complex than mailboxes. The message size depends on the implementation, but will
normally be a fixed size and word/pointer oriented.

Any task may send to a queue and this may occur repeatedly until the queue is full, after which time any
attempts to send will result in an error. The depth of the queue is generally user specified when it is created
or the system is configured. In many RTOS, it is possible to make a blocking call to send to a queue; this
means that, if the queue is full, a task may be suspended until the queue is read by another task. Any task
may read from a queue. Messages are read in the same order as they were sent – first in, first out (FIFO). If
a task tries to read from an empty queue, it will receive an error response. Again, in many RTOS, it is
possible to make a blocking call to read from a queue; this means that, if the queue is empty, a task may be
suspended until a message is sent to the queue by another task.

An RTOS will probably support the facility to send a message to the front of the queue – this is also termed
“jamming”. Some RTOS also support a “broadcast” feature. This enables a message to be sent to all the
tasks that are suspended on reading a queue. Additionally, an RTOS may support the sending and reading of
messages of variable length; this gives greater flexibility, but carries some extra overhead.

Many RTOS support another kernel object type called “pipes”. A pipe is essentially identical to a queue, but
processes byte-oriented data.
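The blocking send/read behaviour described above can be illustrated with Python's standard `queue.Queue` standing in for an RTOS queue (a sketch, not RTOS code):

```python
import queue
import threading

q = queue.Queue(maxsize=2)        # depth fixed when the queue is created
received = []

def reader_task():
    received.append(q.get())      # blocking read: suspends on an empty queue
    received.append(q.get())

t = threading.Thread(target=reader_task)
t.start()
q.put("msg1")                     # blocking send: would suspend if the queue were full
q.put("msg2")
t.join()
# messages are read first in, first out
```

The reader suspends until each message arrives, and delivery order is FIFO.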
3. Mailboxes as an event

Mailboxes are also independent kernel objects, which provide a means for tasks to transfer messages. The
message size depends on the implementation, but will normally be fixed. One to four pointer-sized items are
typical message sizes. Commonly, a pointer to some more complex data is sent via a mailbox. Some kernels
implement mailboxes so that the data is just stored in a regular variable and the kernel manages access to it.
Mailboxes may also be called “exchanges”, though this name is now uncommon.

Any task may send to a mailbox, which is then full. If a task then tries to send to a full mailbox, it will
receive an error response. In many RTOS, it is possible to make a blocking call to send to a mailbox; this
means that a task may be suspended until the mailbox is read by another task. Any task may read from a
mailbox, which renders it empty again. If a task tries to read from an empty mailbox, it will receive an error
response. In many RTOS, it is possible to make a blocking call to read from a mailbox; this means that a
task may be suspended until the mailbox is filled by another task.

Some RTOS support a “broadcast” feature. This enables a message to be sent to all the tasks that are
currently suspended on reading a specific mailbox.

Certain RTOS do not support mailboxes at all. The recommendation is to use a single-entry queue instead.
This is functionally equivalent, but carries additional memory and runtime overhead.
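The single-entry-queue substitute for a mailbox can be sketched with Python's standard `queue.Queue` (illustrative only):

```python
import queue

mailbox = queue.Queue(maxsize=1)        # a single-entry queue acts as a mailbox

mailbox.put({"reading": 42})            # send: the mailbox is now full
try:
    mailbox.put_nowait({"reading": 7})  # a second send fails, like a full mailbox
except queue.Full:
    overflow = True                     # sender receives the error response

msg = mailbox.get()                     # a read renders the mailbox empty again
```

A second non-blocking send fails exactly as a send to a full mailbox would, and a read empties the mailbox for the next message.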

Chapter 4: Embedded Systems Design Issues


4.1. Introduction
An embedded system can be thought of as a computer hardware system having software embedded in it. An
embedded system can be an independent system or it can be a part of a large system. An embedded system
is a microcontroller or microprocessor based system which is designed to perform a specific task. For
example, a fire alarm is an embedded system; it will sense only smoke.

An embedded system has three components: hardware, application software, and an operating system
(RTOS or EOS) that supervises the application software and provides mechanisms to let the processor run
processes as scheduled, following a plan to control latencies. The RTOS defines the way the system works
and sets the rules during execution of the application program; a small-scale embedded system may not
have an RTOS. Software is used for more features and flexibility, while hardware is used for performance
and security. So we can define an embedded system as a microcontroller-based, software-driven, reliable,
real-time control system. Diagrammatically an embedded system can be represented as follows:

Figure 4.1 Basic Structure of an Embedded System


Embedded systems are basically designed to regulate a physical variable (such as the temperature in an air
conditioner) or to manipulate the state of some device by sending signals to the actuators or devices
connected to the output ports (for example, a microwave oven), in response to the input signals provided by
the end users or by sensors connected to the input ports. Hence embedded systems can be viewed as
reactive systems.

The control is achieved by processing the information coming from the sensors and user interfaces and
controlling some actuators that regulate the physical variable. Keyboards, push buttons, switches, etc. are
examples of common user interface input devices, and LEDs, LCDs, piezoelectric buzzers, etc. are
examples of common user interface output devices of a typical embedded system. The required type of
user interface changes from application to application, based on the domain.

Some embedded systems do not require any manual intervention for their operation. They automatically
sense the input parameters from real world through sensors which are connected at input port. The sensor
information is passed to the processor after signal conditioning and digitization. The core of the system
performs some predefined operations on the input data with the help of the embedded firmware in the
system and sends actuating signals to the actuator connected to the output port of the system.

4.2. Memory management


Memory is an important part of an embedded system. The memory used in an embedded system can be
either program storage memory (ROM) or data memory (RAM). Certain embedded processors/controllers
contain built-in program memory and data memory, known as on-chip memory. When the on-chip memory
is not sufficient, external memory, called off-chip memory, is required.

The memory of the system is responsible for holding the code (control algorithm and other important
configuration details). Fixed memory (ROM) is used for storing code or program. The user cannot change
the firmware in this type of memory. The most common types of memories used in embedded systems for
control algorithm storage are OTP, PROM, UVEPROM, EEPROM and FLASH. An embedded system
without the code (i.e. the control algorithm) implemented in memory has all the peripherals but is not
capable of making decisions in response to situational as well as real-world changes.

Memory for implementing the code may be present on the processor or may be implemented as a separate
chip interfacing the processor. In a controller-based embedded system, the controller may contain internal
memory for storing code; such controllers are called micro-controllers with on-chip ROM, e.g. Atmel
AT89C51. (OTP here stands for One Time Programmable memory, not One Time Password.)

Memory selection for Embedded Systems:

Selection of a suitable memory is an essential step in high-performance applications, because the
challenges and limitations of system performance are often decided by the type of memory architecture.
A system's memory requirements depend primarily on the nature of the application that is planned to run
on the system.

Memory performance and capacity requirement for low cost systems are small, whereas memory throughput
can be the most critical requirement in a complex, high performance system.

Following are the factors to be considered while selecting the memory devices:

 Speed
 Data storage size and capacity
 Bus width
 Power consumption
 Cost

Memory Management in RTOS:

 The memory management function of an RTOS kernel is slightly different compared to the General
Purpose Operating Systems
 The memory allocation time increases depending on the size of the block of memory that needs to be
allocated and the state of the allocated memory block (an initialized memory block consumes more
allocation time than an uninitialized one)
 Since predictable timing and deterministic behavior are the primary focus for an RTOS, RTOS
achieves this by compromising the effectiveness of memory allocation
 RTOS generally use "block" based memory allocation techniques, instead of the usual dynamic
memory allocation techniques used by a GPOS.
 The RTOS kernel uses blocks of fixed-size dynamic memory, and a block is allocated to a task on a
need basis. The blocks are stored in a "Free Buffer Queue".
 Most of the RTOS kernels allow tasks to access any of the memory blocks without any memory
protection to achieve predictable timing and avoid the timing overheads.
 RTOS kernels assume that the whole design is proven correct and protection is unnecessary. Some
commercial RTOS kernels allow memory protection as optional and the kernel enters a fail-safe
mode when an illegal memory access occurs
 A few RTOS kernels implement Virtual Memory concept for memory allocation if the system
supports secondary memory storage (like HDD and FLASH memory).

 In "block" based memory allocation, a block of fixed memory is always allocated for tasks on a
need basis and is treated as a unit. Hence, there will not be any memory fragmentation issues.
 The memory allocation can be implemented as constant-time functions, so it consumes a fixed
amount of time for memory allocation. This leaves the deterministic behavior of the RTOS kernel
untouched.
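The block-based allocation described above can be sketched as a fixed-size pool backed by a free buffer queue (illustrative Python; a real RTOS implements this over raw memory, and the class name `BlockPool` is hypothetical):

```python
from collections import deque

class BlockPool:
    """Fixed-size block allocator: all blocks are pre-allocated and kept in a
    free buffer queue, so alloc/release run in constant time and the pool
    cannot fragment."""
    def __init__(self, block_size, block_count):
        self.free = deque(bytearray(block_size) for _ in range(block_count))
    def alloc(self):
        return self.free.popleft() if self.free else None  # None: pool exhausted
    def release(self, block):
        self.free.append(block)

pool = BlockPool(block_size=32, block_count=2)
a = pool.alloc()
b = pool.alloc()
exhausted = pool.alloc()     # the pool is empty, so allocation fails predictably
pool.release(a)              # returning a block makes it reusable at once
```

Allocation either succeeds immediately or fails immediately; there is no variable-length search, which is what preserves the deterministic timing the text describes.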

Memory organization of Processes:

The memory occupied by the process is segregated into three regions, namely Stack memory, Data
memory and Code memory.

 The "Stack" memory holds all temporary data such as variables local to the process.
 Data memory holds all global data for the process.
 The code memory contains the program code (instructions) corresponding to the process.

On loading a process into the main memory, a specific area of memory is allocated for the process
(depending on the OS kernel implementation); the stack memory usually starts at the highest memory
address of the memory area allocated for the process.

Figure 1: Memory organization of a Process

4.3. Hardware development


The main hardware components of embedded system can be processor, power source, clocking circuit,
memory, timer, serial port, parallel port, interrupt controller and others necessary devices.

A processor is the heart of the embedded system. Processors are of different categories; mostly we use a
microprocessor or a microcontroller.

Power Source: Any system requires power to operate; this power can be provided using three possible
methods:

- The system has its own power supply or, if it is part of another larger system, it uses power from
the larger system.
- Supply from a system to which the embedded system interfaces, for example in a network card.
- The charge-pump concept, used in systems with very low power needs. For example, an ATM
(smart) card is a type of embedded system with no power source of its own; the moment the card is
inserted into the ATM machine, a charge pump provides the necessary power.
Clock: The clock is used to provide synchronization, i.e., timely execution of the instructions. An
oscillator circuit is used to generate the main clock.

Memory: Various forms of system memory are present in an embedded system: internal memory of the
microcontroller, RAM on the system-on-chip or external RAM, Flash/EEPROM, internal caches of the
microprocessor, external RAM chips, and ROM/PROM.

Timer: Embedded systems often require mechanisms for counting the occurrence of events and for
performing tasks at regular intervals. An embedded system is time bounded: it counts the number of
events or the time between events, and tasks should be completed within specific time periods. Timers are
used for generating delays and for generating waveforms with specific delays; these are the main
operations performed by timers.

Serial Port: A serial port is a serial communication interface through which information transfers in or out
one bit at a time. Common serial protocols include UART, SPI, SCI and I2C.

Parallel Port: A parallel port is used for connecting peripherals. The name refers to the way the data is
sent: parallel ports send multiple bits of data at once. A parallel port requires multiple data lines in its
cables and port connectors and tends to be larger than a contemporary serial port.

The basic structure and the major functional interfacing hardware of real-time and
embedded systems include the following components:

1. Sensor – sensors, also known as embedded sensors, are parts of a computer device/system placed inside
a device or electronic instrument; they interact with the surrounding environment and produce analog
signals.
2. A-D converter – an analog-to-digital converter that converts the analog signals sent by the sensor into
digital signals.
3. Processor and ASICs – the processor processes the data to compute the required result and stores it in
memory.
4. D-A converter – a digital-to-analog converter that converts the digital signals fed by the processor back
into analog signals.
5. Actuator – an actuator compares the result given by the D-A converter with the actual or expected
result stored in it and produces the approved output.

Application Specific Integrated Circuit (ASIC): the ASIC is another basic part of real-time and embedded
system hardware, just as the motherboard is for a general-purpose computer.

ASIC is a microchip designed to perform a specific or unique application. It is used as a replacement for


conventional general purpose logic chips. It integrates several functions into a single chip and thereby
reduces the system development cost. Most of the ASICs are proprietary products. As a single chip, ASIC
consumes very small area in the total system and thereby helps in the design of smaller systems with high
capabilities/functionalities. ASICs can be pre-fabricated for a special application or it can be custom
fabricated by using the components from a re-usable “building block” library of components for a
particular customer application.

Figure 4.2: Typical ASIC of a Real-time and Embedded system


Fabrication of ASICs requires a non-refundable initial investment (Non-Recurring Engineering (NRE)
charges) for the process technology and configuration expenses. If the NRE charges are borne by a third
party and the Application Specific Integrated Circuit (ASIC) is made openly available in the market, the
ASIC is referred to as an Application Specific Standard Product (ASSP). The ASSP is marketed to
multiple customers just like a general-purpose product, but to a smaller number of customers since it
serves a specific application.

Programmable Logic Devices (PLDs)

Logic devices provide specific functions, including device-to-device interfacing, data communication,
signal processing, data display, timing and control operations, and almost every other function a system
must perform. They can be classified into two broad categories - fixed and programmable. The circuits in
a fixed logic device are permanent; they perform one function or set of functions, and once manufactured
they cannot be changed.

Programmable logic devices (PLDs) offer customers a wide range of logic capacity, features, speed, and
voltage characteristics - and these devices can be re-configured to perform any number of functions at any
time. Designers can use inexpensive software tools to quickly develop, simulate, and test their logic designs
in PLD based design. The design can be quickly programmed into a device, and immediately tested in a live
circuit. PLDs are based on re-writable memory technology and the device is reprogrammed to change the
design. Field Programmable Gate Arrays (FPGAs) and Complex Programmable Logic Devices (CPLDs) are
the two major types of programmable logic devices. (Read for detail of them from any sources)

4.4. Embedded Software development

Embedded software is a type of software that is permanently installed in a hardware or non-PC device and
is uniquely written and coded for that particular hardware. The main objective of the software is to lay out
the course of action for the operation of the device. It is usually found in GPS devices, smart watches,
factory robots, calculators, and other such devices. Nowadays most self-working technologies contain
embedded software, which is practically implemented in industries like banking, telecom, consumer
electronics, automotive, etc.; this is why it is gaining so much popularity.
Moreover, embedded software must be immune to changes in its operating environment – processors,
sensors, and hardware components may change over time. Other challenging requirements for embedded
software are portability and autonomy.

Embedded software is classified on two bases: based on performance and functional requirements, and
based on the performance of the microcontroller.

Depending on the complexity of device, all embedded solutions use microcontrollers, microprocessors, or
other special-purpose processing units like digital signal processors (DSP) at the heart of their dedicated
hardware.

Challenges of Embedded Software Development

Embedded systems are found in almost every industry, such as aerospace, automotive, consumer
electronics, banking, security, etc. They are well known for their accuracy, reliability, and performance
speed with low power consumption, and can be used for a wide variety of applications.

Many manufacturers use embedded software for consumer products such as telephones, robots, modems,
cars, toys, security systems, appliances, televisions, digital watches, etc. Embedded Software helps the
hardware in a device to perform as per our needs and requirements. Embedded software is also known as
firmware that resides in the hardware to create a single embedded system. It helps to perform mission-
critical applications such as electronic control units, industrial automation equipment, and anti-lock brakes
in cars, etc. However, Embedded Software also poses some challenges that development organizations need
to deal with. Let us discuss some of the challenges of Embedded Software Development.

1. Stability – Any unexpected behavior from an embedded system creates havoc and poses serious risks.
The system should behave uniformly under all circumstances, so stability is of enormous importance.
2. Safety – Safety is the most vital challenge of embedded systems, as they perform many critical and
lifesaving functions in all kinds of critical environments. It is characterized by limitations and strict
requirements in terms of testing, quality, and engineering expertise.
3. Connectivity – This is one of the vital challenges embedded software developers face, as there are so
many ways to connect to the internet: developers can connect through Ethernet, Wi-Fi, LoRa, cellular,
a Bluetooth bridge, and other means. Each source has its own merits and demerits, along with a
different software stack that developers must understand to make sure the hardware works.
4. Security – It is a challenge for developers to secure their devices against security threats that are
evolving in complexity. As IoT devices gain popularity globally, the related risks grow
exponentially because the devices are interconnected. There are many opportunities for hacking
attacks, so what information needs to be protected should be clearly identified.
5. Over-the-air updates – Once a device is connected to the internet, developers can update its
firmware remotely. With the help of IoT, software updates can even be initiated by customers.
However, in any deployment that includes several thousand devices, the developers have to:
 Generate a firmware update
 Deliver it to all the devices
 Ensure and validate that updates come from a trusted source

73 | Lecture Handout: (RTES) Chapter 4 – Organized by Ayele S.
Hawassa University - IOT Faculty of Informatics Department of Computer Science
 Run that particular update on the devices at the appropriate time
 Always be ready to roll back updates immediately if there is an issue.
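The fleet-update steps above can be sketched in Python. This is a minimal, hypothetical illustration (the signing key, the `Device` class, and the firmware images are invented for the example; real deployments use asymmetric code signing and a secure transport rather than a shared HMAC key):

```python
import hashlib
import hmac

# Hypothetical shared key; real fleets use asymmetric code-signing instead.
SIGNING_KEY = b"device-fleet-key"

def sign_firmware(image: bytes) -> str:
    """Vendor side: generate a signature for a firmware image."""
    return hmac.new(SIGNING_KEY, image, hashlib.sha256).hexdigest()

class Device:
    def __init__(self, name: str, firmware: bytes):
        self.name = name
        self.firmware = firmware      # currently running image
        self.previous = None          # kept so we can roll back

    def apply_update(self, image: bytes, signature: str) -> bool:
        """Validate that the image came from a trusted source, then install it."""
        expected = hmac.new(SIGNING_KEY, image, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, signature):
            return False              # reject an untrusted image, keep old firmware
        self.previous, self.firmware = self.firmware, image
        return True

    def rollback(self):
        """Restore the prior image if the new one misbehaves."""
        if self.previous is not None:
            self.firmware, self.previous = self.previous, None

# Fleet-wide update: generate and sign once, deliver and validate on every device.
fleet = [Device(f"dev{i}", b"v1") for i in range(3)]
update = b"v2"
sig = sign_firmware(update)
for dev in fleet:
    dev.apply_update(update, sig)
```

Note the use of `hmac.compare_digest` for the signature check: it compares in constant time, which avoids leaking information through timing differences.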
Avench (avench.com), an embedded system software developer, helps in developing embedded
software and systems for various domains and industries. (This study material is partially adapted
from their website.)

6. Debugging: Complexity increases as more and more teams build connected devices. Embedded
surveys have shown that developers spend, on average, 40% of their time debugging, so debugging
costs a great deal of time and money on every embedded project. For reasons like these, developers
must understand all the debugging techniques available and the ways to prevent bugs in the first place.
7. Pace of change: The change that has taken place in recent years is remarkable. The last 5 years
have witnessed fast development of emerging technologies, most notably artificial intelligence. The
challenge for embedded developers is that the available technology is changing at a very fast pace,
faster than they can even absorb knowledge about it.
8. Design limitations: The real challenge comes when designers must fit more processing power and
much longer battery life into small spaces. Much also depends on the application of the IoT device.
We see a growing demand for inexpensive, low-power, configurable processors with highly
compatible instruction sets, and meeting it is the greatest design limitation. There is similar demand
for increased performance of system buses and internal/external memory caches.
Languages used for embedded software development

These permanent built-in programs are very different from desktop computer programs and require a wide
range of tools and systems for programming the software and operating the system. Some of the languages
used in these embedded systems are as follows:

C and C++ – these are traditionally popular languages used to develop embedded software. They have
low-level access to memory, which makes them quite suitable. In addition, they are relatively fast and
consume little memory.

Java – Java is used to write embedded applications that are portable and extensible. These applications
can be ported to different platforms because of Java's write-once-run-anywhere (WORA) property, and
they are compatible with different types of hardware.

Python – Python is a popular language, chosen for its writability, concise and readable coding style,
and error reduction. Python can gather, store, and analyze tons of data from real-time embedded
systems.

Rust – Its high performance, multiple safety features, typestate programming style, and zero-cost
abstractions make it ideal for embedded systems programming.

There are many other languages, such as Ada, Embedded C++, Verilog, Lua, etc., which are quite suitable
for writing embedded system software.

Era of Embedded Software

Despite the challenges and issues developers face in the transition from standalone to connected devices,
they manage to find efficient solutions on the go.

Along with variability, configurability, extendibility, and changeability, the development of embedded
solutions brings further functionality into smart devices, which is now taken for granted. Their potential
can be enhanced further by the 5G roll-out, which brings the real-time insight and robust communication
infrastructure crucial for embedded systems.

Soon we can expect embedded software to dwell everywhere, from smart watches to smart houses to
ubiquitous control systems, with the total market volume climbing to 116.2 billion USD by 2025. In
parallel, modern technologies are increasing the efficiency of power sources.

Embedded software is becoming more and more independent of the hardware it resides in. It will no
longer be defined by hardware limitations, becoming capable of performing any function needed to
achieve its objectives on any programmable logic device, whether a microcontroller, a microprocessor, a
signal processor, a neural network, or a biological assembly.

Embedded software is here to stay, contributing to a growing quality of life and opening new
opportunities for improving living standards across the planet.

Infopulse (an embedded systems developer) provides full-cycle embedded software development services
and outsourcing, including firmware development, embedded software and hardware testing, and integration
with third-party systems. Their embedded software engineers offer in-depth knowledge of C/C++ and other
low-level programming languages to deliver best-in-class embedded solutions for companies from the
Automotive, Hi-Tech, Manufacturing, Telecom, and other industries. (This study material on software
development was partially obtained from their website.)

Tip: Further Reading Material:

Embedded system development tools:

As we have discussed, an embedded system is a combination of computer hardware and software designed
for specific functions within a larger system. All embedded systems need software in order to perform their
specific functions; a microcontroller, for example, contains software for handling its operations. To develop
this software, a number of different tools are needed. These tools include the editor, compiler, assembler,
linker, debugger, and simulator.

Editor: The editor is the first tool you will need in order to write and edit the source code of an embedded
application. The code is written in a programming language, typically C, C++, or assembly language.

Compiler / Assembler: Once you are done with your source code, you need to convert it into the machine
language that the microcontroller or microprocessor will execute. The difference between a compiler and an
assembler is that the assembler translates assembly language into machine language, while the compiler
translates a high-level language such as C or C++ into machine language.

Linker: A linker is a computer program that combines one or more object code files and libraries into one
single final program. It is important to break a large program into small parts and functions in order to make
coding easier, and in the end these small parts need to be combined into one final program, which requires a
linker.

Libraries: To develop a program in C, C++, or another language, we need some predefined variables and
functions. These functions and variables are divided into related groups called header files, and a library is a
collection of such header files. For example, to control the Raspberry Pi's GPIO (general-purpose
input/output) pins from Python, you need the RPi.GPIO library, so before writing the main program you
must import that module.

Debugger: A debugger, as the name suggests, is a tool used for debugging your code. It is a software
program used to test your code and find bugs and errors, and it also detects the location where an error
occurs, so you can easily make corrections. This shows how important the debugger is in embedded
software development.

Simulator: The simulator lets you see how the code you created actually works: you can run your code for a
target microprocessor or microcontroller and observe how sensors and actuators interact with it. An example
of a simulator is Proteus, which is used for simulating microcontroller- and microprocessor-based projects.

Examples of embedded system development tools:

IDE (integrated development environment): An IDE is software that contains all the necessary tools
required for embedded software development. An IDE normally contains an editor, a compiler, and a
debugger, and it also provides a user interface; the choice depends on what kind of microcontroller or
microprocessor you use. There are many software development tools and IDEs to choose from. For
example, MPLAB is an IDE from Microchip Technology that allows you to discover, configure, develop,
debug, and qualify system software for most of Microchip's microcontrollers. There is also Keil for ARM,
an IDE for a wide range of ARM Cortex-based microcontroller devices; it includes a compiler, assembler,
linker, debugger, and simulator, and is one of the most important tools for embedded software developers.
MATLAB also comes in very handy for embedded systems: it is one of the best tools for embedded
engineers to develop and test their designs and make sure a design works as expected. MATLAB provides
Simulink for model-based design, and Embedded Coder in Simulink helps you generate well-optimized
code for embedded controllers, reducing the time needed to write the software manually; you also have the
option to run test cases on hardware and software at runtime and obtain real-time results.

Chapter 5: Real-Time Communication


5.1. Basic Concepts and Examples of Real-Time Communication
Real-time communication is any online communication that happens in real time. Data is sent directly
and instantly from the sender to the receiver and is not stored en route to the destination. The telephone is
one classic example of real-time communication. This is in contrast to time-shifted communication,
where we send data and typically wait a period for it to be received, as well as another period before we
get a reply. Postal ("snail") mail and e-mail are perfect examples of this type of communication.

In today's "instant" economy, real-time communication is more critical than ever. This technology has
single-handedly transformed our world into a global society. We can now communicate and exchange
information rapidly, over great distances, like never before. We can do it as if the other person is in the
same room as us.

In the world of web applications, everyone loves the scenario where one app requests an action and
another app responds to that request afterward. This basic yet efficient pattern is, in most cases,
everything that your web applications need to function properly. However, what happens if one app
cannot wait for another app to finish its action and then send a response? In instant messaging and video
conference calls, for example, both apps need to be perfectly synchronized with one another so that they
can communicate in real-time. This is where real-time communication web technologies come into the
spotlight.

Real-time communication (RTC) web technology is a term used to refer to any web technology that
enables live communication between two or more peers over the Internet without transmission delays.
Lately, the leader in this field has been a modern, promising, and powerful technology called WebRTC.

Real-time communication is often characterized as an inelastic application: it is important that messages
arrive in a timely manner. Timeliness may be more important than reliability, and messages may have
priorities; packet voice and telephony applications are examples.

Real-Time Communication Examples: There are plenty of examples of real-time communication in
modern society, such as:

 The most obvious example is when we call and talk with each other through mobile phones. It
also includes Voice-Over-IP (VoIP) applications that route calls through the Internet instead of a
telephone network, such as Skype.
 Real-time messaging definitions also classify messaging applications like Viber and WhatsApp as
real-time communication. Whenever we send a message through these apps, it is sent directly to the
recipient, and we can freely reply to each other in real time.
 Beyond talking with each other, real-time communication can also be used to exchange data files.
It's often done through peer-to-peer networks like BitTorrent.
 Other notable applications of real-time communication include screen sharing, social media, and
live video streaming platforms such as Facebook Messenger.
Importance of Real-Time Communication: Real-time communication is a necessary part of our
fast-paced modern society. Humans are social creatures built to talk to each other face-to-face.
Real-time communication tools enable us to do this, or something close to it, even when miles separate us.

The advent of real-time communication allows us to exchange critical information instantly. It has
improved the level of service people get in their everyday lives. Customer complaints, for instance, can
be sent to the company and resolved in a matter of hours, not days. Real-time communication can also
save lives during emergencies by coordinating resources where they are needed the most.

Today, real-time communication is a critical component of almost every business and industry on Earth,
from the local hospital to online education.

Businesses also have been reaping the benefits of real-time communication for years. It enables them to
give faster and better service to their customers.

Real-time communications tools have enabled companies to function much more efficiently. Employees
can now work anywhere in the world, thanks to video conferencing and collaboration tools like Slack
and Skype. It saves everyone from having to commute to and from work, increasing company
productivity.

Even in traditional businesses with physical offices, real-time messaging allows teams to coordinate and
get things done faster.

Real-time communication is also at the heart of some of the more innovative services to date. Platforms
like Uber directly connect commuters and drivers on the road, in real-time, for their mutual benefit.

The Future of Real-Time Communication: Real-time communication definitions are blurring all
the time. There are lots of trends that will heavily influence what this technology will be like in the near
future.

One of these trends is the use of WebRTC, or Web-based Real-Time Communications. It enables
HTML5-enabled browsers to make use of real-time communication technology without the need for a
third-party plug-in. With the more widespread use of real-time communication on mobile phones, there
is concern about it straining most networks. The focus now is on how to scale these networks
efficiently and make them more secure.

There is also the increased growth and development of Over The Top (OTT) services. For example,
platforms like Netflix enable services to be offered directly to consumers via the Internet, bypassing
distribution platforms like cable and phone networks.

As a leader in real-time voice and video chat and streaming, Agora is at the forefront of adapting this
technology to seamlessly integrate with the most important aspects of our technological lives. For
example, Live Classroom Streaming makes real-time engagement a powerful tool for the future of
education. Agora's Interactive Streaming offers innovative business uses as well, allowing retailers to
connect with and engage their customers in new and meaningful ways. When it comes to Agora and the
future of real-time engagement, the possibilities are endless and will only improve over time and impact
our lives for the better. At Agora, we aim to spearhead this trend with our innovative video chat API,
voice API, audio streaming products, and more. [N.B: Read from online for more detail about Agora, for
your deep knowledge]

Tip:- WebRTC is a free, open-source technology that provides browsers and mobile applications with
real-time communication (RTC) capabilities through simple application programming interfaces (APIs).
It allows direct peer-to-peer audio and video communication, eliminating the need to install any
additional plugins or native apps. It was created by Google in May 2011 as an open-source technology
for browser-based real-time communication. Currently, WebRTC is fully stable, its last release (v1.0)
being made in May 2018. It’s standardized through the World Wide Web Consortium (W3C) and the
Internet Engineering Task Force (IETF) and is supported by Apple, Google, Microsoft, Mozilla, and
Opera. It is powerful real-time communication technology for modern web and mobile apps.

5.2. Bounded Access protocol for LAN

Two popular examples of bounded access protocols are: i. IEEE 802.4 and ii. RETHER.
i. IEEE 802.4:
It is a protocol that can be used in both token-ring and token-bus networks. Token ring and token bus
have special advantages in manufacturing situations and have been in use in manufacturing automation
for quite some time.

IEEE 802.4 is also known as the timed token protocol: a node can transmit only when it holds the token.
The token visits every node in turn, and both when a node may start transmitting and the duration for
which it may transmit are bounded. It is not necessary that every node transmits for the same amount of
time; depending on the traffic at a node, the time it is permitted to transmit can vary. However, since the
maximum transmission time of each node is known, we can compute the maximum delay a node
experiences before it can transmit, and on that basis provide real-time guarantees. In particular, we can
compute the maximum priority inversion time, which is very important in real-time applications because
the system can be designed around it.

One of the important parameters of the IEEE 802.4 protocol is the Target Token Rotation Time (TTRT),
the expected time between two consecutive visits of the token to a node. It is not the exact time between
two consecutive visits; the actual time can be less than or more than this value.

TTRT is an important design parameter, initialized during network setup. Real-time messages are
assumed to be periodic and are known as synchronous messages, while non-real-time messages are
called asynchronous messages. If a node has no synchronous message to send, it transmits asynchronous
messages.

Figure 5.1: TTRT simulation

When a node receives the token, it first transmits its real-time (synchronous) messages; after completing
the synchronous traffic, the node may transmit asynchronous messages. This is possible only if the token
arrived at the node early.

For example, let the synchronous bandwidth allocated to node Ni be Hi, and let θ be the token propagation
time; then, for n nodes, the allocations must satisfy H1 + H2 + … + Hn + θ ≤ TTRT.

Asynchronous overrun: the time for which non-real-time messages are transmitted; we have to account
for it. Due to asynchronous overrun, the worst-case time between two successive visits of the token to a
node is 2 × TTRT. When only synchronous traffic is transmitted, the time between consecutive token
arrivals is limited by TTRT.

Assume that node Ni has the message with the shortest deadline Δ. Then TTRT should be set to a value
lower than Δ/2 during network initialization. Why? Because in the worst case the token can arrive at a
node 2 × TTRT after its previous visit; with TTRT < Δ/2, the token is guaranteed to arrive within Δ, so
the node can start transmitting before the deadline expires.
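The two design rules above — the synchronous allocations plus the token walk time must fit within one TTRT, and TTRT must stay below Δ/2 so that a worst-case 2 × TTRT token arrival still meets the shortest deadline — can be checked with a small sketch. The node allocations and timing values below are hypothetical, chosen only to illustrate the arithmetic:

```python
def ttrt_is_feasible(sync_bandwidths, theta, ttrt, shortest_deadline):
    """Check the two timed-token design constraints discussed above:
    1. All synchronous allocations plus the token walk time theta
       fit into one target rotation (H1 + ... + Hn + theta <= TTRT).
    2. The worst-case token arrival time (2 * TTRT, caused by
       asynchronous overrun) still beats the shortest deadline."""
    fits_rotation = sum(sync_bandwidths) + theta <= ttrt
    meets_deadline = 2 * ttrt < shortest_deadline
    return fits_rotation and meets_deadline

# Hypothetical network: four nodes, all times in milliseconds.
H = [4, 6, 3, 2]   # synchronous bandwidth allocated to each node
theta = 1          # token propagation (walk) time
delta = 40         # shortest message deadline on the network

print(ttrt_is_feasible(H, theta, ttrt=18, shortest_deadline=delta))  # 16 <= 18 and 36 < 40
print(ttrt_is_feasible(H, theta, ttrt=25, shortest_deadline=delta))  # 2*25 = 50 misses delta
print(ttrt_is_feasible(H, theta, ttrt=10, shortest_deadline=delta))  # allocations do not fit
```

With TTRT = 18 ms both rules hold; raising TTRT to 25 ms violates the Δ/2 rule, and shrinking it to 10 ms leaves no room for the synchronous allocations.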

ii. RETHER:
RETHER stands for Real-time Ethernet; it switches between Ethernet and a token-passing protocol to
provide timing guarantees to real-time applications. For non-real-time messages, plain Ethernet is
preferable, since it leads to higher channel utilization.

In RETHER, transmissions occur in two modes: CSMA/CD and RETHER. The protocol switches
seamlessly to RETHER mode for real-time messages and switches back to CSMA/CD when all real-time
sessions terminate. When a real-time message request arrives at a node while the network is not in
RETHER mode, a switch-to-RETHER message is broadcast; all nodes that receive the message switch to
RETHER mode and acknowledge the sender. A node that is currently transmitting waits for its ongoing
transmission to end and then sends its acknowledgment. After receiving acknowledgments from all
nodes, the initiating node creates a token and circulates it.

RETHER uses a timed-token scheme; at any time, only one real-time request is allowed per node. Each
real-time request must specify its required transmission bandwidth, based on the amount of data it needs
to send.
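The mode-switching behavior described above can be modeled as a tiny state machine. This is a toy sketch (the class and method names are invented for illustration; real RETHER also handles token circulation, per-node bandwidth reservation, and the acknowledgment handshake):

```python
class RetherNetwork:
    """Toy model of RETHER mode switching: the network runs plain
    CSMA/CD until the first real-time session starts, stays in RETHER
    mode while any session is open, and falls back to CSMA/CD once
    the last real-time session terminates."""

    def __init__(self, nodes):
        self.nodes = list(nodes)
        self.mode = "CSMA/CD"       # default Ethernet mode
        self.sessions = set()       # nodes with an open real-time session

    def open_session(self, node):
        # In the real protocol a switch-to-RETHER message is broadcast,
        # all nodes acknowledge, and the initiator circulates a token.
        if not self.sessions:
            self.mode = "RETHER"
        self.sessions.add(node)

    def close_session(self, node):
        self.sessions.discard(node)
        if not self.sessions:       # last real-time session ended
            self.mode = "CSMA/CD"

net = RetherNetwork(["A", "B", "C"])
net.open_session("A")    # network switches to RETHER mode
net.close_session("A")   # last session ends: back to CSMA/CD
```

The point of the model is the asymmetry: any session opening forces RETHER mode, but only the closing of the final session releases the network back to CSMA/CD.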

5.3. Real-time Communication Over the Internet
Support for enabling real-time communication (RTC) in the web is currently gaining momentum with the
two main Internet standardization bodies: the IETF and the W3C. Standardization activities in this area
aim to define a W3C API that enables a Web application running on any device, through secure access to
input peripherals (such as webcams and microphones), to send and receive real-time media and data in a
peer-to-peer (P2P) fashion between browsers. The API's design must allow Web developers to
implement functionality for finding and connecting participants in a communication session. The W3C
API will rely on existing protocols the IETF community has identified as the most appropriate for
addressing network-related aspects (control protocols, connection establishment and management,
connectionless transport, selection of the most suitable encoders and decoders, and so on).

However, no clean separation exists between the two standardization activities, which clearly intersect at
the interface between the application level responsibilities residing in a single node and the
intercommunication activities among remote nodes.

This migration toward browser-enabled RTC represents a major breakthrough and has motivated much
recent work by industry and academic researchers. Here, we discuss the growing interest in integrating
interactive multimedia features into Web applications.

RTCWeb has focused on the protocols and interactions that the IETF must address, including
interoperability with legacy systems (such as existing telecommunications systems).

WebRTC is working to define an API allowing browsers and scripting languages to interact with media
devices (microphones, webcams, and speakers), processing devices (encoders/decoders), and
transmission functions. Such efforts will likely expand and enhance the HTML5 specification, which
already provides a standard way to stream multimedia content from servers to browsers.

Both working groups will have to consider any security issues that arise from the features to be
addressed. We expect (and hope to see) several prototype implementations during these working groups'
lifetime.

Note that: - HTML5 is generally used as an umbrella term for the advances taking place on the so-called
Open Web Platform, although HTML is itself just one part of the various features used for developing
Web applications and commonly referred to as part of that platform. The complete set of features also

includes Cascading Style Sheets (CSS), the Document Object Model (DOM) convention, JavaScript, and
several scripting APIs.

Figure 5.2: The RTCWeb architecture. This typical voice-over-IP communication trapezoid has a server-
mediated signaling path and a direct (browser-to-browser) media path.

5.4. Internet of Things(IoT)


The Internet of Things (IoT) is the ability to have devices communicate with one another via the Internet
or other networks, remotely tracking information to provide feedback that assists decision making for
commercial, industrial, and residential purposes. This is commonly done using sensors connected to a
back-to-base system.

Some common day-to-day examples include:

 Monitoring temperatures in refrigeration or food-heating units in the food and beverage industry.
 Assisting with the control of temperature and humidity levels.
 Detecting gas and dust levels.
 Monitoring water levels and herd locations for agricultural purposes.
 Various applications in the automotive, aviation, and nautical sectors, such as sensing tire
pressures for trucking fleets.

The scope of IoT is not limited to just connecting things (devices, appliances, and machines) to the
Internet; it allows these things to communicate and exchange data (control and information) in real time.
Processing this data provides various applications directed toward a common user or machine goal.

Generally, the following diagram explains IoT.

Examples of high-quality, cost-effective IoT components: printed circuit boards (PCBs) are a core
component of many IoT assemblies. CNS Precision Assembly leans on its vastly experienced and
qualified personnel to provide high-quality products on time, every time.

5.5. Sensors and Actuators


Sensors and actuators are often found in the same areas of equipment and systems within an industrial
setting. Although they often interact, they are two different components. They frequently complement
each other and work together to ensure that various assets and systems are functioning effectively. They
both play important roles in condition-based maintenance.

A sensor monitors environmental conditions such as fluid levels, temperature, vibration, or voltage.
When these environmental conditions change, the change produces an electrical signal in the sensor,
which can then send the data or an alert back to a centralized computer system or adjust the functioning
of a particular piece of equipment. For example, if a motor reaches an overheating temperature, it can
automatically be shut off.
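The monitoring pattern just described — a reading leaves its allowed range, an alert goes back to a central system, and the equipment protects itself — can be sketched as follows. All class names, the alert log standing in for the central system, and the threshold values are hypothetical:

```python
ALERT_LOG = []  # stands in for the centralized computer system (e.g. a CMMS)

class TemperatureSensor:
    """Reads a temperature and reports whether it is out of range."""
    def __init__(self, limit_c: float):
        self.limit_c = limit_c

    def out_of_range(self, reading_c: float) -> bool:
        return reading_c > self.limit_c

class Motor:
    def __init__(self):
        self.running = True

    def shut_off(self):
        self.running = False

def monitor(sensor: TemperatureSensor, motor: Motor, readings):
    """For each reading, log an alert and shut the motor off on overheat."""
    for reading in readings:
        if sensor.out_of_range(reading):
            ALERT_LOG.append(f"overheat: {reading:.1f} C")
            motor.shut_off()

# Hypothetical scenario: the third reading exceeds the 90 C limit.
sensor = TemperatureSensor(limit_c=90.0)
motor = Motor()
monitor(sensor, motor, [65.0, 78.5, 94.2])
```

After the run, the motor has stopped and one alert sits in the log, mirroring the handout's example of a motor shutting itself off at its overheating point.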

An actuator, on the other hand, causes movement. It takes an electrical signal and combines it with an
energy source to create physical motion. An actuator may be pneumatic, hydraulic, electric, thermal, or
magnetic. For example, an electrical pulse may drive the functioning of a motor within an asset.

Main Differences between Sensors and Actuators:

 Inputs and Outputs: Sensors and actuators track different signals, operate through different
means, and must work together to complete a task. They are also physically located in different
areas and often used in separate applications. Sensors look at the inputs from the environment,
which trigger a particular action. On the other hand, actuators track outputs of systems and
machines.
 Electrical Signaling: Sensors work through electrical signaling to read the specified
environmental condition and perform the assigned task. However, actuators measure heat or
motion energy to determine the resulting action.
 Reliance: Sensors and actuators can actually rely on each other to perform a particular task. If
both are present, an actuator relies on a sensor to do its job. If one or both are failing to work
properly, the system will not be functional.
 Conversion Direction: A sensor tends to convert a physical attribute to an electrical signal. An
actuator does the opposite: it changes an electrical signal to physical action.
 Location: If both a sensor and actuator are present, the first is located at the input port, while the
latter resides at the output port.
 Application: Sensors are often used to measure asset temperature, vibration, pressure or fluid
levels. Industrial applications of actuators include operating dampers, valves, and couplings.
i. Different Types of Actuators:

1. Manual actuators: These actuators require an employee to control gears, levers or wheels.
Although they are inexpensive and simple to use, they have limited applicability.
2. Pneumatic actuators: These actuators use gas pressure to power valves. The pressure pushes a
piston to affect the valve stem.
3. Hydraulic actuators: These actuators use fluid to generate pressure. Instead of using gas pressure,
hydraulic actuators use fluid pressure to operate valves.
4. Electric actuators: Electric actuators employ an electric motor to operate a valve. Although these
actuators are quiet and efficient, they require batteries or electricity, which may not always be
available in particular locations.
5. Spring actuators: These actuators hold a spring back until a trigger occurs. Once a particular
threshold is reached, the spring releases and operates the valve. These are typically used in one-
time emergency applications.
ii. Different Types of Sensors:

1. Temperature sensors: These sensors are frequently used in the foodservice industry to prevent
spoilage. When equipment falls out of range, an alert can be sent to a computerized maintenance
management system (CMMS).
2. Vibration sensors: Vibration sensors help measure vibration levels on sensitive assets and are
often used on rotating machinery.
3. Security sensors: Security sensors can help protect both employees within a facility or track
expensive tools and equipment.
4. Pressure sensors: Pressure sensors can alter the performance of an asset when pressure is too high
or low or send an alert if pressure variations can indicate a potential failure.
5. Humidity sensors: Humidity sensors are often used to control tiny amounts of moisture that can
affect extremely sensitive electronic equipment.
6. Gas sensors: Gas sensors have multiple applications across many industries and can alert you
when gas levels are too high or low.
How Sensors and Actuators Work Together

Actuators and sensors often work together in maintenance applications. For example, let's look at a
typical furnace as an illustration.

A gas shutoff valve connects to a thermocouple in a gas furnace. When the pilot light is operating
properly, the thermocouple generates a current that keeps the valve open. However, if the pilot light goes
out, the current stops, which closes the valve. This prevents an accumulation of gas and reduces the
possibility of an explosion. In this application, the thermocouple is the sensor; it generates the energy
and the signal, both of which are sent to the shutoff valve, the actuator in this system.

Many more complex systems may utilize multiple actuators and sensors to perform complicated tasks.
However, the basic relationship is the same: the two work together. Either the sensor sends the signal and
the actuator performs the action, or an actuator movement triggers a sensor to send an alert.
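The furnace example can be sketched as a sensor-actuator pair: the thermocouple converts a physical attribute (the pilot flame's heat) into an electrical signal, and the valve converts that signal back into physical action. The class names and current values below are invented purely for illustration:

```python
class Thermocouple:
    """Sensor: converts the pilot flame's heat into an electrical current."""
    def __init__(self):
        self.pilot_lit = True

    def current(self) -> float:
        # Hypothetical holding current (amps) while the pilot flame burns.
        return 0.03 if self.pilot_lit else 0.0

class GasValve:
    """Actuator: stays open only while it receives enough holding current."""
    HOLD_CURRENT = 0.01  # amps, hypothetical threshold

    def __init__(self):
        self.open = True

    def update(self, current: float):
        self.open = current >= self.HOLD_CURRENT

thermo = Thermocouple()
valve = GasValve()
valve.update(thermo.current())   # pilot lit: valve stays open
thermo.pilot_lit = False
valve.update(thermo.current())   # flame out: current stops, valve closes
```

The safety property falls out of the coupling itself: the valve cannot stay open without the sensor's signal, so losing the flame fails safe by cutting off the gas.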
