
Unit – 1

1 What is an embedded system? Explain the different applications of embedded systems.


Embedded systems are at the heart of many different products, machines and intelligent operations, such as machine
learning and artificial intelligence applications. As embedded systems applications appear in every industry and sector today,
embedded devices and software play a crucial role in the functioning of cars, home appliances, medical devices, interactive
kiosks, and other equipment we use in our daily lives.
Common embedded systems can be broken into four types based on performance as well as functional requirements:
• Real-time
• Stand-alone
• Networked
• Mobile
Real-Time
Real-time embedded systems are designed and installed to carry out specific tasks within a pre-defined time limit. They are
further divided into two different types:
• Soft Real-Time Embedded Systems: Completing the task is what matters most; an occasionally missed deadline
degrades the quality of service but is tolerable.
• Hard Real-Time Embedded Systems: Deadlines are paramount and must never be missed; a single missed deadline
constitutes a system failure.
Some of the real-time embedded systems examples are:

• Sound System of a computer (Soft real-time system)


• Aircraft control system (Hard real-time system)
Stand-alone
These are self-sufficient systems that do not rely on a host system, such as a computer, to perform their tasks. Here are
some standalone embedded technology examples:

• Microwave ovens
• Washing machines
• Video game consoles

Networked
These systems are connected to a wired or wireless network to perform assigned tasks and provide output to the connected
devices. They comprise components such as controllers and sensors. Here are some networked embedded system
examples:
• ATMs
• Home security systems
• Card swipe machines
Mobile
These systems are smaller in size and easy to use. Though they come with limited memory, people still prefer them due to
their portability and handiness. Here are a few mobile embedded control systems examples:
• Digital cameras
• Mobile phones
• Smartwatches
• Fitness trackers
Embedded Systems Application
Here are some of the real-life examples of embedded system applications.
1. Central Heating Systems
Central heating systems convert the chemical energy into thermal energy in a furnace room and transfer that energy into
heat, which is then delivered to numerous spaces within a building. It is important for these systems to have thermostat
controls to adjust the temperature, which is achieved by an embedded system.
If a central heating system isn’t provided with temperature controls, it can lead to overheating one room while leaving
another room cold. The right thermostat controls will allow you to adjust the temperature to a comfortable level and save
energy extensively.
Embedded system examples in central heating can be found in a range of structures that require temperature control, both
for comfort and for management of temperature-sensitive goods.
Examples include:

• Office buildings
• Factories
• Grocery stores
• Homes
• Schools
• Hospitals
2. GPS Systems
The GPS is a navigation system that uses satellites and receivers to synchronize data related to location, time, and velocity.
The receiver or device that receives the data has an integrated embedded system to facilitate the application of a global
positioning system. The embedded GPS devices allow people to find their current locations and destinations easily. Thus,
they are gaining rapid momentum and becoming the most widely used navigation tools for automobiles.
Nowadays, GPS systems are generally used in:
• Cars
• Mobile devices
• Palmtop
3. Fitness Trackers
Fitness trackers are wearable devices that can monitor your health and track activities like sleeping, running, and walking.
These devices use embedded systems to gather data related to your heart rate, body temperature, and number of
footsteps, which is then sent to servers over wide-area networks (WANs) such as LTE or GPRS.
Fitness trackers are generally used for:
• Monitoring personal activity
• Medical monitoring
• Sports training
4. Medical Devices
Medical devices in healthcare facilities have incorporated embedded systems for quite some time. A new class of
medical devices uses embedded systems to help treat patients who need frequent monitoring and constant attention at
home. These systems are embedded with sensors to gather data related to patients’ health like heart rate, pulse rate, or
readings from implants, which are sent to a cloud where a doctor can review patient data on their device wirelessly. Medical
devices have been widely used for diagnosing and treating patients efficiently, and some of their examples are:

• Pacemaker
• Defibrillator
• Ultrasound scanners

5. Automotive Systems
Automotive embedded systems are designed and installed to enhance the safety of automobiles. Thanks to the safety
systems in vehicles, traffic fatality rates have declined in recent years. Automobile manufacturers are going the extra mile to
reinforce automobiles with advanced technology systems and sensors, which would not be possible without embedded systems.

Some key examples of active safety systems include adaptive cruise control, breakdown warning, pedestrian
recognition, merging assistance, airbags, and more. These features are anticipated to mitigate the risk of accidents and
foster the demand for embedded systems across the globe.
Some more examples of automotive embedded systems include:
• Car navigation system
• Anti-lock braking system
• Vehicle entertainment system

6. Transit and Fare Collection


Automated Fare Collection (AFC) is a ticketing system that allows passengers to pay fares through ticket vending machines
or online services. These systems originated with coins and tokens, which have since been replaced by magnetic stripe cards
or smart cards. A basic AFC station comprises a ticket vending machine, an automatic gate machine, and a ticket
checking machine. These components are embedded systems that ensure faster transactions, seamless operations, and more
efficient payment collection.
While some city transit buses and commuter rail lines still use paper tickets and passes, urban transit systems have adopted
AFC with smart cards, which are inexpensive and offer additional security along with data collection options.
Automated fare collection systems are generally found at:

• Metro stations
• Bus stations
• Railway stations

If you are looking for embedded processor examples in the transportation sector, see some of our customer stories, sharing
how Digi embedded System-on-Modules are designed into transit and vehicle applications:
• EMtest
• TransData
7. ATMs
An automated teller machine (ATM) is a computerized machine used in banking that communicates with a host bank
computer over a network. The bank computer verifies all the data entered by the users and stores all transactions, while the
embedded system in the ATM displays the transaction data and processes inputs from the ATM keyboard.
An ATM is mostly used to:

• Withdraw cash
• Check account balance and transactions details
• Deposit money into another account
8. Factory Robots
Factory robots are designed to perform high-precision tasks under dangerous work conditions. They have an integrated
embedded system to connect different subsystems. In a typical mechanical job, robots employ actuators, sensors, and
software to perceive the environment and derive intended output safely.
Without an embedded system, robots would have to rely on external control or computing systems. This, in turn, can elevate
the safety risks due to delay or failure in the connection link between the factory robot and its external computing system.
Today, as Industry 4.0 comes to fruition, these systems are integrating artificial intelligence and machine learning to make
equipment smarter, safer and more effective — for example, enabling machines to identify defects that the human eye
wouldn't see, and remove these from production.
Factory robots have a range of applications:

• Assembly line
• Quality monitoring
• Welding
• Painting
• Palletizing
9. Electric Vehicle Charging Stations
Electric vehicle charging stations are equipped with charging points or units that supply electric power to charge connected
vehicles. An embedded system resides in the charging station to provide processing for graphics displays, report any issues
with the device and alert technicians when maintenance is required. This embedded solution provides an easy and cost-
effective approach to monitoring and maintaining the charging infrastructure. A number of Digi customers, such
as AddÉnergie, are developing solutions to serve this growing market.
Some of the common uses of electric vehicle charging stations include:

• Charging vehicles
• Swapping batteries
• Parking vehicles
10. Interactive Kiosks
Self-service kiosks are designed to offer services and information to end-users in environments where human employee
presence isn’t possible or cost-effective. For instance, these machines and terminals allow a passenger in an empty airport to
buy a meal at 4 am without interacting with human workers. Interactive kiosks come in all shapes and sizes, from simple
coffee dispensing systems to complex vending machines and fuel stations with high-definition graphics. For this reason, it is
important for embedded developers to work with a scalable product line like Digi ConnectCore® 8X/8M system-on-modules
(SOMs), which support development of product lines with scaling levels of functionality.
An embedded system provides the processing for connected, self-service kiosk machines, offering an interactive consumer
experience. These systems can be developed to function in remote and outdoor environments and deliver information and
services even in extreme weather conditions. They can also eliminate downtime for real-time applications and have
expandable I/O options designed for workload consolidation.
Apart from airports, interactive kiosk machines are mostly found in:

• Retail sites and convenience stores


• Hospitals
• Movie theaters
• Government buildings

2 Explain the different characteristics of an embedded system in detail.

3 What are the different quality attributes to be considered in an embedded system design?
4 Discuss the operational quality attributes of an embedded system.
Same as question 3.

5 Explain the non-operational quality attributes of an embedded system.


Same as question 3.

6 Write short notes on the history of embedded systems.


7 Elaborate the different application areas of an embedded system with examples.
Same as question 1.

8 Differentiate an embedded system from a general computing system.

9 Explain the different classifications of embedded system. Give an example for each.
10 Explain the quality attributes maintainability and portability in the embedded system design context.

Same as question 4.

UNIT -2

1 What are the components used as the core of an embedded system? Explain the merits, drawbacks and the applications
where they are commonly used.
Embedded systems are specialized computing systems designed to perform dedicated functions or
tasks within larger systems. They are commonly found in various applications, ranging from consumer
electronics and automotive systems to industrial control and medical devices. The core components of
an embedded system include:
1. Microcontroller/Microprocessor:
• Merits:
• Cost-Effective: Microcontrollers are often more cost-effective than general-
purpose processors, making them suitable for mass-produced embedded
systems.
• Low Power Consumption: Many microcontrollers are designed for low power
consumption, making them suitable for battery-operated devices.
• Drawbacks:
• Limited Processing Power: Microcontrollers may have limited processing power
compared to more powerful processors, restricting the complexity of tasks they
can handle.
• Applications:
• Consumer electronics (e.g., washing machines, microwave ovens).
• Automotive systems (e.g., engine control units).
2. Memory:
• Merits:
• Fast Access: Memory is crucial for storing and retrieving data quickly.
• Non-Volatile Options: Some embedded systems use non-volatile memory for data
storage, ensuring data retention even when power is removed.
• Drawbacks:
• Limited Capacity: Embedded systems may have limited memory capacity
compared to general-purpose computers.
• Applications:
• Industrial control systems.
• Smartphones and other portable devices.
3. Input/Output (I/O) Interfaces:
• Merits:
• Connectivity: Facilitates communication with external devices and sensors.
• Versatility: Various I/O interfaces support diverse applications.
• Drawbacks:
• Limited Ports: Some embedded systems may have a limited number of I/O ports.
• Applications:
• Home automation systems.
• Industrial automation.
4. Real-Time Clock (RTC):
• Merits:
• Accurate Timekeeping: Important for applications requiring time-dependent
operations.
• Power Efficiency: Allows the system to enter low-power states and wake up at
specific times.
• Drawbacks:
• Cost: May add to the overall cost of the system.
• Applications:
• Smart energy meters.
• Medical devices.
5. Power Supply:
• Merits:
• Efficiency: Efficient power management is crucial for battery-operated devices.
• Voltage Regulation: Ensures stable and reliable operation.
• Drawbacks:
• Complexity: Designing efficient power supplies can be complex.
• Applications:
• Portable devices.
• Internet of Things (IoT) devices.
6. Communication Interfaces:
• Merits:
• Connectivity: Enables communication with other devices or networks.
• Scalability: Supports various communication protocols.
• Drawbacks:
• Complexity: Implementing multiple communication interfaces can add complexity
to the system.
• Applications:
• Networking equipment.
• Automotive communication systems.
7. Sensors and Actuators:
• Merits:
• Data Input: Sensors gather data for the system.
• Control: Actuators enable the system to interact with the physical world.
• Drawbacks:
• Cost: High-quality sensors and actuators can be expensive.
• Applications:
• Embedded systems in robotics.
• Environmental monitoring systems.
In summary, the components of an embedded system are chosen based on the specific requirements
of the application. While they offer advantages such as cost-effectiveness and power efficiency, they
may also have limitations, particularly in terms of processing power and memory capacity. The key is to
carefully match the components with the intended purpose of the embedded system.
2 Give the differences between microprocessor and microcontroller.

3 Discuss the role of ASICs and PLDs in embedded system design.


4 Write short notes on COTS.
5 Differentiate RISC vs CISC processors or controllers.
6 What are the different types of memories used in embedded system design? Explain the role of each.
Memory is a crucial component in embedded systems, serving to store and retrieve data and
instructions for the system's operation. Different types of memories are used in embedded system
design, each with its own characteristics and roles. Here are some common types of memories in
embedded systems:
1. Program Memory (ROM - Read-Only Memory):
• Role: Stores the firmware or permanent software of the embedded system that is not
expected to change during normal operation.
• Types:
• Mask ROM: Manufactured with the code permanently embedded during
production.
• EPROM (Erasable Programmable ROM): Can be programmed and erased using
ultraviolet light.
• EEPROM (Electrically Erasable Programmable ROM): Can be electrically
programmed and erased, allowing for in-circuit updates.
2. Data Memory (RAM - Random Access Memory):
• Role: Temporarily stores data and variables during the execution of the program.
• Types:
• SRAM (Static RAM): Faster and more power-hungry, retains data as long as power
is supplied.
• DRAM (Dynamic RAM): Slower but more power-efficient, requires periodic
refreshing to maintain data integrity.
3. Flash Memory:
• Role: Non-volatile memory used for storing the program code, configuration settings,
and data that need to be retained even when power is turned off.
• Types:
• NOR Flash: Suitable for code storage and random access, slower write times.
• NAND Flash: Used for high-density data storage, faster write times but with
limitations on random access.
4. Electrically Erasable Programmable Read-Only Memory (EEPROM):
• Role: Non-volatile memory for storing small amounts of data that may need to be
updated during the system's operation.
• Usage: Configuration parameters, calibration data, and other dynamic settings.
5. Memory Caches:
• Role: Used to temporarily store frequently accessed data to improve the speed of data
retrieval.
• Types:
• Instruction Cache: Stores frequently used program instructions.
• Data Cache: Stores frequently accessed data.
6. External Storage (SD Cards, Hard Drives):
• Role: Large-capacity storage used for storing data, files, and applications that exceed the
capacity of on-chip memories.
• Usage: Multimedia storage, data logging, and applications with extensive data
requirements.
7. Registers:
• Role: The smallest and fastest type of memory, residing within the CPU, used to store
intermediate data and control information during program execution.
• Usage: Holding operands for arithmetic operations, storing status flags, and managing
control flow.
Each type of memory in an embedded system plays a specific role in the overall functionality of the
system. The selection of memory types depends on factors such as speed requirements, power
consumption, volatility, and the specific needs of the application. The combination of these memory
types contributes to the efficient and reliable operation of embedded systems in various domains.
7 What is a sensor? Illustrate the importance of sensors in embedded systems with an example.
A sensor is a device or instrument that detects and measures physical properties, environmental
conditions, or changes in its surroundings and converts this information into electrical signals or other
usable output. Sensors are crucial components in embedded systems as they enable these systems to
interact with the physical world by providing input about the environment or the system itself. The data
collected by sensors can be used for monitoring, control, and decision-making processes within the
embedded system.
Importance of Sensors in Embedded Systems:
1. Data Acquisition:
• Sensors capture data from the physical world, providing information about temperature,
pressure, light, motion, and more.
• This data acquisition is essential for embedded systems to understand and respond to
changes in their environment.
2. Monitoring and Control:
• Sensors enable real-time monitoring of various parameters, allowing embedded systems
to adjust their behavior or initiate specific actions based on the measured data.
• For example, in an industrial setting, temperature sensors can monitor the operating
conditions of machinery and trigger cooling systems to prevent overheating.
3. Feedback Mechanism:
• Sensors provide a feedback loop, allowing embedded systems to adapt and optimize
their performance.
• In an automotive anti-lock braking system (ABS), wheel speed sensors provide real-time
feedback to the system, allowing it to modulate brake pressure and prevent wheel
lockup during sudden braking.
4. Automation:
• Sensors play a key role in automation by detecting events or conditions and triggering
automated responses.
• In smart homes, occupancy sensors can be used to automatically control lighting,
heating, or air conditioning based on the presence or absence of individuals in a room.
5. User Interaction:
• Sensors facilitate interaction between users and embedded systems by capturing input
from the environment or the user.
• Touch sensors in smartphones, for example, enable users to interact with the device by
detecting touches and gestures.
6. Environmental Monitoring:
• Sensors are critical for monitoring environmental conditions in applications such as
weather stations, pollution monitoring, and agricultural systems.
• In precision agriculture, soil moisture sensors help optimize irrigation by providing real-
time data on soil moisture levels.
Example: Temperature Sensor in a Climate Control System
Consider an embedded system used in a building's climate control system. A temperature sensor is
integrated into the system to continuously monitor the ambient temperature. The importance of the
sensor in this context includes:
• Temperature Regulation: The temperature sensor provides real-time data to the embedded
system, allowing it to regulate heating or cooling systems to maintain a desired temperature.
• Energy Efficiency: By accurately measuring the temperature, the system can optimize the
operation of HVAC (Heating, Ventilation, and Air Conditioning) systems, ensuring energy-
efficient usage.
• Comfort Control: The system can respond to changes in temperature quickly, providing
occupants with a comfortable and consistent environment.
• Fault Detection: Abnormal temperature readings may indicate potential issues with the HVAC
system, enabling the embedded system to trigger alarms or maintenance alerts.
In this example, the temperature sensor is a fundamental component that enhances the efficiency,
comfort, and reliability of the embedded system, showcasing the importance of sensors in various
applications.
8 Explain the different on-board communication interfaces in an embedded system design.
9 Explain the different external communication interfaces in detail.
10 Discuss I2C and SPI communication interfaces.
See the answer to question 8.

Unit – 3

1 Discuss the different embedded firmware design approaches.

2 Explain the super loop based embedded firmware approach.


See the answer to question 1.

3 Write short notes on the embedded operating system based embedded firmware approach.
See the answer to question 1.

4 With a sample code explain the assembly language based embedded firmware development.
5 Elaborate on the advantages of assembly language based development.
Assembly language is a low-level programming language that is specific to a particular computer
architecture. It is a symbolic representation of machine code instructions, making it more human-
readable than raw machine code. While high-level programming languages are often preferred for
their abstraction and ease of use, there are several advantages to using assembly language for
development in certain scenarios:

1. Direct Hardware Interaction:


• Assembly language allows programmers to interact directly with the hardware, providing
precise control over the processor and other components.
• This level of control is crucial in embedded systems, device drivers, and real-time
systems where efficient and precise hardware interaction is necessary.
2. Efficient Code Execution:
• Assembly code can be highly optimized for a specific architecture, resulting in code that
executes with minimal overhead.
• Programmers can fine-tune algorithms and data structures to maximize performance,
making assembly language suitable for applications with stringent performance
requirements.
3. Small Code Size:
• Assembly language programs tend to have smaller code sizes compared to equivalent
programs written in high-level languages.
• This is advantageous in situations where memory space is limited, such as in embedded
systems or environments with tight resource constraints.
4. Real-Time Systems:
• Assembly language is often used in real-time systems where predictable and
deterministic execution times are crucial.
• The absence of high-level language abstractions and runtime overhead allows for precise
control over timing and response requirements.
5. Low-Level Device Programming:
• Assembly language is commonly employed when programming low-level devices, such
as microcontrollers or specialized hardware components.
• It allows developers to directly manipulate registers and control the behavior of
hardware, which is essential in systems programming and device drivers.
6. Learning about Computer Architecture:
• Working with assembly language provides a deep understanding of the underlying
computer architecture and how instructions are executed.
• It is a valuable learning tool for computer science students and programmers who want
to grasp the intricacies of hardware-level programming.
7. Portability Across High-Level Languages:
• Assembly language code can be integrated into programs written in high-level
languages, allowing for performance-critical sections to be implemented in assembly
while the rest of the code is written in a higher-level language.
• This enables a balance between the expressiveness of high-level languages and the
efficiency of low-level programming.
8. Intricate Control Flow:
• Assembly language allows for fine-grained control over program flow, making it suitable
for implementing complex algorithms where precise control is required.
• It is often used in cryptographic algorithms, signal processing, and other scenarios with
intricate control requirements.
6 Explain high level language based embedded firmware development.
7 Discuss the advantages of high level language based development.
High-level programming languages provide a level of abstraction that simplifies the development
process by allowing programmers to write code that is closer to natural language. Here are several
advantages of high-level language-based development:
1. Abstraction and Simplicity:
• High-level languages abstract away low-level details, making it easier for programmers
to focus on solving problems without dealing with intricate hardware-specific
instructions.
• This abstraction simplifies code, making it more readable and understandable.
2. Productivity and Faster Development:
• High-level languages offer built-in functions, libraries, and abstractions that facilitate
rapid development.
• Programmers can achieve more functionality with fewer lines of code, leading to faster
development cycles and quicker time-to-market for software products.
3. Portability:
• Code written in high-level languages is generally more portable across different
platforms and architectures.
• Programmers can write code once and run it on multiple platforms with minimal
modifications, reducing the effort required for cross-platform compatibility.
4. Maintainability and Readability:
• High-level languages promote cleaner and more readable code, making it easier for
developers to understand and maintain.
• Features like modular programming, object-oriented programming, and abstraction
contribute to code maintainability over the long term.
5. Community and Ecosystem:
• High-level languages often have large and active communities, leading to extensive
documentation, support forums, and third-party libraries.
• This ecosystem enhances collaboration, knowledge sharing, and the availability of
resources for developers.
6. Automatic Memory Management:
• Many high-level languages incorporate automatic memory management (garbage
collection), reducing the risk of memory leaks and simplifying memory-related issues.
• This feature alleviates the burden on developers to manually allocate and deallocate
memory, making programming less error-prone.
7. Cross-disciplinary Development:
• High-level languages are designed to be accessible to a broad audience, allowing
developers with various backgrounds to contribute to a project.
• This inclusivity encourages collaboration between developers with different skill sets and
expertise.
8. Rich Standard Libraries:
• High-level languages come with extensive standard libraries that provide pre-built
functions and modules, saving developers from having to implement common tasks
from scratch.
• This accelerates development by leveraging existing functionality and reducing the need
for repetitive coding.
9. Platform Independence:
• High-level languages abstract away platform-specific details, enabling developers to
write code without being overly concerned about the underlying hardware or operating
system.
• This platform independence is particularly beneficial for applications intended to run on
multiple platforms.
10. Rapid Prototyping:
• High-level languages are well-suited for rapid prototyping and iterative development.
• The ease of expressing ideas in code allows developers to quickly test and refine
concepts without getting bogged down in low-level implementation details.
11. Enhanced Security Features:
• Some high-level languages incorporate security features, such as bounds checking and
memory safety, which can help prevent common programming errors that lead to
vulnerabilities.
8 Explain with an example mixing of assembly with high level language.
9 With an example, explain the mixing of high level language with assembly language.
10 Explain the embedded firmware development achieved by mixing assembly language and high level
language.
Combining the answers to questions 8 and 9 gives the answer to question 10.

Unit – 4

1 What is a kernel? Explain the different kernel services.

In computer science, the kernel is the computer program that forms the core, or heart, of an operating
system. Before discussing the kernel in detail, let's first review the basics of an operating system.

Operating System
An operating system, or OS, is system software that works as an interface between the hardware
components and the end user, and it enables other programs to run. Every computer system, whether a
desktop, laptop, tablet, or smartphone, must have an OS to provide basic functionality for the
device. Some widely used operating systems are Windows, Linux, macOS, Android, and iOS.

What is Kernel in Operating System?

o As discussed above, the kernel is the core part of an OS (operating system); hence it has full
control over everything in the system. Every operation of hardware and software is managed
and administered by the kernel.
o It acts as a bridge between applications and the data processing done at the hardware level. It
is the central component of an OS.
o It is the part of the OS that always resides in computer memory and enables
communication between software and hardware components.
o It is the first program loaded on system start-up (after the bootloader). Once loaded, it
manages the remaining start-up steps. It also manages memory, peripherals, and I/O requests
from software, translating I/O requests into data processing instructions for the CPU. It
handles other tasks as well, such as memory management, task management, and disk
management.
o The kernel is kept in, and usually loaded into, a separate memory area known as protected
kernel space, which is protected from access by application programs or less critical parts of
the OS.
o Other application programs, such as browsers, word processors, and audio & video players,
use a separate memory area known as user space.

Functions of a Kernel
A kernel of an OS is responsible for performing various functions and has control over the system.
Some main responsibilities of Kernel are given below:

o Device Management
To perform various actions, processes require access to peripheral devices such as a mouse,
keyboard, etc., that are connected to the computer. A kernel is responsible for controlling these
devices using device drivers. Here, a device driver is a computer program that helps or enables the
OS to communicate with any hardware device.
A kernel maintains a list of all available devices; this list may be known in advance, configured by the
user, or detected by the OS at runtime.
o Memory Management
The kernel has full control over access to the computer's memory. Each process requires some
memory to work, and the kernel enables processes to access that memory safely. The first step in
allocating memory is virtual addressing, achieved by paging or segmentation: each process is given
its own virtual address space. This prevents applications from interfering with one another.
o Resource Management
One of the important functionalities of the kernel is to share resources between various processes. It
must share the resources in a way that gives each process fair access to them.
The kernel also provides a way for synchronization and inter-process communication (IPC). It is
responsible for context switching between processes.
o Accessing Computer Resources
A kernel is responsible for accessing computer resources such as RAM and I/O devices. RAM, or
Random-Access Memory, holds both data and instructions. Each program needs memory to
execute and often requests more memory than is available. In such cases the kernel decides which
memory each process may use and what to do when the required memory is not available.
The kernel also arbitrates requests from applications to use I/O devices such as keyboards,
microphones, printers, etc.
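The virtual addressing mentioned above can be illustrated with a toy paging scheme. This is only a sketch: the page size, the page-table contents, and the addresses below are made-up values for illustration, not taken from any real kernel.

```python
# Toy illustration of paging: a virtual address is split into a page
# number and an offset, and a page table maps pages to physical frames.
PAGE_SIZE = 4096  # assumed 4 KiB pages

# Hypothetical per-process page table: virtual page -> physical frame
page_table = {0: 5, 1: 9, 2: 1}

def translate(virtual_addr):
    """Translate a virtual address to a physical one, or raise a page fault."""
    page, offset = divmod(virtual_addr, PAGE_SIZE)
    if page not in page_table:
        raise MemoryError(f"page fault at virtual address {virtual_addr}")
    return page_table[page] * PAGE_SIZE + offset

print(translate(4100))  # → 36868 (page 1, offset 4, mapped to frame 9)
```

Because each process has its own page table, two processes can use the same virtual address without touching each other's physical memory, which is how the kernel keeps applications from interfering with one another.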
2 Discuss the different types of operating systems in detail.


An operating system performs all the basic tasks like managing files, processes, and memory. The
operating system thus acts as the manager of all the resources, i.e., the resource manager, and becomes an
interface between the user and the machine. It is one of the most essential pieces of software present in a
device: a type of software that works as an interface between the system programs and the hardware.
Several types of operating systems are described below.

Types of Operating Systems

There are several types of operating systems, which are mentioned below.

1. Batch Operating System
This type of operating system does not interact with the computer directly. An operator takes jobs with
similar requirements and groups them into batches. It is the responsibility of the operator to sort jobs
with similar needs.

Advantages of Batch Operating System

• Although it is very difficult to guess or know the time required for any job to complete, the
processors of batch systems know how long a job will be when it is in the queue.
• Multiple users can share a batch system.
• The idle time of a batch system is very low.
• It is easy to manage large, repetitive work in batch systems.
Disadvantages of Batch Operating System
• The computer operators should be well familiar with batch systems.
• Batch systems are hard to debug.
• It is sometimes costly.
• The other jobs will have to wait for an unknown time if any job fails.
Examples of Batch Operating Systems: Payroll Systems, Bank Statements, etc.
2. Multi-Programming Operating System
In a multiprogramming operating system, more than one program is present in main memory, and any one
of them can be in execution at a time. This is basically used for better utilization of resources.

Advantages of Multi-Programming Operating System

• Multiprogramming increases the throughput of the system.
• It helps in reducing the response time.

Disadvantages of Multi-Programming Operating System
• There is no facility for user interaction with the system while programs are running.

3. Multi-Processing Operating System

A multi-processing operating system is a type of operating system in which more than one CPU is used for
the execution of processes. It improves the throughput of the system.

Advantages of Multi-Processing Operating System

• It increases the throughput of the system.
• Since it has several processors, if one processor fails, the system can proceed with another processor.
Disadvantages of Multi-Processing Operating System
• Due to the multiple CPUs, it can be more complex and harder to understand.

4. Multi-Tasking Operating System

A multitasking operating system is simply a multiprogramming operating system with the added facility of a
scheduling algorithm such as Round Robin. It can run multiple programs apparently simultaneously.

There are two types of multitasking systems, which are listed below:

• Preemptive Multi-Tasking
• Cooperative Multi-Tasking

Advantages of Multi-Tasking Operating System
• Multiple programs can be executed simultaneously in a multitasking operating system.
• It comes with proper memory management.
Disadvantages of Multi-Tasking Operating System
• The system may slow down when several heavy programs are run at the same time.
5. Time-Sharing Operating Systems
Each task is given some time to execute so that all the tasks work smoothly. Each user gets a share of CPU
time, as they all use a single system. These systems are also known as multitasking systems. The tasks can
be from a single user or from different users. The time that each task gets to execute is called a quantum.
After this time interval is over, the OS switches to the next task.


Advantages of Time-Sharing OS
• Each task gets an equal opportunity.
• CPU idle time can be reduced.
• Time-sharing systems allow multiple users to share hardware resources such as the CPU, memory, and
peripherals, reducing the cost of hardware and increasing efficiency.
• Improved User Experience: Time-sharing provides an interactive environment that allows users to
communicate with the computer in real time, providing a better user experience than batch processing.
Disadvantages of Time-Sharing OS
• Reliability problem.
• Data communication problem.
• Complexity: Time-sharing systems are complex and require advanced software to manage multiple
users simultaneously. This complexity increases the chance of bugs and errors.
• Security Risks: With multiple users sharing resources, the risk of security breaches increases. Time -
sharing systems require careful management of user access, authentication, and authorization to ensure
the security of data and software.
Examples of Time-Sharing OS: IBM VM/CMS, TSO (Time Sharing Option), and Windows Terminal Services.
6. Distributed Operating System
This type of operating system is a recent advancement in the world of computer technology and is being
widely accepted all over the world, at a great pace. Various autonomous interconnected computers
communicate with each other using a shared communication network. The independent systems possess
their own memory unit and CPU and are referred to as loosely coupled, or distributed, systems. The
processors in these systems may differ in size and function. The major benefit of working with this type of
operating system is that a user can access files or software that are not actually present on his own system
but on some other system connected within the network; i.e., remote access is enabled within the devices
connected to that network.

Advantages of Distributed Operating System


• Failure of one will not affect the other network communication, as all systems are independent of each
other.
• Since resources are being shared, computation is highly fast and durable.
• Delay in data processing reduces.
Disadvantages of Distributed Operating System
• Failure of the main network will stop the entire communication.
• The languages used to establish distributed systems are not yet well defined.
Examples of Distributed Operating Systems are LOCUS, etc.
7. Network Operating System
These systems run on a server and provide the capability to manage data, users, groups, security, applications, and
other networking functions. These types of operating systems allow shared access to files, printers, security,
applications, and other networking functions over a small private network. Another important aspect of
Network Operating Systems is that all users are aware of the underlying configuration, of all other users
within the network, and of their individual connections; that is why these computers are popularly known as
tightly coupled systems.

Advantages of Network Operating System


• Highly stable centralized servers.
• Security concerns are handled through servers.
• New technologies and hardware up-gradation are easily
integrated into the system.
• Server access is possible remotely from different locations and
types of systems.
Disadvantages of Network Operating System
• Servers are costly.
• User has to depend on a central location for most operations.
• Maintenance and updates are required regularly.
Examples of Network Operating Systems are Microsoft Windows Server
2003, Microsoft Windows Server 2008, UNIX, Linux, Mac OS X, Novell NetWare, BSD, etc.
8. Real-Time Operating System
These types of OSs serve real-time systems. The time interval required to process and respond to inputs is very
small. This time interval is called response time.
Real-time systems are used when there are time requirements that are very strict like missile systems, air traffic
control systems, robots, etc.
Types of Real-Time Operating Systems
• Hard Real-Time Systems:
Hard real-time OSs are meant for applications where time constraints are very strict and even the shortest
possible delay is not acceptable. These systems are built for life-critical uses, such as automatic parachutes
or airbags, which must be readily available in case of an accident. Virtual memory is rarely found in these
systems.
• Soft Real-Time Systems:
These OSs are for applications where the time constraint is less strict.

Advantages of RTOS
• Maximum Consumption: maximum utilization of devices and the system, giving more output from all
resources.
• Task Shifting: the time assigned for shifting tasks in these systems is very small. For example, older
systems take about 10 microseconds to shift from one task to another, while the latest systems take
about 3 microseconds.
• Error Free: these systems are designed to be largely error-free.
• Memory Allocation: memory allocation is best managed in these types of systems.
Disadvantages of RTOS
• Limited Tasks: very few tasks run at the same time, and the system concentrates on a few
applications to avoid errors.
• Heavy use of system resources: the system resources needed are sometimes substantial and
expensive as well.
• Complex Algorithms: the algorithms are very complex and difficult for the designer to write.
Examples of real-time applications are scientific experiments, medical imaging systems, industrial control
systems, weapon systems, robots, air traffic control systems, etc.
3 What is a process. With a neat sketch explain the various states associated with a process.
A process passes through several stages from beginning to end; there must be a minimum of five states.
Even though, during execution, the process is in exactly one of these states at any time, the names of the
states are not standardized. Each process goes through these stages throughout its life cycle.

Process States in Operating System

The states of a process are as follows:

• New (Create): In this step, the process is about to be created but not yet created. It is the program
that is present in secondary memory that will be picked up by OS to create the process.
• Ready: New -> Ready to run. After the creation of a process, the process enters the ready state i.e. the
process is loaded into the main memory. The process here is ready to run and is waiting to get the CPU
time for its execution. Processes that are ready for execution by the CPU are maintained in a queue
called ready queue for ready processes.
• Run: The process is chosen from the ready queue by the CPU for execution and the instructions within
the process are executed by any one of the available CPU cores.
• Blocked or Wait: Whenever the process requests access to I/O, needs input from the user, or needs
access to a critical region (the lock for which is already acquired), it enters the blocked or wait state.
The process continues to wait in main memory and does not require the CPU. Once the I/O
operation is completed, the process goes back to the ready state.
• Terminated or Completed: Process is killed as well as PCB is deleted. The resources allocated to the
process will be released or deallocated.
• Suspend Ready: A process that was initially in the ready state but was swapped out of main
memory (refer to the Virtual Memory topic) and placed onto external storage by the scheduler is said
to be in the suspend ready state. The process transitions back to the ready state whenever it is
brought into main memory again.
• Suspend Wait or Suspend Blocked: Similar to suspend ready, but applies to a process that was
performing an I/O operation and was moved to secondary memory due to lack of main memory.
When its I/O is finished, it may move to the suspend ready state.
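The transitions described above can be captured as a small state machine. The dictionary below encodes only the transitions named in this answer; it is an illustrative sketch, not how any real OS represents process state.

```python
# Allowed process-state transitions, as described in the answer above
TRANSITIONS = {
    "new": {"ready"},
    "ready": {"running", "suspend_ready"},
    "running": {"blocked", "ready", "terminated"},  # I/O wait, preemption, exit
    "blocked": {"ready", "suspend_blocked"},
    "suspend_ready": {"ready"},
    "suspend_blocked": {"suspend_ready"},
    "terminated": set(),                            # no way out: PCB is deleted
}

class Process:
    def __init__(self):
        self.state = "new"

    def move(self, new_state):
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

p = Process()
for s in ("ready", "running", "blocked", "ready", "running", "terminated"):
    p.move(s)
print(p.state)  # → terminated
```

Trying an illegal move, such as going straight from `new` to `running`, raises an error, mirroring the fact that a process must be loaded into main memory (ready) before the CPU can pick it up.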
4 Explain threads in detail from operating system context.
A thread is a single sequential flow of execution of tasks of a process, so it is also known as a thread of
execution or thread of control. There can be more than one thread inside a process, and each thread of the
same process uses a separate program counter, a stack of activation records, and control blocks. A thread is
often referred to as a lightweight process.
Types of Threads
In the operating system, there are two types of threads:
1. Kernel-level threads
2. User-level threads
User-level thread
The operating system does not recognize user-level threads. User threads are easy to implement, and they
are implemented entirely by the user. If a user-level thread performs a blocking operation, the whole process
is blocked. The kernel knows nothing about user-level threads and manages them as if they were
single-threaded processes. Examples: Java threads, POSIX threads, etc.

Advantages of User-level threads

1. User-level threads are easier to implement than kernel-level threads.
2. User-level threads can be used on operating systems that do not support threads at the kernel level.
3. They are faster and more efficient.
4. Context switch time is shorter than for kernel-level threads.
5. They do not require modifications to the operating system.
6. The representation of user-level threads is very simple: the registers, PC, stack, and mini thread control
blocks are stored in the address space of the user-level process.
7. It is simple to create, switch, and synchronize threads without kernel intervention.
Disadvantages of User-level threads
1. User-level threads lack coordination between the thread and the kernel.
2. If a thread causes a page fault, the entire process is blocked.
Kernel level thread
Kernel-level threads are recognized and managed by the operating system. There is a thread control block
and a process control block in the system for each thread and process. Kernel-level threads are implemented
by the operating system: the kernel knows about all threads and manages them, and it offers system calls to
create and manage threads from user space. The implementation of kernel threads is more difficult than that
of user threads, and context switch time is longer. However, if a kernel-level thread performs a blocking
operation, another thread of the same process can continue execution. Examples: Windows, Solaris.

Advantages of Kernel-level threads

1. The kernel is fully aware of all threads.
2. The scheduler may decide to give more CPU time to a process that has a large number of threads.
3. Kernel-level threads are good for applications that block frequently.
Disadvantages of Kernel-level threads
1. The kernel must manage and schedule all threads.
2. The implementation of kernel threads is more difficult than that of user threads.
3. Kernel-level threads are slower than user-level threads.
Benefits of Threads
o Enhanced throughput of the system: When the process is split into many threads, and each thread is treated
as a job, the number of jobs done in the unit time increases. That is why the throughput of the system also
increases.
o Effective Utilization of Multiprocessor system: When you have more than one thread in one process, you can
schedule more than one thread in more than one processor.
o Faster context switch: The context switching period between threads is less than the process context switching.
The process context switch means more overhead for the CPU.
o Responsiveness: When the process is split into several threads, the process can respond as soon as any
one of its threads completes its part of the work.
o Communication: Multiple-thread communication is simple because the threads share the same address space,
while in process, we adopt just a few exclusive communication strategies for communication between two
processes.
o Resource sharing: Resources can be shared between all threads within a process, such as code, data, and files.
Note: The stack and register cannot be shared between threads. There is a stack and register for each thread.
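The shared address space described above can be shown with a minimal sketch using Python's threading module: several threads of one process update the same global variable, something separate processes could not do without explicit IPC. The counter value, thread count, and loop size are arbitrary.

```python
import threading

counter = 0
lock = threading.Lock()  # protects the shared counter (a critical section)

def worker():
    global counter
    for _ in range(1000):
        with lock:       # without the lock, concurrent updates could be lost
            counter += 1

# Four threads of the SAME process, all seeing the same address space
threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # → 4000: every thread updated the one shared variable
```

Each thread still has its own stack and registers (as the note above says), but because code and data are shared, communication is as simple as reading and writing the same variable, guarded by a lock.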
5 Give the differences between thread and process.

Process vs. Thread:

1. A process is a program in execution; a thread is a segment of a process.
2. A process takes more time to terminate; a thread takes less time to terminate.
3. Process creation takes more time; thread creation takes less time.
4. A process takes more time for context switching; a thread takes less time for context switching.
5. A process is less efficient in terms of communication; a thread is more efficient in terms of
communication.
6. Multiprogramming holds the concept of multiple processes; multiple threads do not need
multiprogramming, because a single process consists of multiple threads.
7. Processes are isolated from one another; threads share memory.
8. A process is called a heavyweight process; a thread is lightweight, as each thread in a process shares
code, data, and resources.
9. Process switching uses an interface to the operating system; thread switching does not require an
operating system call or an interrupt to the kernel.
10. If one process is blocked, it does not affect the execution of other processes; if one user-level thread is
blocked, all other user-level threads of that process are blocked.
11. A process has its own process control block, stack, and address space; a thread has the parent's PCB,
its own thread control block and stack, and shares the common address space.
12. Changes to the parent process do not affect child processes; since all threads of a process share the
address space and other resources, changes to the main thread may affect the behavior of the other
threads of the process.
13. Process creation involves a system call; thread creation (at user level) involves no system call and is
done using APIs.
14. Processes do not share data with each other by default; threads share data with each other.
6 Write a short notes on different non-preemptive scheduling techniques.

7 Explain any two preemptive task scheduling techniques.

8 Explain Round Robin task scheduling technique.

9 With an example explain priority based preemptive task scheduling technique.

10 Explain any two non-preemptive task scheduling techniques.


6 -> FCFS technique, SJF technique
7 -> Priority preemptive technique, SRTF technique
9 -> answered under question 7
10 -> answered under question 6
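As a worked sketch for the two non-preemptive techniques referred to above (FCFS and SJF), the function below computes average waiting time for jobs that all arrive at time 0. The burst times are made-up example values.

```python
def avg_waiting_time(bursts):
    """Average waiting time when jobs run to completion in the given order
    (non-preemptive, all jobs arriving at time 0)."""
    waiting, elapsed = 0, 0
    for burst in bursts:
        waiting += elapsed   # this job waited for every job before it
        elapsed += burst
    return waiting / len(bursts)

bursts = [6, 8, 7, 3]                   # CPU burst times in arrival order

fcfs = avg_waiting_time(bursts)         # FCFS: run in arrival order
sjf = avg_waiting_time(sorted(bursts))  # SJF: shortest job first

print(fcfs, sjf)  # → 10.25 7.0: SJF minimizes average waiting time
```

The same numbers show why SJF is provably optimal for average waiting time: putting the 3-unit job first means fewer jobs wait behind a long burst.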
Unit – 5
1 Explain shared memory and message passing IPC techniques.

2 Discuss Remote procedure call and sockets.

Remote Procedure Call (RPC) is a powerful technique for constructing distributed, client-server based
applications. It is based on extending the conventional local procedure calling so that the called procedure need
not exist in the same address space as the calling
procedure. The two processes may be on the same system,
or they may be on different systems with a network
connecting them.
When making a Remote Procedure Call:

1. The calling environment is suspended, procedure parameters are transferred across the network to the
environment where the procedure is to execute, and the procedure is executed there.
2. When the procedure finishes and produces its results, the results are transferred back to the calling
environment, where execution resumes as if returning from a regular procedure call.
Working of RPC

The following steps take place during an RPC:
1. A client invokes a client stub procedure, passing parameters in the usual way. The client stub resides
within the client’s own address space.
2. The client stub marshalls (packs) the parameters into a message. Marshalling includes converting the
representation of the parameters into a standard format and copying each parameter into the message.
3. The client stub passes the message to the transport layer, which sends it to the remote server machine.
4. On the server, the transport layer passes the message to a server stub, which demarshalls (unpacks)
the parameters and calls the desired server routine using the regular procedure call mechanism.
5. When the server procedure completes, it returns to the server stub (e.g., via a normal procedure call
return), which marshalls the return values into a message. The server stub then hands the message to
the transport layer.
6. The transport layer sends the result message back to the client transport layer, which hands the
message back to the client stub.
7. The client stub demarshalls the return parameters, and execution returns to the caller.
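The stub and marshalling steps above can be sketched without a real network by serializing the call to JSON and handing it directly to a server stub. The procedure name `add`, the message format, and the routine table are invented for illustration; a real RPC system would send the bytes over a transport layer.

```python
import json

def server_add(a, b):            # the "remote" procedure on the server
    return a + b

SERVER_ROUTINES = {"add": server_add}

def server_stub(request):
    """Demarshall the request, call the routine, marshall the result."""
    msg = json.loads(request)                             # step 4: unpack
    result = SERVER_ROUTINES[msg["proc"]](*msg["args"])   # regular local call
    return json.dumps({"result": result})                 # step 5: pack reply

def client_stub(proc, *args):
    """Marshall the call, 'send' it, and demarshall the reply."""
    request = json.dumps({"proc": proc, "args": args})    # step 2: marshalling
    reply = server_stub(request)                          # steps 3-6: transport
    return json.loads(reply)["result"]                    # step 7: unpack

print(client_stub("add", 2, 3))  # → 5, as if add() had been called locally
```

The point of the stubs is visible here: the caller writes `client_stub("add", 2, 3)` and never sees the message format, which is exactly the transparency RPC aims for.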
SOCKET :
A socket is one endpoint of a two way communication link between two programs running on the network. The
socket mechanism provides a means of inter-process communication (IPC) by establishing named contact points
between which the communication take place.
Just as a pipe is created using the ‘pipe’ system call, a socket is created using the ‘socket’ system call. A
socket provides a bidirectional FIFO communication facility over the network. A socket connecting to the
network is created at each end of the communication, and each socket has a specific address composed of
an IP address and a port number.
Sockets are generally employed in client-server applications. The server creates a socket, binds it to a
network port address, and then waits for the client to contact it. The client creates a socket and attempts to
connect to the server socket. When the connection is established, transfer of data takes place.

Types of Sockets: There are two types of sockets: the datagram socket and the stream socket.
1. Datagram Socket: A connectionless socket for sending and receiving packets. It is similar to a mailbox:
the letters (data) posted into the box are collected and delivered (transmitted) to a letterbox (the
receiving socket).
2. Stream Socket: In a computer operating system, a stream socket is a type of inter-process
communication socket or network socket that provides a connection-oriented, sequenced, and
unduplicated flow of data without record boundaries, with well-defined mechanisms for creating and
destroying connections and for detecting errors. It is similar to a phone call: a connection is established
between the two ends, and a conversation (transfer of data) takes place.
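A minimal stream-socket example using Python's socket module follows the client-server pattern described above: the server socket listens, the client connects, and the server echoes back what it receives. Binding to port 0 lets the OS pick a free port; the message content is arbitrary.

```python
import socket
import threading

# Server side: create a stream socket, bind it to an address, and listen
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # port 0: let the OS choose a free port
server.listen(1)
port = server.getsockname()[1]       # socket address = IP address + port

def echo_once():
    conn, _ = server.accept()        # server waits for the client to contact it
    conn.sendall(conn.recv(1024))    # echo the received data back
    conn.close()

t = threading.Thread(target=echo_once)
t.start()

# Client side: create a socket and connect to the server's address
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"hello")
reply = client.recv(1024)
print(reply)                         # → b'hello'
client.close()
t.join()
server.close()
```

Because this is a SOCK_STREAM (stream) socket, the bytes arrive in order and without duplication, matching the "phone call" analogy; a datagram socket would use SOCK_DGRAM instead.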

3 Explain a dead lock in task synchronization and different deadlock handling techniques.
Deadlock is a situation where a process or a set of processes is blocked, waiting for some other resource that is
held by some other waiting process. It is an undesirable state of the system. The following are the four conditions
that must hold simultaneously for a deadlock to occur.
1. Mutual Exclusion – A resource can be used by only one process at a time. If another process requests
for that resource then the requesting process must be delayed until the resource has been released.
2. Hold and wait – Some processes must be holding some resources in the non-shareable mode and at
the same time must be waiting to acquire some more resources, which are currently held by other
processes in the non-shareable mode.
3. No pre-emption – Resources granted to a process can be released back to the system only as a result
of voluntary action of that process after the process has completed its task.
4. Circular wait – Deadlocked processes are involved in a circular chain such that each process holds one
or more resources being requested by the next process in the chain.
Methods of handling deadlocks: There are four approaches to dealing with deadlocks.
1. Deadlock Prevention
2. Deadlock avoidance (Banker's Algorithm)
3. Deadlock detection & recovery
4. Deadlock Ignorance (Ostrich Method)
These are explained below.

1. Deadlock Prevention: The strategy of deadlock prevention is to design the system in such a way that the
possibility of deadlock is excluded. The indirect methods prevent the occurrence of one of three necessary
conditions of deadlock (mutual exclusion, no pre-emption, and hold and wait); the direct method prevents the
occurrence of circular wait. Prevention techniques: mutual exclusion is supported by the OS. The
hold-and-wait condition can be prevented by requiring that a process request all its required resources at one
time, blocking the process until all of its requests can be granted simultaneously. But this prevention does not
yield good results because:
• a long waiting time is required
• allocated resources are used inefficiently
• a process may not know all its required resources in advance
No pre-emption – techniques for no pre-emption are:
• If a process that is holding some resources requests another resource that cannot be immediately
allocated to it, all resources currently being held are released and, if necessary, requested again
together with the additional resource.
• If a process requests a resource that is currently held by another process, the OS may pre-empt the
second process and require it to release its resources. This works only if the two processes do not
have the same priority.
Circular wait One way to ensure that this condition never holds is to impose a total ordering of all resource types
and to require that each process requests resources in increasing order of enumeration, i.e., if a process has been
allocated resources of type R, then it may subsequently request only those resources of types following R in
ordering.
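The resource-ordering rule above can be sketched with two locks: if every task acquires locks in the same global rank order, no task can hold a higher-ranked resource while waiting for a lower-ranked one, so a circular chain cannot form. The lock names and rank numbers below are invented for illustration.

```python
import threading

# Global ordering: each resource is given a rank, and locks must always
# be acquired in increasing rank order, as described above.
lock_a = threading.Lock()   # rank 1
lock_b = threading.Lock()   # rank 2

def acquire_in_order(*ranked_locks):
    """Acquire locks sorted by rank, preventing circular wait."""
    for _rank, lock in sorted(ranked_locks):
        lock.acquire()

def release_all(*ranked_locks):
    for _rank, lock in sorted(ranked_locks, reverse=True):
        lock.release()

done = []
def task(name):
    # Both tasks need both locks; because both request rank 1 before rank 2,
    # neither can ever hold rank 2 while waiting on rank 1.
    acquire_in_order((1, lock_a), (2, lock_b))
    done.append(name)
    release_all((1, lock_a), (2, lock_b))

threads = [threading.Thread(target=task, args=(n,)) for n in ("T1", "T2")]
for t in threads: t.start()
for t in threads: t.join()
print(sorted(done))  # → ['T1', 'T2']: both tasks finish, no deadlock
```

If T2 instead took lock_b first and lock_a second while T1 did the opposite, the classic two-lock deadlock could occur; the total ordering removes that possibility.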

2. Deadlock Avoidance: A deadlock avoidance algorithm works by proactively looking for potential deadlock
situations before they occur. It does this by tracking the resource usage of each process and identifying
conflicts that could potentially lead to a deadlock. If a potential deadlock is identified, the algorithm takes
steps to resolve the conflict, such as rolling back one of the processes or pre-emptively allocating resources
to other processes. Deadlock avoidance is designed to minimize the chances of a deadlock occurring,
although it cannot guarantee that a deadlock will never occur. This approach allows the three necessary
conditions of deadlock but makes judicious choices to ensure that the deadlock point is never reached; it
allows more concurrency than prevention. A decision is made dynamically on whether the current resource
allocation request will, if granted, potentially lead to deadlock. It requires knowledge of future process
requests.
Two techniques to avoid deadlock :
1. Process initiation denial
2. Resource allocation denial
Advantages of deadlock avoidance techniques:
• Not necessary to pre-empt and rollback processes
• Less restrictive than deadlock prevention
Disadvantages :
• Future resource requirements must be known in advance
• Processes can be blocked for long periods
• Exists a fixed number of resources for allocation
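Deadlock avoidance is usually illustrated with the safety check at the heart of the Banker's Algorithm mentioned above: a state is safe if the processes can be ordered so that each, in turn, can finish with the currently available resources and then release what it holds. The matrices below are a made-up two-resource, two-process example, not from any standard source.

```python
def is_safe(available, need, allocation):
    """Banker's safety check: True if some completion order exists."""
    work = available[:]                 # resources currently free
    finished = [False] * len(need)
    progress = True
    while progress:
        progress = False
        for i in range(len(need)):
            if not finished[i] and all(n <= w for n, w in zip(need[i], work)):
                # Process i can run to completion with what is free,
                # then releases everything it currently holds.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progress = True
    return all(finished)

# Hypothetical state: 2 resource types, 2 processes
print(is_safe([2, 1], [[1, 1], [2, 1]], [[1, 0], [1, 1]]))  # → True (P0 then P1)
print(is_safe([2, 1], [[3, 2], [2, 2]], [[1, 0], [1, 1]]))  # → False (unsafe)
```

An avoidance scheme grants a request only if the resulting state still passes this check; otherwise the requesting process is made to wait, which is how "the deadlock point is never reached."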
3. Deadlock Detection: Deadlock detection is used by employing an algorithm that tracks the circular waiting and
kills one or more processes so that the deadlock is removed. The system state is examined periodically to
determine if a set of processes is deadlocked. A deadlock is resolved by aborting and restarting a process,
relinquishing all the resources that the process held.
• This technique does not limit resource access or restrict process action.
• Requested resources are granted to processes whenever possible.
• It never delays the process initiation and facilitates online handling.
• The disadvantage is the inherent pre-emption losses.
4. Deadlock Ignorance: In the deadlock ignorance method, the OS acts as if deadlock never occurs and
completely ignores it even when it does. This method applies only if deadlocks occur very rarely. The
algorithm is very simple: "If a deadlock occurs, simply reboot the system and act like the deadlock never
occurred." That is why the algorithm is called the Ostrich Algorithm.
Advantages:
• Ostrich Algorithm is relatively easy to implement and is effective in most cases.
• It helps in avoiding the deadlock situation by ignoring the presence of deadlocks.
Disadvantages:
• Ostrich Algorithm does not provide any information about the deadlock situation.
• It can lead to reduced performance of the system as the system may be blocked for a long time.
• It can lead to a resource leak, as resources are not released when the system is blocked due to
deadlock.
4 What is Dining Philosopher's problem. Explain the various scenarios that may occur in this problem.
The Dining Philosophers problem models N philosophers sharing N forks. The classic scenarios are deadlock (every philosopher picks up the left fork and waits forever for the right one), starvation, and the standard remedies such as resource ordering, an arbitrator (waiter), or allowing at most N-1 philosophers at the table. A full walkthrough can be watched on YouTube.
5 Explain in detail priority inversion and its various solutions.
Priority inversion is an operating system scenario in which a higher-priority process is effectively blocked by a
lower-priority process, typically because the lower-priority process holds a resource that the higher-priority
process needs. This implies an inversion of the priorities of the two processes.

Problems due to Priority Inversion

Some of the problems that occur due to priority inversion are given as follows −
• A system malfunction may occur if a high-priority process is not provided the required resources in time.
• Priority inversion may also force corrective measures, which may include resetting the entire system.
• The performance of the system can be reduced due to priority inversion, because it is imperative for
higher-priority tasks to execute promptly.
• System responsiveness decreases, as high-priority tasks may have strict time constraints or real-time
response guarantees.
• Sometimes no harm is caused by priority inversion, as the late execution of the high-priority process
goes unnoticed by the system.
Solutions of Priority Inversion
Some of the solutions to handle priority inversion are given as follows −
• Priority Ceiling
All of the resources are assigned a priority that is equal to the highest priority of any task that may
attempt to claim them. This helps in avoiding priority inversion.
• Disabling Interrupts
There are only two priorities in this case i.e. interrupts disabled and preemptible. So priority
inversion is impossible as there is no third option.
• Priority Inheritance
This solution temporarily elevates the priority of the low priority task that is executing to the highest
priority task that needs the resource. This means that medium priority tasks cannot intervene and
lead to priority inversion.
• No blocking
Priority inversion can be avoided by avoiding blocking as the low priority task blocks the high
priority task.
• Random boosting
The priority of the ready tasks can be randomly boosted until they exit the critical section.
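The priority inheritance rule above can be sketched with a tiny helper: the effective priority of a resource holder is the maximum of its own priority and the priorities of all tasks blocked waiting on that resource. The priority values and task set are invented for illustration.

```python
def effective_priority(holder_priority, waiting_priorities):
    """With priority inheritance, a resource holder runs at the highest
    priority of any task blocked waiting on that resource."""
    return max([holder_priority] + waiting_priorities)

low, medium, high = 1, 2, 3   # assumed priority levels (higher = more urgent)

# Without inheritance: the low-priority holder keeps priority 1, so a
# medium-priority task preempts it while the high-priority task is stuck
# behind both -- the inversion described above.
assert effective_priority(low, []) < medium

# With inheritance: once the high-priority task blocks on the resource,
# the holder is boosted to priority 3, so the medium task cannot intervene.
boosted = effective_priority(low, [high])
print(boosted)  # → 3: the medium-priority task can no longer preempt
```

The boost is temporary: as soon as the holder releases the resource, it drops back to its original priority and the high-priority task runs immediately.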
6 Discuss any two task synchronization issues.
7 Write a short notes on semaphores.

Semaphores are integer variables that are used to solve the critical section problem by means of two atomic
operations, wait and signal, which are used for process synchronization.
The definitions of wait and signal are as follows −
• Wait
The wait operation decrements the value of its argument S if it is positive. If S is zero or negative, the calling
process busy-waits (repeats the test) until S becomes positive.
wait(S)
{
    while (S <= 0)
        ;            /* busy-wait until S is positive */
    S--;
}

• Signal
The signal operation increments the value of its argument S.
signal(S)
{
S++;
}
Types of Semaphores
There are two main types of semaphores i.e. counting semaphores and binary semaphores. Details about
these are given as follows −
• Counting Semaphores
These are integer value semaphores and have an unrestricted value domain. These semaphores are used to
coordinate the resource access, where the semaphore count is the number of available resources. If the resources
are added, semaphore count automatically incremented and if the resources are removed, the count is decremented.
• Binary Semaphores
Binary semaphores are like counting semaphores, but their value is restricted to 0 and 1. The wait operation
proceeds only when the semaphore is 1 (setting it to 0), and the signal operation sets it back to 1. It is
sometimes easier to implement binary semaphores than counting semaphores.
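A usage sketch of a counting semaphore with Python's threading.Semaphore: the semaphore count models three available resource slots, each worker performs wait (acquire) before using a slot and signal (release) afterwards. The worker count and slot count are arbitrary example values; note that this implementation blocks rather than busy-waits.

```python
import threading

slots = threading.Semaphore(3)      # counting semaphore: 3 resources free
in_use, peak = 0, 0
state_lock = threading.Lock()       # protects the bookkeeping variables

def worker():
    global in_use, peak
    slots.acquire()                 # wait(S): blocks while S is 0
    with state_lock:
        in_use += 1
        peak = max(peak, in_use)    # record how many held slots at once
    with state_lock:
        in_use -= 1
    slots.release()                 # signal(S): frees a slot for the next waiter

threads = [threading.Thread(target=worker) for _ in range(10)]
for t in threads: t.start()
for t in threads: t.join()

print(peak <= 3)  # → True: at most 3 workers ever held a slot simultaneously
```

A binary semaphore is the special case with an initial count of 1, in which case this pattern degenerates into mutual exclusion over a single critical section.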
Advantages of Semaphores
Some of the advantages of semaphores are as follows −
• Semaphores allow only one process into the critical section. They follow the mutual exclusion principle strictly
and are much more efficient than some other methods of synchronization.
• There is no resource wastage because of busy waiting in semaphores as processor time is not wasted
unnecessarily to check if a condition is fulfilled to allow a process to access the critical section.
• Semaphores are implemented in the machine independent code of the microkernel. So they are machine
independent.
Disadvantages of Semaphores
Some of the disadvantages of semaphores are as follows −
• Semaphores are complicated so the wait and signal operations must be implemented in the correct order to
prevent deadlocks.
• Semaphores are impractical for large-scale use, as their use leads to a loss of modularity. This happens
because the wait and signal operations prevent the creation of a structured layout for the system.
• Semaphores may lead to a priority inversion where low priority processes may access the critical section first and
high priority processes later.
9 Explain device drivers from operating system context.
10 Explain the different functional and non-functional requirements that needs to be evaluated in the
selection of an RTOS.
