
Module I Introduction

Operating System:
An operating system (OS) is software that acts as an intermediary between computer
hardware and the user. It manages computer hardware resources and provides services for
computer programs. The primary purpose of an operating system is to provide a platform on
which other software applications can run efficiently. It also facilitates communication between
hardware devices and software applications.
Operating systems perform various functions, including:
1. Process Management: This involves managing multiple processes running concurrently
on the system, allocating resources to them, and scheduling their execution.
2. Memory Management: The OS manages the system's memory, allocating memory
space to processes, and ensuring efficient utilization of available memory resources.
3. File System Management: It provides mechanisms for storing, retrieving, and
organizing files on storage devices such as hard disks, SSDs, and optical drives.
4. Device Management: The OS controls and coordinates the operation of peripheral
devices such as printers, scanners, keyboards, and network adapters.
5. User Interface: Operating systems provide user interfaces for interacting with the
computer. This can range from command-line interfaces (CLI) to graphical user
interfaces (GUI) or touch-based interfaces in mobile devices.
6. Security and Access Control: Operating systems enforce security policies, control
access to system resources, and protect the system from unauthorized access and
malicious software.
7. Networking: OS facilitates networking capabilities, enabling computers to
communicate with each other over networks such as LANs, WANs, and the internet.
8. Error Handling: It handles errors and exceptions that occur during system operation,
ensuring system stability and reliability.
Classification of Operating Systems:
1. Batch Operating System: Batch processing involves executing a series of jobs
(programs) without user interaction. Jobs are submitted to the system as a batch, and
the OS executes them one after another without user intervention.
2. Interactive Operating System: Interactive operating systems allow user interaction
through a user interface, enabling users to interact with the system in real-time,
providing immediate responses to user inputs.
3. Time-sharing Operating System: Time-sharing OS allows multiple users to share the
same system simultaneously. It divides the CPU time into small time slots, allowing
each user to have a fair share of CPU time for their tasks.
4. Real-Time Operating System (RTOS): RTOS is designed to handle tasks with strict
timing requirements. It guarantees timely response to events and ensures that critical
tasks are completed within specified deadlines.
5. Multiprocessor Systems: Multiprocessor operating systems manage computer
systems with multiple CPUs (Central Processing Units), distributing tasks among
processors to improve performance and scalability.
6. Multiuser Systems: Multiuser operating systems support multiple users accessing the
system concurrently, providing each user with a separate login session and resources.
7. Multithreaded Systems: Multithreading allows multiple threads within the same
process to execute concurrently. Multithreaded operating systems support the creation
and management of threads, improving system responsiveness and efficiency.
Operating System Structure:
The structure of an operating system consists of various components and services:
1. Kernel: The core component of the operating system responsible for essential functions
such as process management, memory management, and hardware abstraction.
2. System Components: These include device drivers, file systems, networking protocols,
and user interface components that work together to provide the operating system's
functionality.
3. Operating System Services: Services provided by the operating system to facilitate
application development and execution, including process management, memory
management, file system access, and input/output operations.
Definition: Operating System (OS) services are functionalities provided by the OS to enable
applications to interact with the underlying hardware and manage system resources
effectively.
Purpose: OS services abstract hardware complexities and provide a standardized interface
for applications to access resources such as CPU, memory, storage, and I/O devices. They
facilitate tasks such as process management, memory allocation, file system access,
device communication, and user interaction.
Examples of OS Services:
• Process Management: Services for creating, scheduling, and terminating
processes and threads.
• Memory Management: Services for allocating, deallocating, and protecting
memory space, including virtual memory management.
• File System Services: Services for organizing, storing, and accessing files and
directories on storage devices.
• Device Management: Services for managing interactions with hardware devices,
including device drivers and I/O operations.
• Networking Services: Services for network communication, including protocols,
socket management, and data transmission.
• Security Services: Services for enforcing access control, authentication,
encryption, and data protection.
• User Interface Services: Services for providing user interfaces, such as command-
line interpreters, window managers, and graphical toolkits.
4. Monolithic and Microkernel Systems: Operating systems can be structured as
monolithic kernels, where all operating system services run in kernel space, or
microkernel systems, where only essential services run in kernel space, and other
services run as user-space processes.

What is a kernel ?
The kernel is a computer program at the core of a computer’s operating system and has
complete control over everything in the system. It manages the operations of the computer and
the hardware.
There are five types of kernels:
1. Microkernel, which contains only basic functionality.
2. Monolithic kernel, which contains many device drivers.
3. Hybrid kernel
4. Exokernel
5. Nanokernel
But in this tutorial we will only look into the microkernel and the monolithic kernel.
1. Microkernel:
In a microkernel, the user services and kernel services are implemented in different address spaces. The user services are kept in user address space, and the kernel services are kept in kernel address space.
2. Monolithic kernel:
In a monolithic kernel, the entire operating system runs as a single program in kernel mode. The user services and kernel services are implemented in the same address space.
Differences between Microkernel and Monolithic Kernel:

1. Address Space: In a microkernel, user services and kernel services are kept in separate address spaces. In a monolithic kernel, both user services and kernel services are kept in the same address space.
2. Design and Implementation: A microkernel-based OS is complex to design. A monolithic OS is easy to design and implement.
3. Size: Microkernels are smaller in size. A monolithic kernel is larger than a microkernel.
4. Functionality: In a microkernel it is easier to add new functionalities. In a monolithic kernel it is difficult to add new functionalities.
5. Coding: To design a microkernel, more code is required. A monolithic kernel needs less code compared to a microkernel.
6. Failure: Failure of one component does not affect the working of the microkernel. Failure of one component in a monolithic kernel leads to the failure of the entire system.
7. Processing Speed: Execution speed of a microkernel is low. Execution speed of a monolithic kernel is high.
8. Extensibility: It is easy to extend a microkernel. It is not easy to extend a monolithic kernel.
9. Communication: To implement IPC, message queues are used by microkernels. Signals and sockets are utilized to implement IPC in monolithic kernels.
10. Debugging: Debugging a microkernel is simple. Debugging a monolithic kernel is difficult.
11. Maintenance: A microkernel is simple to maintain. A monolithic kernel needs extra time and resources for maintenance.
12. Message Passing and Context Switching: Message passing and context switching are required by the microkernel. They are not required while the monolithic kernel is working.
13. Services: The microkernel only offers IPC and low-level device management services. The monolithic kernel contains all of the operating system's services.
14. Example: Microkernel example: Mac OS X. Monolithic kernel example: Microsoft Windows 95.


Module II Process Management
Process Concept:
Definition: The process concept in operating systems refers to the fundamental abstraction of a
running program. A process represents an instance of a program in execution, comprising
program code, data, and execution context, including program counter, register values, and
stack pointer.
Purpose:
1. Concurrency: Processes enable multiple tasks to run concurrently on a computer
system, allowing efficient utilization of CPU resources.
2. Isolation: Each process operates independently of others, providing isolation in terms
of memory space and resource access.
3. Resource Management: Processes manage system resources such as CPU time,
memory, and I/O devices, ensuring fair allocation and efficient utilization.
4. Protection: Processes are protected from interference by other processes, preventing
unauthorized access or modification of data.
5. Fault Isolation: Processes provide fault isolation, so a failure in one process does not
affect the operation of other processes.
Related Topics:
• Process States: Describes the various states a process can be in during its execution
lifecycle, such as running, ready, blocked, or terminated.
• Process Synchronization: Involves coordinating the execution of multiple processes to
avoid conflicts and ensure correct program behavior in concurrent environments.
• Process Scheduling: Determines the order in which processes are executed on the
CPU to maximize throughput and responsiveness.
• Interprocess Communication: Mechanism for processes to exchange data and
synchronize their actions, enabling collaboration and coordination among concurrent
processes.
• Threads and their Management: Lightweight processes that share the same memory
space and resources within a process, increasing concurrency and responsiveness in
applications.
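To make the process concept concrete, here is a minimal sketch (assuming a POSIX system; the printed messages are illustrative) of how one running program becomes two independent processes with fork(), each with its own copy of the program's data and its own execution context:

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();            /* create a new process (the child) */

    if (pid < 0) {                 /* fork failed: no child was created */
        perror("fork");
        exit(EXIT_FAILURE);
    } else if (pid == 0) {         /* child process: has its own copy of the data */
        printf("Child  PID=%d, parent=%d\n", getpid(), getppid());
    } else {                       /* parent process: pid holds the child's PID */
        waitpid(pid, NULL, 0);     /* wait for the child to terminate */
        printf("Parent PID=%d reaped child %d\n", getpid(), pid);
    }
    return 0;
}
```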

1. Process Concept (Highly Detailed):


A process is the cornerstone of work execution within a computer system. It's an encapsulated
instance of a program in action, comprising several critical elements:
• Program Code: This is the core set of instructions that define the program's
functionality. These instructions reside in memory and are fetched by the CPU for
execution in a sequential manner.
• Data Structures: Variables and other data elements used by the program during
execution are vital for storing and manipulating information. Data structures can be
located in memory (stack, heap) or accessed from storage devices (files).
• Process State: The process state reflects its current status in its lifecycle. Common
states include:
o New: The process has been created but not yet admitted to the system. It's
typically waiting for resource allocation (memory, devices), initialization tasks to
complete, or admission control checks by the operating system.
o Running: The process is actively executing instructions on the CPU, having
acquired all necessary resources. It has the CPU's undivided attention for a
specific time slice, determined by the scheduling algorithm.
o Ready: The process is prepared to run but is currently waiting for the CPU. It has
all other resources required and resides in a ready queue, waiting its turn for
CPU allocation based on the scheduling policy. Common reasons for entering
the ready state include:
▪ Expiration of the process's CPU time slice.
▪ Completion of an I/O operation, making the process ready for further
processing.
▪ Release of a lock or resource by another process, allowing the waiting
process to proceed.
▪ Preemption by the operating system to grant CPU time to a higher-priority
process.
o Waiting: The process is blocked due to an external event and cannot proceed
until that event occurs. It has relinquished control of the CPU and is suspended.
Common reasons for entering the waiting state include:
▪ I/O Wait: The process has issued an I/O request (e.g., reading from a
disk) and needs to wait for the data transfer to complete before
continuing execution.
▪ Synchronization Wait: The process is waiting for a lock or another
process to finish accessing a shared resource to prevent race conditions
and data inconsistencies.
▪ Event Wait: The process is suspended, waiting for a specific event to
occur (e.g., a signal from another process, completion of a timer) before
proceeding.
o Terminated: The process has finished execution, and its resources are
reclaimed by the operating system. This can happen due to various reasons,
such as the program reaching its normal completion point, encountering an
error, or being explicitly terminated by the user or the operating system. The
operating system removes the process from the process table and releases its
associated resources.
• Program Counter (PC): This register acts as a pointer, holding the memory address of
the next instruction to be executed. The CPU retrieves instructions from this memory
location and executes them sequentially.
• Stack: A LIFO (Last-In-First-Out) data structure, the stack is essential for managing
function calls and local variables within a program. It stores temporary data, function
call arguments, and return addresses. When a function is called, its arguments and
local variables are pushed onto the stack. When the function returns, the corresponding
information is popped off, allowing the program to resume execution from the calling
point.
• Heap: A dynamically allocated memory region, the heap is used for storing data
structures and objects created during program execution. Processes can allocate and
deallocate memory from the heap as needed. This provides flexibility for programs with
memory requirements that may change during execution.
• Open Files: References to files currently being accessed by the process are maintained.
These references provide the process with a way to read from, write to, and manipulate
files on the storage device. The operating system keeps track of these open files to
ensure proper access control and resource management.
• Environment Variables: These are configuration variables that can influence the
process's behavior. They can be set by the user or the operating system and can affect
how the program executes. For example, environment variables might specify the path
to libraries, temporary directories, or other system settings that the program needs.
Key Components of Process Management
Below are some key component of process management.
• Process mapping: Creating visual representations of processes to understand how
tasks flow, identify dependencies, and uncover improvement opportunities.
• Process analysis: Evaluating processes to identify bottlenecks, inefficiencies, and
areas for improvement.
• Process redesign: Making changes to existing processes or creating new ones to
optimize workflows and enhance performance.
• Process implementation: Introducing the redesigned processes into the organization
and ensuring proper execution.
• Process monitoring and control: Tracking process performance, measuring key
metrics, and implementing control mechanisms to maintain efficiency and
effectiveness.
Characteristics of a Process
A process has the following attributes.
• Process Id: A unique identifier assigned by the operating system.
• Process State: Can be ready, running, etc.
• CPU registers: Like the Program Counter (CPU registers must be saved and restored
when a process is swapped in and out of the CPU)
• Accounting information: Amount of CPU used for process execution, time limits,
execution ID, etc.
• I/O status information: For example, devices allocated to the process, open files, etc
• CPU scheduling information: For example, Priority (Different processes may have
different priorities, for example, a shorter process assigned high priority in the shortest
job first scheduling)
All of the above attributes of a process are also known as the context of the process. Every
process has its own process control block(PCB), i.e. each process will have a unique PCB. All of
the above attributes are part of the PCB.
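The PCB is an OS-internal data structure whose exact layout varies between operating systems. A simplified, hypothetical C sketch of the attributes listed above might look like this (field names and sizes are illustrative, not taken from any real kernel):

```c
#include <stdint.h>

/* Simplified, illustrative Process Control Block.
   Real kernels (e.g., Linux's task_struct) contain many more fields. */
typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

typedef struct pcb {
    int           pid;              /* process id */
    proc_state_t  state;            /* current process state */
    uint64_t      program_counter;  /* saved PC, restored on a context switch */
    uint64_t      registers[16];    /* saved general-purpose registers */
    int           priority;         /* CPU scheduling information */
    uint64_t      cpu_time_used;    /* accounting information */
    int           open_files[16];   /* I/O status: open file descriptors */
    struct pcb   *next;             /* link for ready/waiting queues */
} pcb_t;
```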
States of Process
A process is in one of the following states:
• New: Newly Created Process (or) being-created process.
• Ready: After the creation process moves to the Ready state, i.e. the process is ready for
execution.
• Run: Currently running process in CPU (only one process at a time can be under
execution in a single processor)
• Wait (or Block): When a process requests I/O access.
• Complete (or Terminated): The process completed its execution.
• Suspended Ready: When the ready queue becomes full, some processes are moved to
the suspended ready state.
• Suspended Block: When the waiting queue becomes full, some blocked processes are
moved to the suspended block state.

Context Switching of Process


The process of saving the context of one process and loading the context of another process is
known as Context Switching. In simple terms, it is like loading and unloading the process from
the running state to the ready state.
When Does Context Switching Happen?
1. When a high-priority process comes to a ready state (i.e. with higher priority than the running
process).
2. An Interrupt occurs.
3. User and kernel-mode switch (It is not necessary though)
4. Preemptive CPU scheduling is used.
Context Switch vs Mode Switch
A mode switch occurs when the CPU privilege level is changed, for example when a system call
is made or a fault occurs. The kernel works in a more privileged mode than a standard user task.
If a user process wants to access things that are only accessible to the kernel, a mode switch
must occur. The currently executing process need not be changed during a mode switch. A
mode switch must occur for a process context switch to take place. Only the kernel can cause a
context switch.
CPU-Bound vs I/O-Bound Processes
A CPU-bound process requires more CPU time or spends more time in the running state. An I/O-
bound process requires more I/O time and less CPU time. An I/O-bound process spends more
time in the waiting state.
Process scheduling is an integral part of process management in the operating system. It refers to
the mechanism used by the operating system to determine which process to run next. The goal
of process scheduling is to improve overall system performance by maximizing CPU utilization,
minimizing execution time, and improving system response time.
Process Scheduling Algorithms
The operating system can use different scheduling algorithms to schedule processes. Here are
some commonly used scheduling algorithms:
• First-come, first-served (FCFS): This is the simplest scheduling algorithm, where the
process is executed on a first-come, first-served basis. FCFS is non-preemptive, which
means that once a process starts executing, it continues until it is finished or waiting for
I/O.
• Shortest Job First (SJF): SJF is a non-preemptive scheduling algorithm that selects the process
with the shortest burst time. The burst time is the time a process takes to complete its
execution. SJF minimizes the average waiting time of processes.
• Round Robin (RR): Round Robin is a preemptive scheduling algorithm that gives each process a
fixed time slice (quantum) in turn. If a process does not complete its execution within the
specified time, it is preempted and added to the end of the ready queue. RR ensures fair
distribution of CPU time to all processes and avoids starvation (a minimal simulation of this
policy follows this list).
• Priority Scheduling: This scheduling algorithm assigns priority to each process and the
process with the highest priority is executed first. Priority can be set based on process
type, importance, or resource requirements.
• Multilevel queue: This scheduling algorithm divides the ready queue into several
separate queues, each queue having a different priority. Processes are queued based
on their priority, and each queue uses its own scheduling algorithm. This scheduling
algorithm is useful in scenarios where different types of processes have different
priorities.
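As a minimal illustration of the Round Robin policy described above (the burst times and time quantum are made up, and all processes are assumed to arrive at time 0), the following C sketch replays the schedule and reports per-process turnaround and waiting times:

```c
#include <stdio.h>

int main(void) {
    int burst[] = {5, 3, 8};              /* illustrative burst times */
    int n = 3, quantum = 2;               /* three processes, quantum = 2 */
    int remaining[3], waiting[3], turnaround[3];
    for (int i = 0; i < n; i++) remaining[i] = burst[i];

    int time = 0, done = 0;
    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] > 0) {
                /* run the process for one quantum or until it finishes */
                int slice = remaining[i] < quantum ? remaining[i] : quantum;
                time += slice;
                remaining[i] -= slice;
                if (remaining[i] == 0) {
                    turnaround[i] = time;                 /* arrival assumed 0 */
                    waiting[i] = turnaround[i] - burst[i];
                    done++;
                }
            }
        }
    }
    for (int i = 0; i < n; i++)
        printf("P%d: turnaround=%d waiting=%d\n", i + 1, turnaround[i], waiting[i]);
    return 0;
}
```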
Advantages of Process Management
• Improved Efficiency: Process management can help organizations identify bottlenecks
and inefficiencies in their processes, allowing them to make changes to streamline
workflows and increase productivity.
• Cost Savings: By identifying and eliminating waste and inefficiencies, process
management can help organizations reduce costs associated with their business
operations.
• Improved Quality: Process management can help organizations improve the quality of
their products or services by standardizing processes and reducing errors.
• Increased Customer Satisfaction: By improving efficiency and quality, process
management can enhance the customer experience and increase satisfaction.
• Compliance with Regulations: Process management can help organizations comply
with regulatory requirements by ensuring that processes are properly documented,
controlled, and monitored.
Disadvantages of Process Management
• Time and Resource Intensive: Implementing and maintaining process management
initiatives can be time-consuming and require significant resources.
• Resistance to Change: Some employees may resist changes to established processes,
which can slow down or hinder the implementation of process management initiatives.
• Overemphasis on Process: Overemphasis on the process can lead to a lack of focus
on customer needs and other important aspects of business operations.
• Risk of Standardization: Standardizing processes too much can limit flexibility and
creativity, potentially stifling innovation.
• Difficulty in Measuring Results: Measuring the effectiveness of process management
initiatives can be difficult, making it challenging to determine their impact on
organizational performance.
What is Process Scheduling?
Process scheduling is the activity of the process manager that handles the removal of the
running process from the CPU and the selection of another process based on a particular
strategy.
Process scheduling is an essential part of a Multiprogramming operating system. Such
operating systems allow more than one process to be loaded into the executable memory at a
time and the loaded process shares the CPU using time multiplexing.

Process scheduler
Categories of Scheduling
Scheduling falls into one of two categories:
• Non-preemptive: In this case, resources allocated to a process cannot be taken away until the
process has finished running or voluntarily moves to a waiting state; only then is the CPU
switched to another process.
• Preemptive: In this case, the OS assigns the CPU to a process for a predetermined period. A
process may be switched from the running state to the ready state, or from the waiting state to
the ready state, during resource allocation. This switching happens because the CPU may give
other processes priority and substitute the currently active process for a higher-priority process.
Types of Process Schedulers
There are three types of process schedulers:
1. Long Term or Job Scheduler
It brings the new process to the ‘Ready State’. It controls the Degree of Multi-programming, i.e.,
the number of processes present in a ready state at any point in time. It is important that the
long-term scheduler make a careful selection of both I/O-bound and CPU-bound processes. I/O-bound
tasks are those that spend much of their time on input and output operations, while CPU-bound
processes are those that spend most of their time on the CPU. The job scheduler increases efficiency by
maintaining a balance between the two. They operate at a high level and are typically used in
batch-processing systems.
2. Short-Term or CPU Scheduler
It is responsible for selecting one process from the ready state and scheduling it into the running
state. Note: the short-term scheduler only selects the process; it does not load the process into the
running state (that is the dispatcher's job). This is where all the scheduling algorithms are used. The
CPU scheduler is responsible for ensuring that processes with long burst times do not starve other processes.

Short Term Scheduler


The dispatcher is responsible for loading the process selected by the Short-term scheduler on
the CPU (ready to running state). Context switching is done by the dispatcher only. A dispatcher
does the following:
• Switching context.
• Switching to user mode.
• Jumping to the proper location in the newly loaded program.
3. Medium-Term Scheduler
It is responsible for suspending and resuming the process. It mainly does swapping (moving
processes from main memory to disk and vice versa). Swapping may be necessary to improve
the process mix or because a change in memory requirements has overcommitted available
memory, requiring memory to be freed up. It is helpful in maintaining a perfect balance between
the I/O bound and the CPU bound. It reduces the degree of multiprogramming.

Medium Term Scheduler


Some Other Schedulers
• I/O schedulers: I/O schedulers are in charge of managing the execution of I/O
operations such as reading and writing to discs or networks. They can use various
algorithms to determine the order in which I/O operations are executed, such
as FCFS (First-Come, First-Served) or RR (Round Robin).
• Real-time schedulers: In real-time systems, real-time schedulers ensure that critical
tasks are completed within a specified time frame. They can prioritize and schedule
tasks using various algorithms such as EDF (Earliest Deadline First) or RM (Rate
Monotonic).

Comparison Among Schedulers

1. Type: The long-term scheduler is a job scheduler. The short-term scheduler is a CPU scheduler. The medium-term scheduler is a process-swapping scheduler.
2. Speed: The long-term scheduler is generally slower than the short-term scheduler. The short-term scheduler is the fastest of the three. The speed of the medium-term scheduler lies in between the short-term and long-term schedulers.
3. Degree of multiprogramming: The long-term scheduler controls the degree of multiprogramming. The short-term scheduler gives less control over how much multiprogramming is done. The medium-term scheduler reduces the degree of multiprogramming.
4. Presence in time-sharing systems: The long-term scheduler is barely present or nonexistent in time-sharing systems. The short-term scheduler is minimal in time-sharing systems. The medium-term scheduler is a component of time-sharing systems.
5. Function: The long-term scheduler selects processes from the job pool and loads them into memory for execution. The short-term scheduler selects those processes which are ready to execute. The medium-term scheduler can re-introduce a process into memory so that its execution can be continued.
Two-State Process Model
The terms “running” and “non-running” are used to describe the two states of the two-state process
model.
1. Running: The process that is currently being executed on the CPU.
2. Not running: Processes that are not currently running are kept in a queue, awaiting
execution. A newly created process enters the system in this state. Each entry in the queue is a
pointer to a specific process, and linked lists are used to implement the queue. When a running
process is interrupted, it is moved to the back of this queue; when it completes, it is discarded,
whether it succeeded or failed. In either case, the dispatcher then chooses the next process to
run from the queue.
Context Switching
In order for a process execution to be continued from the same point at a later time, context
switching is a mechanism to store and restore the state or context of a CPU in the Process
Control block. A context switcher makes it possible for multiple processes to share a single CPU
using this method. A multitasking operating system must include context switching among its
features. The state of the currently running process is saved into the process control block when
the scheduler switches the CPU from executing one process to another. The state used to set up
the program counter, registers, etc. for the process that will run next is then loaded from its own
PCB. After that, the second process can start executing.

During a context switch, the following information belonging to the process is saved in and restored from its PCB:
• Program Counter
• Scheduling information
• The base and limit register value
• Currently used register
• Changed State
• I/O State information
• Accounting information

Process Synchronization:
Process Synchronization was introduced to handle problems that arise when multiple processes
execute concurrently. It is the task of coordinating the execution of processes in such a way
that no two processes can access the same shared data and resources at the same time.
On the basis of synchronization, processes are categorized into two types, given
below:
Independent Process
Two processes are said to be independent if the execution of one process does not affect the
execution of another process.
Cooperative Process
Two processes are said to be cooperative if the execution of one process affects the execution of another
process. These processes need to be synchronized so that the order of execution can be guaranteed.

Process synchronization is used for the following reasons:


• It is a procedure that is involved in order to preserve the appropriate order of execution
of cooperative processes.
• In order to synchronize the processes, there are various synchronization mechanisms.
• Process Synchronization is mainly needed in a multi-process system when multiple
processes are running together, and more than one process tries to gain access to the
same shared resource or data at the same time.

Race Condition
When more than one process is executing the same code, or accessing the same memory or any
shared variable, there is a possibility that the output or the value of the shared variable ends up
wrong: the processes effectively race one another, and the final result depends on which one
finishes last. This condition is commonly known as a race condition.

Critical Section:
A Critical Section is a code segment that accesses shared variables and has to be executed as
an atomic action. It means that in a group of cooperating processes, at a given point of time,
only one process must be executing its critical section. If any other process also wants to
execute its critical section, it must wait until the first one finishes. The entry to the critical
section is mainly handled by the wait() function, while the exit from the critical section is controlled
by the signal() function.

A race condition is a situation that may occur inside a critical section. A race condition
in the critical section happens when the result of multiple thread execution differs according to
the order in which the threads execute. This situation in critical sections can be avoided if
the critical section is treated as an atomic instruction (a code sketch appears after the three
conditions listed below).

The solution to the Critical Section Problem:

A solution to the critical section problem must satisfy the following three conditions:
1. Mutual Exclusion
Out of a group of cooperating processes, only one process can be in its critical section at a given
point of time.
2. Progress
If no process is in its critical section, and if one or more threads want to execute their critical
section then any one of these threads must be allowed to get into its critical section.
3. Bounded Waiting
After a process makes a request for getting into its critical section, there is a limit for how many
other processes can get into their critical section, before this process's request is granted. So
after the limit is reached, the system must grant the process permission to get into its critical
section.
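The discussion above does not prescribe a particular mechanism for mutual exclusion. As a minimal sketch, the following C program uses a POSIX mutex as the entry and exit sections around a critical section that updates a shared counter; with the mutex in place the final value is always 200000, whereas removing the lock/unlock calls exposes the race condition (compile with -pthread):

```c
#include <pthread.h>
#include <stdio.h>

static long counter = 0;                     /* shared data */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);           /* entry section: acquire the lock */
        counter++;                           /* critical section: shared update */
        pthread_mutex_unlock(&lock);         /* exit section: release the lock */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);      /* always 200000 with the mutex */
    return 0;
}
```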

Classical Problems of Synchronization

These problems are used for testing nearly every newly proposed synchronization scheme. The
following problems of synchronization are considered as classical problems:
1. Bounded-buffer (or Producer-Consumer) Problem,
2. Dining-Philosophers Problem,
3. Readers and Writers Problem,
4. Sleeping Barber Problem

Bounded-Buffer (or Producer-Consumer) Problem

This is also called the producer-consumer problem. The solution to this problem is to create two
counting semaphores, “full” and “empty”, to keep track of the current number of full and empty
buffer slots respectively. Producers produce items and consumers consume them, but both
operate on the same fixed set of buffer slots.

Problem : To make sure that the producer won’t try to add data into the buffer if it’s full and that
the consumer won’t try to remove data from an empty buffer.

Solution : The producer is to either go to sleep or discard data if the buffer is full. The next time
the consumer removes an item from the buffer, it notifies the producer, who starts to fill the
buffer again. In the same way, the consumer can go to sleep if it finds the buffer to be empty.
The next time the producer puts data into the buffer, it wakes up the sleeping consumer.
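A minimal sketch of the semaphore-based solution described above, using POSIX unnamed semaphores for the “empty” and “full” counts plus a mutex around the buffer itself (the buffer size and item counts are illustrative; compile with -pthread):

```c
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 5                                   /* buffer capacity */
static int buffer[N];
static int in = 0, out = 0;

static sem_t empty_slots;                     /* counts empty buffer slots */
static sem_t full_slots;                      /* counts filled buffer slots */
static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

static void *producer(void *arg) {
    (void)arg;
    for (int item = 1; item <= 10; item++) {
        sem_wait(&empty_slots);               /* block if the buffer is full */
        pthread_mutex_lock(&mutex);
        buffer[in] = item;
        in = (in + 1) % N;
        pthread_mutex_unlock(&mutex);
        sem_post(&full_slots);                /* signal: one more full slot */
    }
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < 10; i++) {
        sem_wait(&full_slots);                /* block if the buffer is empty */
        pthread_mutex_lock(&mutex);
        int item = buffer[out];
        out = (out + 1) % N;
        pthread_mutex_unlock(&mutex);
        sem_post(&empty_slots);               /* signal: one more empty slot */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&empty_slots, 0, N);             /* all slots empty initially */
    sem_init(&full_slots, 0, 0);              /* no slots full initially */
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
```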

Schedulers
• Short-term scheduler (or CPU scheduler) – selects which process should be
executed next and allocates CPU
o Sometimes the only scheduler in a system
o Short-term scheduler is invoked frequently (milliseconds), so it must be fast
• Long-term scheduler (or job scheduler) – selects which processes should be
brought into the ready queue
o Long-term scheduler is invoked infrequently (seconds or minutes), so it may be
slow
o The long-term scheduler controls the degree of multiprogramming
• Processes can be described as either:
o I/O-bound process – spends more time doing I/O than computations, many
short CPU bursts
o CPU-bound process – spends more time doing computations; few very long
CPU bursts
• Long-term scheduler strives for good process mix
3. Process Synchronization: Ensuring Data Consistency
Process synchronization is a vital mechanism that controls access to shared resources (e.g.,
data structures, memory locations) by multiple processes. This ensures data consistency and
prevents issues like race conditions.
• Race Condition: A scenario where the outcome of a program depends on the
unpredictable timing of process execution. For instance, imagine two processes
updating the same counter variable simultaneously. Without synchronization, one
process's update might be overwritten by the other, leading to an incorrect final value.
Process synchronization aims to achieve:
• Mutual Exclusion: Guarantees that only one process can be in its critical section (code
that modifies shared resources) at a time. This prevents race conditions.
• Orderly Access: Ensures controlled and predictable access to shared resources by
processes.
• Data Consistency: Maintains the integrity of shared data by preventing processes from
interfering with each other's modifications.
Related Topics:
• Critical Section: A specific section of a process's code that accesses shared resources.
Only one process can be in its critical section at any given time.
• Mutual Exclusion Mechanisms: Techniques like semaphores, mutexes, and monitors
are used to implement mutual exclusion. These mechanisms allow processes to
acquire and release locks on shared resources.
• Classical Synchronization Problems: Well-defined problems like the Producer-
Consumer Problem and Reader-Writer Problem showcase the complexities of
synchronization and solutions. These problems illustrate the challenges of coordinating
access to shared resources between multiple processes.
4. Process Scheduling: Optimizing CPU Utilization
Process scheduling is the act of allocating CPU time to processes within an OS. It's a critical
function that determines the order in which processes are executed, impacting system
performance metrics like throughput (amount of work done) and response time (time taken to
respond to user requests).
Process Transitions:
Processes move between states based on events (e.g., I/O completion, resource availability)
and scheduling decisions. For example, a process transitions from running to waiting if it
encounters an I/O wait; it transitions from ready to running when the scheduler assigns it CPU
time.
5. Scheduling Algorithms: Prioritizing Process Execution
• First-Come, First-Served (FCFS):
o While simple to implement, FCFS can severely delay shorter processes if longer
ones arrive first (the convoy effect). Processes with long execution times can
monopolize the CPU, causing longer waiting times for subsequent processes.
• Shortest Job First (SJF):
o The process with the shortest estimated execution time is given priority. The
scheduler attempts to pick the process that will finish the fastest, minimizing the
overall average waiting time.
o While this can improve average waiting time, SJF relies on accurate estimations
of process execution times, which might not always be available. It can also lead
to starvation for longer processes if a stream of short processes keeps entering
the ready queue. Scheduling overhead can be high due to the need to constantly
evaluate process execution times.
• Priority Scheduling:
o Processes are assigned priorities. The process with the highest priority gets CPU
time first. Priorities can be static (predefined) or dynamic (adjusted based on
process behavior).
o This approach allows prioritizing critical processes that require faster execution.
However, it can lead to starvation for lower-priority processes if higher-priority
processes are continuously submitted. Improper priority assignment can lead to
unfairness.
• Round-Robin (RR):
o Processes are allocated CPU time in short slices (time quanta). Once a process
uses its allocated time quantum, it is preempted (paused) and placed at the
back of the ready queue. The scheduler then moves on to the next process in the
queue. If a process finishes its execution before its time quantum expires, it
relinquishes the CPU voluntarily.
o RR ensures fairness by giving all processes a chance to run and provides a quick
turnaround time for processes, making it responsive to user interaction.
However, frequent context switching between processes can add overhead.
Processes with long execution times might not be able to finish within a single
time quantum, leading to longer overall execution times.
Choosing the Right Algorithm:
The optimal scheduling algorithm depends on the specific needs of the system and the types of
processes it runs. Here are some factors to consider:
• Response time: If low response time is critical (e.g., interactive systems), RR or priority
scheduling with high priority for interactive processes might be suitable.
• Throughput: If maximizing the number of completed processes per unit time is the goal
(e.g., batch processing systems), SJF or priority scheduling with high priority for short
processes might be preferable.
• Fairness: If ensuring fair allocation of CPU time to all processes is important, FCFS or
RR might be better choices.
Additional Considerations:
• Multilevel Queue Scheduling: Combines multiple scheduling algorithms at different
priority levels. For example, a system might use RR for short processes in one queue and
priority scheduling for longer processes in another.
• Real-Time Scheduling: Used in systems with strict timing constraints (e.g., embedded
systems). These algorithms prioritize processes based on deadlines to ensure timely
completion of critical tasks.
6. Interprocess Communication (IPC): Enabling Process Collaboration
IPC mechanisms allow processes to communicate and exchange data with each other. This is
essential for tasks like:
• Sharing data between processes (e.g., passing data between a web server and a
database process)
• Coordinating activities between processes (e.g., a parent process spawning a child
process and waiting for it to finish)
• Building modular systems (e.g., processes acting as independent components that
communicate to achieve a larger goal)
Common IPC mechanisms include:
• Shared Memory: Processes can access and modify a shared region of memory.
• Pipes: Processes can write data to one end of a pipe, and another process can read
data from the other end (a sketch follows this list).
• Message Queues: Processes can send messages to a queue, and other processes can
receive messages from the queue.
• Semaphores: Used for synchronization and to signal events between processes.
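As a concrete example of one of these mechanisms, the pipe listed above, here is a minimal POSIX sketch in which a parent process writes a message into a pipe and its child reads it (the message text is arbitrary; error handling of fork() is omitted for brevity):

```c
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];                         /* fd[0]: read end, fd[1]: write end */
    char buf[64];

    if (pipe(fd) == -1) {
        perror("pipe");
        return 1;
    }
    if (fork() == 0) {                 /* child: reads from the pipe */
        close(fd[1]);                  /* close the unused write end */
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("child received: %s\n", buf);
        }
        close(fd[0]);
        return 0;
    }
    /* parent: writes into the pipe */
    close(fd[0]);                      /* close the unused read end */
    const char *msg = "hello from parent";
    write(fd[1], msg, strlen(msg));
    close(fd[1]);
    wait(NULL);                        /* wait for the child to finish */
    return 0;
}
```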
7. Threads and Their Management:
Threads are lightweight units of execution within a process. A single process can have multiple
threads, allowing it to perform multiple tasks concurrently. Threads share the same memory
space and resources of the process they belong to. This makes them more efficient for certain
tasks compared to creating separate processes.
Thread Management:
• OS manages thread creation, scheduling, and synchronization.
• Threads within a process can cooperate and share data easily due to their shared
memory space.
• Synchronization mechanisms like mutexes are still necessary to prevent race conditions
when multiple threads within a process access shared data.
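A minimal sketch of thread creation and joining with POSIX threads, illustrating that a thread updates a variable living in the shared address space of its process (the variable and value are illustrative; compile with -pthread):

```c
#include <pthread.h>
#include <stdio.h>

static int shared_value = 0;               /* lives in the process's memory,
                                              visible to every thread */

static void *set_value(void *arg) {
    shared_value = *(int *)arg;            /* the thread writes the shared data */
    return NULL;
}

int main(void) {
    pthread_t tid;
    int new_value = 42;

    pthread_create(&tid, NULL, set_value, &new_value);  /* spawn a thread */
    pthread_join(tid, NULL);                             /* wait for it */

    /* The main thread sees the update because threads share the
       process's address space (no copying, unlike fork()). */
    printf("shared_value = %d\n", shared_value);
    return 0;
}
```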
8. Security Issues in Process Management :
• Unauthorized Process Creation: Malicious programs might attempt to create
unauthorized processes to gain access to system resources or perform harmful actions.
• Privilege Escalation: A process might exploit vulnerabilities to elevate its privileges and
gain unauthorized access to sensitive system resources.
• Resource Starvation: A malicious process might consume excessive resources (CPU,
memory) and starve legitimate processes, degrading system performance.
• Information Leaks: Processes might inadvertently leak sensitive information through
shared memory or IPC mechanisms if not properly secured.
Security Measures in Process Management:
• Access Control Mechanisms: The OS can enforce access control mechanisms to
restrict which users or processes can create new processes and what resources they
can access.
• User Isolation: Techniques like user accounts and virtual memory can isolate
processes from each other, preventing them from accessing each other's resources or
modifying system files.
• Least Privilege Principle: Processes should be granted only the minimum privileges
necessary for them to function correctly. This minimizes the potential damage if a
process is compromised.
• Security Audits and Monitoring: Regularly monitor system activity and perform
security audits to identify suspicious process behavior that might indicate security
threats.
Module III CPU Scheduling
CPU Scheduling Criteria
CPU scheduling is essential for the system’s performance and ensures that processes are
executed correctly and on time. Different CPU scheduling algorithms have different properties, and
the choice of a particular algorithm depends on various factors. Many criteria have been
suggested for comparing CPU scheduling algorithms.
What is CPU scheduling?
CPU Scheduling is a process that allows one process to use the CPU while another process is
delayed due to the unavailability of some resource such as I/O, thus making full use of the CPU.
In short, CPU scheduling decides the order and priority of the processes to run and allocates
the CPU time based on various parameters such as CPU usage, throughput, turnaround, waiting
time, and response time. The purpose of CPU Scheduling is to make the system more efficient,
faster, and fairer.
Criteria of CPU Scheduling
CPU Scheduling has several criteria. Some of them are mentioned below.
1. CPU utilization
The main objective of any CPU scheduling algorithm is to keep the CPU as busy as possible.
Theoretically, CPU utilization can range from 0 to 100 percent, but in a real system it typically varies
from 40 to 90 percent depending on the load on the system.
2. Throughput
A measure of the work done by the CPU is the number of processes being executed and
completed per unit of time. This is called throughput. The throughput may vary depending on
the length or duration of the processes.

CPU Scheduling Criteria


3. Turnaround Time
For a particular process, an important criterion is how long it takes to execute that process. The
time elapsed from the time of submission of a process to the time of completion is known as
the turnaround time. Turn-around time is the sum of times spent waiting to get into memory,
waiting in the ready queue, executing in CPU, and waiting for I/O.
Turn Around Time = Completion Time – Arrival Time.

4. Waiting Time
A scheduling algorithm does not affect the time required to complete the process once it starts
execution. It only affects the waiting time of a process i.e. time spent by a process waiting in the
ready queue.
Waiting Time = Turnaround Time – Burst Time.
5. Response Time
In an interactive system, turn-around time is not the best criterion. A process may produce
some output fairly early and continue computing new results while previous results are being
output to the user. Thus another criterion is the time taken from submission of the request until
the first response is produced. This measure is called response time (a worked example using
these formulas follows this list).
Response Time = CPU Allocation Time (when the CPU was allocated for the first time) – Arrival
Time

6. Completion Time
The completion time is the time when the process stops executing, which means that the
process has completed its burst time and is completely executed.
7. Priority
If the operating system assigns priorities to processes, the scheduling mechanism should favor
the higher-priority processes.
Note: The Round Robin scheduling algorithm works well in a time-sharing system where tasks have to
be completed in a short period of time. The SJF scheduling algorithm works best in a batch
processing system where shorter jobs have to be completed first in order to increase
throughput. The Priority scheduling algorithm works better in a real-time system where certain tasks
have to be prioritized so that they can be completed in a timely manner.
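As a small worked example of the formulas above (the numbers are made up): suppose a process arrives at time 2, has a burst time of 5, is first allocated the CPU at time 6, and completes at time 14. Then Turnaround Time = 14 – 2 = 12, Waiting Time = 12 – 5 = 7, and Response Time = 6 – 2 = 4.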
Factors Influencing CPU Scheduling Algorithms
There are many factors that influence the choice of CPU scheduling algorithm. Some of them
are listed below.
• The number of processes.
• The processing time required.
• The urgency of tasks.
• The system requirements.
Selecting the correct algorithm will ensure that the system will use system resources efficiently,
increase productivity, and improve user satisfaction.
CPU Scheduling Algorithms
There are several CPU Scheduling Algorithms, that are listed below.
• First Come First Served (FCFS)
• Shortest Job First (SJF)
• Longest Job First (LJF)
• Priority Scheduling
• Round Robin (RR)
• Shortest Remaining Time First (SRTF)
• Longest Remaining Time First (LRTF)
1. Fundamental Scheduling Concepts:
• Process: An active entity representing a distinct unit of work within the OS. Processes
consist of code, data, and a program counter that tracks execution progress. They
compete for various resources, particularly CPU time, to complete their assigned tasks.
• CPU Burst: The amount of CPU time a process requires to execute continuously without
interruption. This burst time represents the period the process actively uses the CPU for
computations.
• Waiting Time: The duration a process spends in the ready queue, awaiting its turn to be
allocated the CPU. Processes in the ready queue are prepared to run but are stalled due
to unavailability of the CPU.
• Turnaround Time: The total time it takes for a process to complete execution, from the
moment it's submitted to the OS until its completion. It encompasses the time spent
waiting in queues (ready, I/O), performing computations on the CPU, and any I/O
operations.
• Response Time: The time it takes for the OS to begin responding to a process's request
for CPU time after submission. This includes the time for the process to enter the ready
queue and potentially the time spent waiting in the queue before acquiring the CPU.
• Context Switching: The overhead associated with switching the CPU from one running
process to another. This involves saving the state of the previous process (registers,
memory) and restoring the state of the new process to prepare it for execution.
2. Scheduling Techniques:
• Preemptive Scheduling: The OS can dynamically reclaim the CPU from a running
process with lower priority and grant it to another process with higher priority. This
allows for flexible prioritization based on process needs. The preempted process is then
added back to the ready queue and will get another chance to run when its turn comes
again.
• Non-Preemptive Scheduling: Once allocated the CPU, a process retains control until it
finishes its CPU burst or voluntarily relinquishes the CPU through system calls or I/O
wait. This approach offers simplicity but might lead to starvation for lower-priority
processes if a high-priority process has a long CPU burst.
3. Techniques of Scheduling:
Definition: Techniques of scheduling refer to the different strategies and algorithms used to
schedule processes for execution on the CPU.
• Preemptive Scheduling: Scheduling algorithms that allow the operating
system to interrupt a currently executing process to start or resume
another process. Examples include Round Robin and Shortest
Remaining Time First.
• Non-Preemptive Scheduling: Scheduling algorithms where the
currently executing process cannot be interrupted until it completes its
CPU burst. Examples include First-Come-First-Serve and Shortest Job
First.
• Multiprogramming: Technique where multiple processes are loaded into
main memory simultaneously for concurrent execution. It increases CPU
utilization and throughput by keeping the CPU busy with useful work.
• Multitasking: Technique allowing multiple processes to execute
concurrently by rapidly switching between them on the CPU. It gives the
illusion of parallel execution to users and improves system
responsiveness.

2. Preemptive and Non-Preemptive Scheduling:


• Definition: Preemptive scheduling involves interrupting the execution of a
process to start or resume another process, while non-preemptive scheduling
allows a process to continue executing until it voluntarily relinquishes the CPU.
• Related Subtopics:
• First-Come-First-Serve (FCFS): Simplest scheduling algorithm where
processes are executed in the order they arrive in the ready queue. It
suffers from poor turnaround time, especially for long-running
processes.
• Shortest Request Next (SRN): Scheduling algorithm that prioritizes
processes based on their expected CPU burst time. It minimizes average
waiting time but may lead to starvation for long-running processes.
• Highest Response Ratio Next (HRRN): Scheduling algorithm that
prioritizes processes by their response ratio, (waiting time + CPU burst
time) / CPU burst time. It provides better turnaround time for both short
and long processes.
• Round Robin (RR): Preemptive scheduling algorithm where each
process is executed for a fixed time slice (quantum) before being
preempted and moved to the end of the ready queue. It ensures fair CPU
allocation but may suffer from high context switch overhead.
• Least Completed Next (LCN): Scheduling algorithm that prioritizes the
process that has so far consumed the least amount of CPU time. It keeps
the progress of all processes roughly even.
• Shortest Time to Go (STG): Scheduling algorithm that selects the
process with the shortest remaining CPU burst time. It ensures that the
process with the least amount of work remaining is executed next.
• Long, Medium, Short Scheduling (LMSS): Strategy for classifying
processes based on their CPU burst characteristics and selecting
appropriate scheduling algorithms. It aims to optimize performance for
different types of workloads.
• Priority Scheduling: Scheduling algorithm where processes are
assigned priorities, and the highest priority process is selected for
execution. It ensures that high-priority tasks are executed with minimal
delay.
3. Deadlock:
• Definition: Deadlock is a situation where two or more processes are unable to
proceed because each is waiting for the other to release a resource.
• Related Subtopics:
• System Model: Describes the resources, processes, and interactions in
the system that can lead to deadlock. It helps identify potential deadlock
scenarios and develop strategies to prevent or resolve them.
• Deadlock Characterization: Identifying the conditions necessary for
deadlock to occur, such as mutual exclusion, hold and wait, no
preemption, and circular wait. Understanding these conditions is crucial
for deadlock prevention and detection.
• Prevention: Techniques for preventing deadlock by eliminating one or
more of the necessary conditions for its occurrence. Examples include
requiring processes to request all resources at once (eliminating hold and
wait) and imposing an ordering on resource acquisition to avoid circular
wait (see the lock-ordering sketch at the end of this section).
• Avoidance: Strategies for avoiding deadlock by dynamically allocating
resources in a way that ensures safety and avoids deadlock-prone
situations. Techniques such as resource allocation graphs and deadlock
avoidance algorithms are used to detect and prevent potential
deadlocks.
• Detection: Methods for detecting deadlock when it occurs, such as
resource allocation graphs and deadlock detection algorithms. These
techniques help identify deadlock situations and take appropriate
actions to resolve them.
• Recovery from Deadlock: Procedures for recovering from deadlock,
such as process termination, resource preemption, and rollback. These
techniques help restore system functionality and prevent prolonged
deadlock situations.
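As a minimal sketch of one prevention idea mentioned above, breaking the circular-wait condition by imposing a global lock ordering, both threads below always acquire lock_a before lock_b, so a cycle of waiting can never form (lock names are illustrative; compile with -pthread):

```c
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

/* Both threads follow the same global order: lock_a before lock_b.
   If one thread took lock_b first, a circular wait (deadlock) could occur. */
static void *worker(void *name) {
    pthread_mutex_lock(&lock_a);
    pthread_mutex_lock(&lock_b);
    printf("%s holds both resources\n", (const char *)name);
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, "thread 1");
    pthread_create(&t2, NULL, worker, "thread 2");
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```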
Module IV Memory Management
1. Memory Partition:
• Definition: Memory partitioning involves dividing the physical memory
into fixed-size or variable-size partitions to accommodate multiple
processes simultaneously. It's a fundamental aspect of memory
management in operating systems.
• Related Subtopics:
• Fixed-size Partitioning: With fixed-size partitioning, the memory is
divided into partitions of equal size. Each partition can hold exactly
one process. This approach is relatively simple but can lead to
internal fragmentation, where memory is allocated to a process
but remains unused.
• Variable-size Partitioning: Variable-size partitioning allows for
more flexibility by dividing memory into partitions of different sizes.
This can lead to more efficient memory utilization as processes
can be allocated memory that closely matches their actual size.
However, it requires dynamic memory management to allocate
and deallocate memory efficiently.
• Hole Allocation: Hole allocation involves finding suitable "holes"
or free memory regions in the memory space to allocate to
processes. The goal is to find a hole that is large enough to
accommodate the process. Memory allocation algorithms, such
as first fit, best fit, and worst fit, are used to find the most suitable
hole for allocation (a first-fit sketch follows this list).
• Memory Compaction: Memory compaction is a technique used to
reduce fragmentation by rearranging memory contents to create
larger contiguous free memory blocks. This is done by moving
allocated memory blocks and merging adjacent free blocks.
Compaction helps to maximize available memory for allocation
and reduce the impact of fragmentation.
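A minimal sketch of the first-fit strategy mentioned in the hole-allocation item above: given a list of free holes (the sizes are made up), it returns the index of the first hole large enough for the request; best fit and worst fit would scan all holes instead, as noted in the comments.

```c
#include <stdio.h>

/* Return the index of the first hole that can hold `request`, or -1 if none.
   Best fit would instead scan all holes for the smallest adequate one,
   and worst fit for the largest. */
static int first_fit(const int holes[], int n, int request) {
    for (int i = 0; i < n; i++) {
        if (holes[i] >= request)
            return i;
    }
    return -1;
}

int main(void) {
    int holes[] = {100, 500, 200, 300, 600};   /* free memory regions (KB) */
    int n = 5;
    int request = 212;                         /* process needs 212 KB */

    int idx = first_fit(holes, n, request);
    if (idx >= 0)
        printf("Allocate from hole %d (size %d KB)\n", idx, holes[idx]);
    else
        printf("No hole large enough; compaction may be needed\n");
    return 0;
}
```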
2. Memory Management Techniques:
• Definition: Memory management techniques are strategies and
mechanisms used to allocate and deallocate memory to processes
efficiently, ensuring optimal utilization of available memory resources.
• Related Subtopics:
• Paging: Paging divides logical memory (processes) into fixed-size blocks
called pages and physical memory into blocks of the same size called
frames. This simplifies memory management and address translation, as
each page can be managed independently. Paging allows for efficient use
of physical memory by allocating memory on a per-page basis (an
address-translation sketch follows this list).
• Segmentation: Segmentation divides processes into logical
segments of variable sizes, such as code, data, and stack
segments. This provides a flexible memory allocation scheme,
allowing each segment to grow or shrink dynamically as needed.
However, segmentation can lead to external fragmentation, where
free memory becomes fragmented into small, unusable chunks.
• Virtual Memory: Virtual memory allows processes to use more
memory than physically available by utilizing disk space as an
extension of RAM. It provides benefits such as increased address
space, improved multitasking, and the ability to run large programs
that would otherwise not fit into physical memory.
• Demand Paging: Demand paging is a virtual memory technique
where pages are loaded into memory only when they are needed.
This reduces the initial loading time and memory wastage by
loading only the required pages into memory. Pages are loaded on
demand, based on the program's access patterns and memory
requirements.
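To make the paging scheme above concrete, the following sketch (the page size and page-table contents are assumptions for illustration) splits a logical address into a page number and an offset and translates it to a physical address using a tiny page table:

```c
#include <stdio.h>

#define PAGE_SIZE 4096                         /* 4 KB pages (assumption) */

int main(void) {
    /* Hypothetical page table: page_table[p] = frame number holding page p. */
    int page_table[] = {5, 9, 2, 7};

    unsigned logical = 10000;                  /* example logical address */
    unsigned page    = logical / PAGE_SIZE;    /* page number: 10000/4096 = 2 */
    unsigned offset  = logical % PAGE_SIZE;    /* offset within the page: 1808 */

    unsigned frame    = page_table[page];      /* look up the frame number */
    unsigned physical = frame * PAGE_SIZE + offset;

    printf("logical %u -> page %u, offset %u -> physical %u\n",
           logical, page, offset, physical);
    return 0;
}
```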
3. Page Replacement Algorithms:
• Definition: Page replacement algorithms are used in virtual memory
systems to decide which page to evict from memory when a new page
needs to be loaded.
• Related Subtopics:
• FIFO (First-In-First-Out) Algorithm: FIFO replaces the oldest page
in memory, i.e., the page that was brought into memory first. While
simple to implement, FIFO may suffer from Belady's anomaly,
where increasing the number of frames may increase the number
of page faults. (A small simulation of FIFO appears after this list.)
• Least Recently Used (LRU) Algorithm: LRU replaces the page that
has not been accessed for the longest time. It requires maintaining
access timestamps for each page and can be implemented using
counters or linked lists. LRU is effective in minimizing page faults
but may be computationally expensive.
• Optimal Algorithm: Optimal replaces the page that will not be
used for the longest period in the future. While ideal for minimizing
page faults, Optimal is impractical to implement as it requires
knowledge of future page accesses.
• Clock (Second Chance) Algorithm: Clock enhances FIFO by
giving a second chance to pages that would have been replaced
under FIFO. It uses a clock hand to mark pages and replaces the
first unmarked page encountered.
• Least Frequently Used (LFU) Algorithm: LFU replaces the page
with the least number of accesses. It requires maintaining a
counter for each page to track access frequency, which can be
computationally expensive but can effectively prioritize pages that
are frequently accessed.
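To make the FIFO policy above concrete, here is a minimal Bash sketch that counts page faults for a FIFO replacement policy; the reference string and frame count are assumed purely for illustration and do not come from the text.

    #!/bin/bash
    # Minimal FIFO page-replacement simulation (illustrative values only).
    frames=3                            # assumed number of physical frames
    refs=(7 0 1 2 0 3 0 4 2 3 0 3 2)    # assumed page-reference string
    memory=()                           # resident pages, oldest first
    faults=0

    for p in "${refs[@]}"; do
      # A page fault occurs if the referenced page is not already resident
      if [[ ! " ${memory[*]} " == *" $p "* ]]; then
        ((faults++))
        memory+=("$p")
        # When all frames are full, evict the oldest page (FIFO order)
        if (( ${#memory[@]} > frames )); then
          memory=("${memory[@]:1}")
        fi
      fi
    done

    echo "FIFO page faults: $faults out of ${#refs[@]} references"

Replacing the eviction rule (for example, evicting the least recently used page instead of the oldest) turns the same loop into an LRU simulation.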

What is Main Memory?


The main memory is central to the operation of a Modern Computer. Main Memory is a
large array of words or bytes, ranging in size from hundreds of thousands to billions.
Main memory is a repository of rapidly available information shared by the CPU and I/O
devices. Main memory is the place where programs and information are kept when the
processor is effectively utilizing them. Main memory is associated with the processor,
so moving instructions and information into and out of the processor is extremely
fast. Main memory is also known as RAM (Random Access Memory). This memory is
volatile. RAM loses its data when a power interruption occurs.

Main Memory
What is Memory Management?
In a multiprogramming computer, the Operating System resides in a part of memory,
and the rest is used by multiple processes. The task of subdividing the memory among
different processes is called Memory Management. Memory management is a method
in the operating system to manage operations between main memory and disk during
process execution. The main aim of memory management is to achieve efficient
utilization of memory.
Why Memory Management is Required?
• Allocate and de-allocate memory before and after process execution.
• To keep track of the memory space used by processes.
• To minimize fragmentation issues.
• To ensure proper utilization of main memory.
• To maintain data integrity during process execution.
Now we are discussing the concept of Logical Address Space and Physical Address
Space
Logical and Physical Address Space
• Logical Address Space: An address generated by the CPU is known as a
“Logical Address”. It is also known as a virtual address. The logical address space
can be defined as the size of the process. Unlike a physical address, a logical address
can change, since it is translated to a physical address at run time.
• Physical Address Space: An address seen by the memory unit (i.e the one
loaded into the memory address register of the memory) is commonly known as
a “Physical Address”. A Physical address is also known as a Real address. The set
of all physical addresses corresponding to these logical addresses is known as
Physical address space. A physical address is computed by MMU. The run-time
mapping from virtual to physical addresses is done by a hardware device
Memory Management Unit(MMU). The physical address always remains
constant.
Static and Dynamic Loading
Loading a process into the main memory is done by a loader. There are two different
types of loading :
• Static Loading: Static Loading is basically loading the entire program into a fixed
address. It requires more memory space.
• Dynamic Loading: With static loading, the entire program and all data of a
process must be in physical memory for the process to execute, so the size of a
process is limited to the size of physical memory. To gain better memory
utilization, dynamic loading is used: a routine is not loaded until it is called. All
routines reside on disk in a relocatable load format. One advantage of dynamic
loading is that a routine that is never used is never loaded, which is especially
useful when large amounts of code are needed only to handle infrequent cases.
Static and Dynamic Linking
To perform a linking task a linker is used. A linker is a program that takes one or more
object files generated by a compiler and combines them into a single executable file.
• Static Linking: In static linking, the linker combines all necessary program
modules into a single executable program. So there is no runtime dependency.
Some operating systems support only static linking, in which system language
libraries are treated like any other object module.
• Dynamic Linking: The basic concept of dynamic linking is similar to dynamic
loading. In dynamic linking, “Stub” is included for each appropriate library
routine reference. A stub is a small piece of code. When the stub is executed, it
checks whether the needed routine is already in memory or not. If not available
then the program loads the routine into memory.
Swapping
A process must reside in main memory while it is executing. Swapping is the technique of
temporarily moving a process from main memory to secondary storage and later bringing it
back; main memory is fast compared to secondary storage. Swapping allows more processes
to be run than can fit into memory at one time. The main component of swapping cost is
transfer time, and the total transfer time is directly proportional to the amount of memory
swapped. Swapping is also known as roll-out, roll-in: if a higher priority process arrives
and wants service, the memory manager can swap out a lower priority process and then load
and execute the higher priority process. After the higher priority work finishes, the
lower priority process is swapped back into memory and continues its execution.
swapping in memory management
Memory Management with Monoprogramming (Without Swapping)
This is the simplest memory management approach: the memory is divided into two
sections:
• One part for the operating system
• The other part for the user program
[Figure: A fence register separating the operating system area from the user program area]
• In this approach, the operating system keeps track of the first and last location
available for the allocation of the user program
• The operating system is loaded either at the bottom or at the top of memory
• Interrupt vectors are often located in low memory; therefore, it makes sense to
load the operating system in low memory
• Sharing of data and code does not make much sense in a single process
environment
• The Operating system can be protected from user programs with the help of a
fence register.
Advantages of this approach
• It is a simple management approach
Disadvantages of this approach
• It does not support multiprogramming
• Memory is wasted
Multiprogramming with Fixed Partitions (Without Swapping)
• A memory partition scheme with a fixed number of partitions was introduced to
support multiprogramming. This scheme is based on contiguous allocation
• Each partition is a block of contiguous memory
• Memory is partitioned into a fixed number of partitions
• Each partition is of fixed size
Example: As shown in the figure, memory is partitioned into 5 regions: one region is
reserved for the operating system and the remaining four partitions are for user programs.
[Figure: Fixed-size partitioning: memory divided into an Operating System region and four user partitions p1, p2, p3, p4]
Partition Table
Once partitions are defined, the operating system keeps track of the status of the memory
partitions through a data structure called a partition table.
Sample Partition Table
Starting Address of Partition    Size of Partition    Status
0 K                              200 K                allocated
200 K                            100 K                free
300 K                            150 K                free
450 K                            250 K                allocated
Logical vs Physical Address
An address generated by the CPU is commonly referred to as a logical address, while the
address seen by the memory unit is known as the physical address. A logical address can be
mapped to a physical address by hardware with the help of a base register; this is known as
dynamic relocation of memory references. A small worked example follows below.
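For instance (the register value and the address are assumed purely for illustration): if the
base (relocation) register holds 14000 and the CPU generates logical address 346, the MMU
computes
    physical address = base register value + logical address = 14000 + 346 = 14346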
Contiguous Memory Allocation
The main memory should accommodate both the operating system and the different
client processes. Therefore, the allocation of memory becomes an important task in
the operating system. The memory is usually divided into two partitions: one for the
resident operating system and one for the user processes. We normally need several
user processes to reside in memory simultaneously. Therefore, we need to consider
how to allocate available memory to the processes that are in the input queue waiting to
be brought into memory. In contiguous memory allocation, each process is contained in a
single contiguous section of memory.

Contiguous Memory Allocation


Memory Allocation
To gain proper memory utilization, memory must be allocated in an efficient manner. One of
the simplest methods for allocating memory is to divide memory into several fixed-sized
partitions, where each partition contains exactly one process. Thus, the degree of
multiprogramming is bounded by the number of partitions.
• Multiple partition allocation: In this method, a process is selected from the
input queue and loaded into a free partition. When the process terminates, the
partition becomes available for other processes.
• Variable partition allocation: In this method, the operating system maintains a
table that indicates which parts of memory are available and which are occupied
by processes. Initially, all memory is available for user processes and is
considered one large block of available memory, known as a “Hole”. When a
process arrives and needs memory, we search for a hole that is large enough to
store this process. If such a hole is found, we allocate only as much memory as is
needed, keeping the rest available to satisfy future requests. Allocating memory
this way is an instance of the dynamic storage allocation problem, which concerns
how to satisfy a request of size n from a list of free holes. There are several
solutions to this problem:
First Fit
In First Fit, the first available free hole that is large enough to fulfil the requirement of
the process is allocated.

First Fit
Here, in this diagram, a 40 KB memory block is the first available free hole that can store
process A (size of 25 KB), because the first two blocks did not have sufficient memory
space.
Best Fit
In Best Fit, the smallest hole that is big enough for the process's requirements is
allocated. To find it, we must search the entire list, unless the list is kept ordered by size.

Best Fit
Here in this example, we first traverse the complete list and find that the last hole, of 25 KB,
is the most suitable hole for Process A (size 25 KB). In this method, memory utilization is
maximized compared to the other memory allocation techniques.
Worst Fit
In Worst Fit, the largest available hole is allocated to the process. This method produces
the largest leftover hole.
Worst Fit
Here in this example, Process A (size 25 KB) is allocated to the largest available memory
block, which is 60 KB. Inefficient memory utilization is a major issue with worst fit. A
small simulation of these placement policies is sketched below.
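As an illustration of the placement policies above, the following Bash sketch simulates a first-fit and a best-fit search over a list of free holes; the hole sizes and the request size are assumed, not taken from the diagrams.

    #!/bin/bash
    # Illustrative first-fit and best-fit search over free hole sizes (in KB).
    holes=(10 15 40 60 25)   # assumed free hole sizes
    request=25               # assumed request size (e.g., Process A)

    # First fit: take the first hole that is large enough
    for i in "${!holes[@]}"; do
      if (( holes[i] >= request )); then
        echo "First fit -> hole #$i (${holes[$i]} KB)"
        break
      fi
    done

    # Best fit: take the smallest hole that is still large enough
    best=-1
    for i in "${!holes[@]}"; do
      if (( holes[i] >= request )); then
        if (( best < 0 )) || (( holes[i] < holes[best] )); then
          best=$i
        fi
      fi
    done
    echo "Best fit  -> hole #$best (${holes[$best]} KB)"

With these assumed values, first fit picks the 40 KB hole and best fit picks the 25 KB hole, mirroring the two diagrams above; worst fit would instead pick the 60 KB hole.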
Fragmentation
Fragmentation arises because, as processes are loaded into and removed from memory after
execution, they leave behind small free holes. These holes often cannot be assigned to new
processes, either because they are not combined or because they do not fulfill the memory
requirement of the process. To achieve a good degree of multiprogramming, we must reduce
this waste of memory. Operating systems distinguish two types of
fragmentation:
1. Internal fragmentation: Internal fragmentation occurs when a memory block
allocated to a process is larger than the requested size. The unused space left
over inside the block creates the internal fragmentation problem.
Example: Suppose fixed partitioning is used for memory allocation and there are
blocks of 3 MB, 6 MB, and 7 MB in memory. Now a new process p4 of size 2 MB
arrives and demands a block of memory. It gets the 3 MB block, but 1 MB of that
block is wasted and cannot be allocated to any other process. This is internal
fragmentation.
2. External fragmentation: In external fragmentation, enough total free memory
exists, but we cannot assign it to a process because the free blocks are not
contiguous. Example: Continuing the above example, suppose three processes
p1, p2, and p3 arrive with sizes 2 MB, 4 MB, and 7 MB respectively, and are
allocated blocks of 3 MB, 6 MB, and 7 MB. After allocation, p1 and p2 leave 1 MB
and 2 MB unused respectively. Suppose a new process p4 now demands a 3 MB
block of memory; that much free memory is available in total, but we cannot
assign it because it is not contiguous. This is external fragmentation.
Both the first-fit and best-fit systems for memory allocation are affected by external
fragmentation. To overcome the external fragmentation problem, compaction is used. In
the compaction technique, all free memory space is combined into one large block, so
that this space can be used by other processes effectively.
Another possible solution to the external fragmentation is to allow the logical address
space of the processes to be noncontiguous, thus permitting a process to be allocated
physical memory wherever the latter is available.
Paging
Paging is a memory management scheme that eliminates the need for a contiguous
allocation of physical memory. This scheme permits the physical address space of a
process to be non-contiguous.
• Logical Address or Virtual Address (represented in bits): An address
generated by the CPU.
• Logical Address Space or Virtual Address Space (represented in words or
bytes): The set of all logical addresses generated by a program.
• Physical Address (represented in bits): An address actually available on a
memory unit.
• Physical Address Space (represented in words or bytes): The set of all
physical addresses corresponding to the logical addresses.
Example:
• If Logical Address = 31 bits, then Logical Address Space = 2^31 words = 2 G words
(1 G = 2^30)
• If Logical Address Space = 128 M words = 2^7 * 2^20 words = 2^27 words, then Logical
Address = log2(2^27) = 27 bits
• If Physical Address = 22 bits, then Physical Address Space = 2^22 words = 4 M
words (1 M = 2^20)
• If Physical Address Space = 16 M words = 2^4 * 2^20 words = 2^24 words, then Physical
Address = log2(2^24) = 24 bits
The mapping from virtual to physical address is done by the memory management unit
(MMU) which is a hardware device and this mapping is known as the paging technique.
• The Physical Address Space is conceptually divided into several fixed-size
blocks, called frames.
• The Logical Address Space is also split into fixed-size blocks, called pages.
• Page Size = Frame Size
Let us consider an example:
• Physical Address = 12 bits, then Physical Address Space = 4 K words
• Logical Address = 13 bits, then Logical Address Space = 8 K words
• Page size = frame size = 1 K words (assumption)

Paging
The address generated by the CPU is divided into:
• Page Number (p): the number of bits required to represent a page of the Logical
Address Space, i.e., the page number.
• Page Offset (d): the number of bits required to represent a particular word within a
page (the page size of the Logical Address Space), i.e., the word number within a
page, or page offset.
Physical Address is divided into:
• Frame Number (f): the number of bits required to represent a frame of the Physical
Address Space, i.e., the frame number.
• Frame Offset (d): the number of bits required to represent a particular word within a
frame (the frame size of the Physical Address Space), i.e., the word number within a
frame, or frame offset.
A worked split for the example above is shown below.
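Continuing the example above (13-bit logical address, 12-bit physical address, page size =
frame size = 1 K words = 2^10 words), the addresses split as follows:
    Logical address (13 bits)  = page number p (13 - 10 = 3 bits) + offset d (10 bits),
                                 so there are 2^3 = 8 pages
    Physical address (12 bits) = frame number f (12 - 10 = 2 bits) + offset d (10 bits),
                                 so there are 2^2 = 4 frames
    Physical address = f * 2^10 + d, where f is the frame number obtained from the page
    table entry for page p and the offset d is copied through unchanged.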
The hardware implementation of the page table can be done by using dedicated registers,
but using registers for the page table is satisfactory only if the page table is small. If
the page table contains a large number of entries, we can use a TLB (Translation Look-aside
Buffer), a special, small, fast look-up hardware cache.
• The TLB is an associative, high-speed memory.
• Each entry in TLB consists of two parts: a tag and a value.
• When this memory is used, then an item is compared with all tags
simultaneously. If the item is found, then the corresponding value is returned.

Page Map Table


Let the main memory access time be m. If the page table is kept in main memory, then
Effective access time = m (to access the page table)
+ m (to access the required word in memory)
= 2m
A worked example with a TLB follows below.
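With a TLB, the extra page-table access is needed only on a TLB miss. As a worked example
(the timings and hit ratio below are assumed, not taken from the text): let m = 100 ns, TLB
lookup time = 20 ns, and TLB hit ratio h = 80%.
    Effective access time = h * (TLB + m) + (1 - h) * (TLB + 2m)
                          = 0.80 * (20 + 100) + 0.20 * (20 + 200)
                          = 96 + 44 = 140 ns
Without a TLB, every access would cost 2m = 200 ns.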
What is Contiguous Memory Management?
Contiguous memory allocation is a memory allocation strategy. As the name implies,
we utilize this technique to assign contiguous blocks of memory to each task. Thus,
whenever a process asks to access the main memory, we allocate a continuous
segment from the empty region to the process based on its size. In this technique,
memory is allotted in a continuous way to the processes. Contiguous Memory
Management has two types:
• Fixed(or Static) Partition
• Variable(or Dynamic) Partitioning

Contiguous Memory Management Techniques


Below are the two contiguous memory management techniques. Let's understand them in
detail.
1. Fixed Partition Scheme
In the fixed partition scheme, memory is divided into a fixed number of partitions; that is,
the number of partitions in memory never changes. In the fixed partition scheme, every
partition accommodates only one process. The degree of multiprogramming is therefore
restricted by the number of partitions in memory, and the maximum size of a process is
restricted by the maximum size of a partition. Every partition is associated with limit
registers.
• Limit Registers: Each partition has two limits:
• Lower Limit: the starting address of the partition.
• Upper Limit: the ending address of the partition.
Internal Fragmentation is found in fixed partition scheme. To overcome the problem of
internal fragmentation, instead of fixed partition scheme, variable partition scheme is
used.

Disadvantages of the Fixed Partition Scheme


• Maximum process size <= Maximum partition size.
• The degree of multiprogramming is directly proportional to the number of
partitions.
• Internal fragmentation which is discussed above is present.
• If a process of, say, 19 KB needs to be allocated and the available free space is not
contiguous, we are not able to allocate the space to it.
2. Variable Partition Scheme
In the variable partition scheme, memory initially consists of a single continuous free
block. Whenever a process request arrives, a partition of the corresponding size is carved
out of memory. If smaller processes keep arriving, the larger free regions are split into
smaller partitions.
• In the variable partition scheme, the memory initially is one fully contiguous free
block.
• Memory is divided into partitions according to the process size, so partition sizes
vary.
• One partition is allocated to each active process.
External Fragmentation is found in variable partition scheme. To overcome the problem
of external fragmentation, compaction technique is used or non-contiguous memory
management techniques are used.

Solutions to External Fragmentation


1. Compaction
Moving all the processes toward the top or toward the bottom of memory, so that the free
memory is gathered in a single continuous place, is called compaction. Compaction is
undesirable to implement because it interrupts all the running processes in memory.
Disadvantage of Compaction
• Page fault can occur.
• It consumes CPU time (overhead).
2. Non-contiguous memory allocation
1. Physical address space: Main memory (physical memory) is divided into blocks
of the same size called frames. The frame size is fixed by the system.
2. Logical address space: Logical memory is divided into blocks of the same size
called pages. The page size is defined by the hardware (and equals the frame size);
during execution, a process's pages are stored in main memory in frames that need
not be contiguous.
Advantages of Variable Partition Scheme
• Partition size = process size
• There is no internal fragmentation (which is the drawback of the fixed partition
scheme).
• The degree of multiprogramming varies and is directly proportional to the number of
processes.
Disadvantages of the Variable Partition Scheme
• External fragmentation is still there.
Advantages of Contiguous Memory Management
• It’s simple to monitor how many memory blocks are still available for use, which
determines how many more processes can be allocated RAM.
• Considering that the complete file can be read from the disc in a single session,
contiguous memory allocation offers good read performance.
• Contiguous allocation is simple to set up and functions well.
Disadvantages of Contiguous Memory Management
• Fragmentation is a problem, since new files can only be written into the holes left
after older ones are deleted.
• To select an appropriately sized hole when creating a new file, its final size needs
to be known in advance.
• The extra space left in the holes needs to be reduced or reused once the disk is
full.

Non-contiguous allocation, also known as dynamic or linked allocation, is a memory
allocation technique used in operating systems to allocate memory to processes that
do not require a contiguous block of memory. Instead of allocating a single contiguous
block of memory to a process, non-contiguous allocation allocates a series of non-
contiguous memory blocks to the process, which can be located anywhere in the
physical memory.
Fundamental approaches to implementing non-contiguous memory allocation include
paging and segmentation:
1. Paging:
• In paging, each process consists of fixed-size components called pages.
The size of a page is defined by the hardware of a computer.
• The memory is partitioned into memory areas that have the same size as
a page, and each of these memory areas is considered separately for
allocation to a page.
• Any free memory area is exactly the same size as a page, so external
fragmentation does not arise. However, internal fragmentation can occur
if the last page of a process is smaller than a full page.
2. Segmentation:
• In segmentation, a programmer identifies components called segments
in a process, such as code, data structures, or objects.
• Segmentation facilitates sharing of code, data, and program modules
among processes.
• Segments have different sizes, so the kernel must use memory reuse
techniques such as first-fit or best-fit allocation. This can lead to external
fragmentation.
Advantages of non-contiguous allocation include:
• Reduction of internal fragmentation since memory blocks can be allocated as
needed.
• Flexibility and efficiency in allocating memory to processes, as the operating
system can allocate memory wherever free memory is available.
Disadvantages of non-contiguous allocation include:
• Potential for external fragmentation, making it difficult to allocate large blocks of
memory to a process.
• Introduction of overhead due to the use of pointers to link memory blocks,
leading to slower memory allocation and deallocation times.
In non-contiguous allocation, the operating system maintains a table called the Page
Table for each process, which contains the base address of each block acquired by the
process in memory space. Different parts of a process can be allocated to different
places in main memory, allowing for spanning, which is not possible in other techniques
like dynamic or static contiguous memory allocation.
There are five types of non-contiguous allocation of memory in operating systems:
1. Paging
2. Multilevel Paging
3. Inverted Paging
4. Segmentation
5. Segmented Paging

Module V File and Device Management


Module V: File and Device Management - A Comprehensive Exploration
This module delves into the fundamental concepts of file and device management
within operating systems (OSes). It explores various file structures, access methods,
allocation techniques, device interaction mechanisms, and file protection strategies,
equipping you with a solid understanding of how OSes manage data storage and
access.
1. Types of Files:
• Definition: Types of files categorize data stored on secondary storage
devices based on their purpose, format, or characteristics.
• Related Subtopics:
• Regular Files: Also known as ordinary files, regular files store user
data or program instructions. They are the most common type of
file found in computer systems and are manipulated by users and
applications.
• Directory Files: Directory files contain information about the
organization and structure of files and directories within a file
system. They store metadata such as filenames, attributes, and
pointers to data blocks.
• Special Files: Special files represent system resources or external
devices, such as device files representing hardware devices (e.g.,
disk drives, printers) or communication channels (e.g., pipes,
sockets).
2. File Access Methods:
• Definition: File access methods define how data within a file is accessed
or retrieved by users or programs.
• Related Subtopics:
• Sequential Access: In sequential access, data is accessed in a
linear or sequential manner from the beginning to the end of the
file. Users or programs read data sequentially, starting from the
first record and progressing to subsequent records.
• Random Access: Random access allows data to be accessed
directly at any position within the file, without the need to read
data sequentially. Users or programs can read or write data at
specific byte offsets within the file, enabling efficient access to
individual records or data blocks.
3. File Allocation Methods:
• Definition: File allocation methods determine how files are stored and
organized on disk blocks or clusters.
• Related Subtopics:
• Contiguous Allocation: Contiguous allocation allocates
contiguous blocks of disk space to store files. Each file occupies a
single, contiguous sequence of disk blocks. It provides fast access
and sequential read/write operations but suffers from
fragmentation.
• Linked Allocation: Linked allocation dynamically allocates disk
space using pointers to link together consecutive blocks of data.
Each block contains a pointer to the next block in the file. It
eliminates fragmentation but may suffer from overhead and slower
access times due to traversing pointers.
• Indexed Allocation: Indexed allocation uses index blocks to store
pointers to data blocks. An index block contains pointers to
multiple data blocks, allowing direct access to any block in the file.
It provides efficient access and reduces fragmentation by
maintaining a separate index structure for each file.
4. I/O Devices:
• Definition: Input/output (I/O) devices are hardware components used to
interact with the computer system for input and output operations.
• Related Subtopics:
• Peripheral Devices: Peripheral devices are external hardware
components connected to the computer system, such as
keyboards, mice, printers, monitors, scanners, and external
storage devices.
• Storage Devices: Storage devices are hardware components used
for long-term storage of data, such as hard disk drives, solid-state
drives, optical drives, and flash drives.
5. Device Controllers:
• Definition: Device controllers are hardware components responsible for
managing communication between the computer system and I/O
devices.
• Related Subtopics:
• Interface: The interface defines the communication protocol and
data transfer methods between the device controller and the CPU
or system bus.
• Buffering: Buffering involves temporarily storing data in memory
buffers during I/O operations to accommodate variations in data
transfer rates between devices and the CPU.
6. Device Drivers:
• Definition: Device drivers are software components that enable the
operating system to communicate with hardware devices.
• Related Subtopics:
• Interrupt Handling: Device drivers handle interrupts generated by
I/O devices to signal completion of operations, errors, or the need
for attention from the CPU. Interrupt handling routines are part of
device drivers and manage interactions between devices and the
operating system.
7. Directory Structure:
• Definition: Directory structure refers to the organization and hierarchy of
directories and files within a file system.
• Related Subtopics:
• Single Level Directory: Single-level directory structures contain
all files in a single directory without any subdirectories. It is the
simplest form of directory structure but does not scale well for
organizing large numbers of files.
• Tree Structured Directory: Tree-structured directory structures
organize directories and files in a hierarchical tree-like structure,
with each directory potentially containing subdirectories and files.
It provides a more organized and scalable approach to file
organization.
• Acyclic Graph Directory: Acyclic graph directory structures allow
directories to have multiple parent directories but avoid cycles in
the directory hierarchy. This allows for more flexible organization of
files and directories.
• General Graph Directory: General graph directory structures
allow directories to have multiple parent directories, including
cycles. While less common, this type of directory structure allows
for more complex relationships between directories and files.
8. File Protection:
• Definition: File protection mechanisms control access to files and
ensure data security and integrity.
• Related Subtopics:
• Access Control Lists (ACL): Access control lists are lists
associated with files or directories that specify the permissions
granted to users or groups. They define which users or groups have
read, write, execute, or other permissions on the file or directory.
• File Permissions: File permissions are settings that specify the
actions users or groups can perform on files, such as read, write,
execute, or delete permissions. File permissions are typically
represented using permission bits or symbolic notation and are
enforced by the operating system's security mechanisms.
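To make the permission notation above concrete, here is a short shell sketch; the file and directory names (report.txt, backup.sh, ~/private) are hypothetical.

    # Inspect current permissions: owner / group / others
    ls -l report.txt
    # Numeric (octal) form: owner read+write, group read, others nothing
    chmod 640 report.txt
    # Symbolic form: add execute permission for the owner only
    chmod u+x backup.sh
    # Restrict a directory so that only its owner can enter and list it
    chmod 700 ~/private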
Module VI Shell introduction and Shell Scripting:

1. Shell and Various Types of Shell:


• Definition: A shell is a command-line interpreter that acts as an interface
between the user and the operating system. It interprets user commands
and executes them by interacting with the kernel. Various types of shells
are available in Linux, each with its features and functionalities.
• Various Types of Shells:
• Bash (Bourne Again Shell): Bash is the default shell on most Linux
distributions. It is a powerful and versatile shell that supports
scripting and interactive use. Bash is compatible with the Bourne
Shell (sh) and includes features such as command-line editing,
history, and job control.
• Zsh (Z Shell): Zsh is an extended and interactive shell that offers
advanced features such as powerful scripting capabilities,
extensive customization options, and advanced tab completion. It
provides a more user-friendly and flexible environment compared
to other shells.
• Fish (Friendly Interactive Shell): Fish is a user-friendly shell
designed for interactive use. It features auto-suggestions, syntax
highlighting, and easy-to-use scripting syntax. Fish aims to provide
a more intuitive and pleasant user experience for shell users.
• Ksh (Korn Shell): Ksh is an enhanced version of the Bourne Shell
(sh) with additional features such as command-line editing,
history, and job control. It is compatible with the POSIX standard
and provides powerful scripting capabilities.
2. Various Editors Present in Linux:
• Definition: Editors are software tools used for creating, editing, and
modifying text files in a Linux environment. Linux offers various text
editors, each with its features and capabilities to suit different user
preferences and requirements.
• Various Editors:
• Vi/Vim: Vi (Visual Editor) is a widely used text editor on Unix-like
systems. It operates in different modes, including normal mode for
navigation and command execution, insert mode for text input,
and visual mode for text selection. Vim (Vi Improved) is an
enhanced version of Vi with additional features such as syntax
highlighting, plugin support, and extensive customization options.
• Nano: Nano is a simple and user-friendly text editor designed for
basic text editing tasks. It provides a straightforward interface with
easy-to-use keyboard shortcuts for common operations such as
editing, saving, and exiting files.
• Emacs: Emacs is a powerful and extensible text editor with a wide
range of features, including syntax highlighting, code completion,
and support for various programming languages. It offers extensive
customization options and supports the integration of external
tools and packages for enhanced functionality.
3. Different Modes of Operation in vi Editor:
• Definition: The vi (Visual Editor) text editor operates in different modes,
each serving a specific purpose and allowing users to perform different
editing tasks.
• Modes of Operation:
• Normal Mode: In normal mode, vi is in the default mode where
users can navigate through the file, delete, copy, paste, and
execute commands. It is the mode where users can access most
of vi's functionalities.
• Insert Mode: In insert mode, users can insert and edit text directly
into the file. To switch to insert mode from normal mode, users can
press the i key. In insert mode, users can type text as they would in
any other text editor.
• Visual Mode: Visual mode allows users to select blocks of text for
editing or copying. To switch to visual mode from normal mode,
users can press the v key. In visual mode, users can move the
cursor to select text, which can then be copied, deleted, or
modified.
• Command-Line Mode: Command-line mode enables users to
execute commands or perform advanced operations, such as
saving changes, searching, and replacing text. To switch to
command-line mode from normal mode, users can press the : key.
In command-line mode, users can enter commands prefixed with
a colon (:) to perform various actions.
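A typical minimal vi session ties these modes together; the filename notes.txt is hypothetical.

    vi notes.txt      # opens the file, starting in normal mode
    # press i         -> switch to insert mode and type the desired text
    # press Esc       -> return to normal mode
    # type :wq        -> command-line mode: write (save) the file and quit
    # type :q!        -> command-line mode: quit without saving changes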
4. Shell Script:
• Definition: A shell script is a script written in a shell programming
language, such as Bash, that contains a sequence of commands and
instructions to be executed by the shell interpreter. Shell scripts
automate tasks, perform system administration tasks, and execute
complex operations.
• Writing and Executing Shell Script:
• To write a shell script, users can create a text file with the desired
commands and instructions using a text editor.
• After creating the script, users must make it executable using the
chmod command, which changes the file permissions to allow
execution: chmod +x script.sh.
• Finally, users can execute the shell script by typing its filename
preceded by ./ to specify the current directory: ./script.sh.
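A minimal example script follows; the filename hello.sh and its contents are purely illustrative.

    #!/bin/bash
    # hello.sh - prints a greeting and today's date
    echo "Hello, $USER"
    echo "Today is $(date)"

It is made executable and run exactly as described above:

    chmod +x hello.sh
    ./hello.sh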
5. Shell Variable (User-defined and System Variables):
• Definition: Shell variables are placeholders used to store data or values
for later use within a shell script or interactive shell session. They can be
classified into user-defined variables, which are created by the user, and
system variables, which are predefined by the shell or the operating
system.
• User-defined Variables: User-defined variables are created by users to
store custom data or values. They can be assigned values using the
assignment operator (=) followed by the desired value.
• System Variables: System variables are predefined variables that store
information about the system environment or configuration. They are
typically set by the shell or the operating system and are accessible to all
processes running in the system.
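For example (the variable name and its value are assumed):

    #!/bin/bash
    # User-defined variable: no spaces are allowed around the = sign
    course="Operating Systems"
    echo "Welcome to $course"

    # System (environment) variables predefined by the shell or the OS
    echo "User:  $USER"
    echo "Home:  $HOME"
    echo "Shell: $SHELL"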
6. System Calls:
• Definition: System calls are functions provided by the operating system
kernel that enable applications to interact with the operating system's
services and resources, such as file operations, process management,
and inter-process communication.
• Using System Calls:
• Applications can use system calls to request services or perform
operations that require privileged access or interaction with the
underlying hardware.
• System calls are typically invoked through wrapper functions
provided by the C standard library, such as libc in Unix-like
systems. These wrapper functions abstract the low-level details of
system call invocation and provide a more user-friendly interface
for application developers.
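One way to observe system calls from the shell is the strace utility, which traces the calls a running command makes; strace may need to be installed separately, and the command below is only an illustration.

    # Print a summary count of the system calls made by 'cat'
    strace -c cat /etc/hostname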
7. Pipes and Filters:
• Definition: Pipes and filters are mechanisms for connecting the output of
one command or process to the input of another command or process,
enabling the creation of complex data processing pipelines.
• Usage:
• Pipes use the | operator to redirect the output of one command as
input to another command. For example: command1 |
command2.
• Filters are commands that accept input from standard input
(stdin), process it, and produce output to standard output (stdout).
Examples of filters include grep, sort, awk, and sed.
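A few illustrative pipelines combining such filters (names.txt is a hypothetical file):

    # Count how many lines in /etc/passwd mention "bash"
    grep bash /etc/passwd | wc -l

    # Sort a file and remove duplicate adjacent lines, saving the result
    sort names.txt | uniq > sorted_names.txt

    # Show the five most recently modified entries in the current directory
    ls -lt | head -n 6    # the first line of ls -l output is a "total" header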
Module-V
File
A file is a named collection of related information that is recorded on secondary storage such
as magnetic disks, magnetic tapes and optical disks. In general, a file is a sequence of bits,
bytes, lines or records whose meaning is defined by the files creator and user.
The name of the file is divided into two parts as shown below:
• name
• extension, separated by a period.

File Type
File type refers to the ability of the operating system to distinguish different types of files,
such as text files, source files, and binary files.
Ordinary files
• These are the files that contain user information.
• These may have text, databases or executable program.
• The user can apply various operations on such files like add, modify, delete or even
remove the entire file.
Directory files
• These files contain list of file names and other information related to these files.
Special files
• These files are also known as device files.
• These files represent physical device like disks, terminals, printers, networks, tape drive
etc.
These files are of two types −
• Character special files − data is handled character by character as in case of terminals
or printers.
• Block special files − data is handled in blocks as in the case of disks and tapes.
File Access Mechanisms
File access mechanism refers to the manner in which the records of a file may be accessed.
There are several ways to access files −
• Sequential access
• Direct/Random access
• Indexed sequential access
Sequential access
Sequential access is access in which the records are processed in some sequence, i.e., the
information in the file is processed in order, one record after the other. This access method is
the most primitive one. Example: compilers usually access files in this fashion.
Direct/Random access
• Random access file organization provides direct access to records.
• Each record has its own address on the file, with the help of which it can be directly
accessed for reading or writing.
• The records need not be in any sequence within the file and they need not be in adjacent
locations on the storage medium.
Indexed sequential access
• This mechanism is built on top of sequential access.
• An index is created for each file which contains pointers to various blocks.
• Index is searched sequentially and its pointer is used to access the file directly.

Space Allocation

Files are allocated disk space by the operating system. Operating systems deploy the following
three main ways to allocate disk space to files.
• Contiguous Allocation
• Linked Allocation
• Indexed Allocation
Contiguous Allocation
• Each file occupies a contiguous address space on disk.
• Assigned disk address is in linear order.
• Easy to implement.
• External fragmentation is a major issue with this type of allocation technique.

Linked Allocation
• Each file carries a list of links to disk blocks.
• Directory contains link / pointer to first block of a file.
• No external fragmentation
• Effectively used in sequential access file.
• Inefficient in case of direct access file.
Indexed Allocation
• Provides solutions to problems of contiguous and linked allocation.
• An index block is created containing pointers to the file's data blocks.
• Each file has its own index block which stores the addresses of disk space occupied
by the file.
• Directory contains the addresses of index blocks of files.

Module-V

I/O Device

One of the important jobs of an Operating System is to manage various I/O devices including
mouse, keyboards, touch pad, disk drives, display adapters, USB devices, Bit-mapped screen,
LED, Analog-to-digital converter, On/off switch, network connections, audio I/O, printers etc.
An I/O system is required to take an application I/O request and send it to the physical device,
then take whatever response comes back from the device and send it to the application. I/O
devices can be divided into two categories −
• Block devices − A block device is one with which the driver communicates by sending
entire blocks of data. For example, Hard disks, USB cameras, Disk-On-Key etc.
• Character devices − A character device is one with which the driver communicates by
sending and receiving single characters (bytes, octets). For example, serial ports,
parallel ports, sound cards, etc.

Device Controllers
Device drivers are software modules that can be plugged into an OS to handle a particular
device. Operating System takes help from device drivers to handle all I/O devices.
The Device Controller works like an interface between a device and a device driver. I/O units
(Keyboard, mouse, printer, etc.) typically consist of a mechanical component and an electronic
component where electronic component is called the device controller.
There is always a device controller and a device driver for each device to communicate with
the Operating Systems. A device controller may be able to handle multiple devices. As an
interface its main task is to convert serial bit stream to block of bytes, perform error correction
as necessary.
Any device connected to the computer is connected by a plug and socket, and the socket is
connected to a device controller. Following is a model for connecting the CPU, memory,
controllers, and I/O devices where CPU and device controllers all use a common bus for
communication.

Synchronous vs asynchronous I/O


• Synchronous I/O − In this scheme CPU execution waits while I/O proceeds
• Asynchronous I/O − I/O proceeds concurrently with CPU execution
Communication to I/O Devices
The CPU must have a way to pass information to and from an I/O device. There are three
approaches available for communication between the CPU and a device.
• Special Instruction I/O
• Memory-mapped I/O
• Direct memory access (DMA)
Special Instruction I/O
This uses CPU instructions that are specifically made for controlling I/O devices. These
instructions typically allow data to be sent to an I/O device or read from an I/O device.
Memory-mapped I/O
When using memory-mapped I/O, the same address space is shared by memory and I/O
devices. The device is connected directly to certain main memory locations so that the I/O
device can transfer blocks of data to/from memory without going through the CPU.

While using memory-mapped I/O, the OS allocates a buffer in memory and informs the I/O
device to use that buffer to send data to the CPU. The I/O device operates asynchronously
with the CPU and interrupts the CPU when finished.
The advantage to this method is that every instruction which can access memory can be used
to manipulate an I/O device. Memory mapped IO is used for most high-speed I/O devices like
disks, communication interfaces.
Direct Memory Access (DMA)
Slow devices like keyboards will generate an interrupt to the main CPU after each byte is
transferred. If a fast device such as a disk generated an interrupt for each byte, the operating
system would spend most of its time handling these interrupts. So a typical computer uses
direct memory access (DMA) hardware to reduce this overhead.
Direct Memory Access (DMA) means the CPU grants an I/O module the authority to read from
or write to memory without CPU involvement. The DMA module itself controls the exchange of
data between main memory and the I/O device. The CPU is involved only at the beginning and
end of the transfer and is interrupted only after the entire block has been transferred.
Direct Memory Access needs a special hardware called DMA controller (DMAC) that manages
the data transfers and arbitrates access to the system bus. The controllers are programmed with
source and destination pointers (where to read/write the data), counters to track the number of
transferred bytes, and settings, which includes I/O and memory types, interrupts and states for
the CPU cycles.
Directory Structure

A directory is a container that is used to contain folders and files. It organizes files and
folders in a hierarchical manner.

There are several logical structures of a directory, these are given below.

• Single-level directory –
The single-level directory is the simplest directory structure. In it, all files are
contained in the same directory which makes it easy to support and understand.

A single-level directory has a significant limitation, however, when the number of
files increases or when the system has more than one user. Since all the files are in the
same directory, they must have unique names. If two users both name their data file test,
the unique-name rule is violated.
Advantages:

• Since it is a single directory, its implementation is very easy.


• If the files are smaller in size, searching will become faster.
• The operations like file creation, searching, deletion, updating are very easy in such a
directory structure.
• Logical Organization: Directory structures help to logically organize files and
directories in a hierarchical structure. This provides an easy way to navigate and
manage files, making it easier for users to access the data they need.
• Increased Efficiency: Directory structures can increase the efficiency of the file
system by reducing the time required to search for files. This is because directory
structures are optimized for fast file access, allowing users to quickly locate the file
they need.
• Improved Security: Directory structures can provide better security for files by
allowing access to be restricted at the directory level. This helps to prevent
unauthorized access to sensitive data and ensures that important files are protected.
• Facilitates Backup and Recovery: Directory structures make it easier to backup and
recover files in the event of a system failure or data loss. By storing related files in the
same directory, it is easier to locate and backup all the files that need to be protected.
• Scalability: Directory structures are scalable, making it easy to add new directories
and files as needed. This helps to accommodate growth in the system and makes it
easier to manage large amounts of data.

Disadvantages:

• There is a chance of name collision, because no two files may have the same name.
• Searching becomes time-consuming if the directory is large.
• Files of the same type cannot be grouped together.

Two-level directory –
As we have seen, a single-level directory often leads to confusion of file names among
different users. The solution to this problem is to create a separate directory for each user.

In the two-level directory structure, each user has their own user file directory
(UFD). The UFDs have similar structures, but each lists only the files of a single user. The
system's master file directory (MFD) is searched whenever a user logs in or a new user ID
is created.
3) Tree Structure / Hierarchical Structure:
The tree directory structure is the one most commonly used in personal computers. Users
can create files and subdirectories too, something that was not possible in the previous
directory structures.
This directory structure resembles a real tree turned upside down, where the root directory
is at the peak. The root contains the directories for each user, and users can create
subdirectories and even store files in their own directory.

Acyclic-Graph Structured Directories


The tree-structured directory system does not allow the same file to exist in multiple
directories, so sharing is a major concern in a tree-structured directory system. We can
provide sharing by making the directory an acyclic graph. In this system, two or more
directory entries can point to the same file or subdirectory; that file or subdirectory is then
shared between the two directory entries.
