
UNIT-3

1. a) What is Deadlock and what are its four necessary conditions? Specify the difference between Deadlock and
Starvation.

A) Deadlock: In a multiprogramming environment, several processes may compete for a finite number of
resources. A process requests resources; if the resources are not available at that time, the process enters a
waiting state. Sometimes, a waiting process is never again able to change state, because the resources it has
requested are held by other waiting processes. This situation is called a deadlock.

Necessary Conditions: A deadlock situation can arise if the following four conditions hold simultaneously in a
system:

1. Mutual exclusion: At least one resource must be held in a non-sharable mode; that is, only one process at a time
can use the resource. If another process requests that resource, the requesting process must be delayed until the
resource has been released.

2. Hold and wait: A process must be holding at least one resource and waiting to acquire additional resources that
are currently being held by other processes.

3. No preemption: Resources cannot be preempted; that is, a resource can be released only voluntarily by the
process holding it, after that process has completed its task.

4. Circular wait: A set {P0, P1, ..., Pn} of waiting processes must exist such that P0 is waiting for a resource held by
P1, P1 is waiting for a resource held by P2, ..., Pn-1 is waiting for a resource held by Pn, and Pn is waiting for a
resource held by P0.

Difference between Deadlock and Starvation: In a deadlock, a set of processes blocks forever because each holds
resources that the others need, so none of them can proceed without outside intervention. In starvation, a process
waits indefinitely because other processes are continually favored in scheduling or resource allocation, even though
the resource repeatedly becomes free. A deadlocked process can never run again unless the system breaks the
deadlock, whereas a starving process could run if it were eventually granted the resource; deadlock always involves
a circular wait among several processes, while starvation can affect a single process alone.

2. a) Discuss various techniques to recover from the deadlock.

A) Recovery from Deadlock: When a detection algorithm determines that a deadlock exists, several alternatives are
available to recover from it. One possibility is to inform the operator that a deadlock has occurred and to let
the operator deal with the deadlock manually. Another possibility is to let the system recover from the deadlock
automatically. There are two options for breaking a deadlock: one is simply to abort one or more processes to break
the circular wait; the other is to preempt some resources from one or more of the deadlocked processes.
1. Process Termination:

• Abort all deadlocked processes: This method clearly will break the deadlock cycle, but at great expense;
the deadlocked processes may have computed for a long time, and the results of these partial computations must be
discarded and probably will have to be recomputed later.

• Abort one process at a time until the deadlock cycle is eliminated: This method incurs considerable
overhead, since, after each process is aborted, a deadlock detection algorithm must be invoked to determine
whether any processes are still deadlocked.

2. Resource Preemption: To eliminate deadlocks using resource preemption, successively preempt some
resources from processes and give these resources to other processes until the deadlock cycle is broken.

3. a) In what way are resource allocation graphs used for detection of deadlocks? Write the algorithm.

A) Deadlock detection: If a system does not employ either a deadlock prevention or a deadlock avoidance
algorithm, then a deadlock situation may occur. In this environment, the system must provide:

• An algorithm that examines the state of the system to determine whether a deadlock has occurred.

• An algorithm to recover from the deadlock.

The following are different types of deadlock detection algorithms:

1. Single Instance of Each Resource Type (Wait-For Graph): A wait-for graph is a variant of the resource-
allocation graph, obtained by removing the resource nodes and collapsing the corresponding edges. An edge
from Pi to Pj in a wait-for graph implies that process Pi is waiting for process Pj to release a resource that
Pi needs. An edge Pi → Pj exists in a wait-for graph if and only if the corresponding resource-allocation
graph contains the two edges Pi → Rj and Rj → Pj for some resource Rj.

A deadlock exists in the system if and only if the wait-for graph contains a cycle. To detect deadlocks, the
system needs to maintain the graph and periodically invoke an algorithm that searches for a cycle in the
graph. An algorithm to detect a cycle in a graph requires on the order of n² operations, where n is the number
of vertices in the graph.
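To make this concrete, here is a minimal sketch of cycle detection in a wait-for graph using depth-first search (the graph representation and process names are assumptions for this example, not part of the original answer):

```python
# Hypothetical sketch: detect a deadlock by finding a cycle in a wait-for graph.
# The graph maps each process to the processes it is waiting for.

def has_cycle(wait_for):
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on current DFS path / done
    color = {p: WHITE for p in wait_for}

    def dfs(p):
        color[p] = GRAY
        for q in wait_for.get(p, []):
            if color.get(q, WHITE) == GRAY:   # back edge -> cycle -> deadlock
                return True
            if color.get(q, WHITE) == WHITE and dfs(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and dfs(p) for p in list(color))

# P1 waits for P2, P2 waits for P3, P3 waits for P1: a deadlock cycle.
print(has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}))  # True
```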
2. Several Instances of Each Resource Type: The wait-for graph is not applicable to a resource-allocation
system with multiple instances of each resource type. Instead, a deadlock detection algorithm is used that
employs several time-varying data structures similar to those used in the banker's algorithm:
• Available : A vector of length m indicates the number of available resources of each type.
• Allocation : An n * m matrix defines the number of resources of each type currently allocated to each
process.
• Request : An n * m matrix indicates the current request of each process. If Request[i][j] equals k, then
process Pi is requesting k more instances of resource type Rj.
Algorithm:
1. Let Work and Finish be vectors of length m and n, respectively.
Initialize Work = Available. For i = 0, 1, ..., n-1, if Allocation_i ≠ 0, then
Finish[i] = false; otherwise, Finish[i] = true.
2. Find an index i such that both
Finish[i] == false
Request_i ≤ Work
If no such i exists, go to step 4.
3. Work = Work + Allocation_i
Finish[i] = true
Go to step 2.
4. If Finish[i] == false for some i, 0 ≤ i < n, then the system is in a deadlocked state. Moreover, if Finish[i]
== false, then process Pi is deadlocked.
This algorithm requires on the order of m × n² operations to detect whether the system is in a deadlocked
state. Consider a system with five processes P0 through P4 and three resource types A, B, and C. Resource
type A has 7 instances, B has 2 instances, and C has 6 instances. Suppose that, at time T0, we have the
following resource-allocation state:

        Allocation    Request    Available
        A  B  C       A  B  C    A  B  C
P0      0  1  0       0  0  0    0  0  0
P1      2  0  0       2  0  2
P2      3  0  3       0  0  0
P3      2  1  1       1  0  0
P4      0  0  2       0  0  2

For this state, the algorithm finds a sequence <P0, P2, P3, P4, P1> that results in Finish[i] = true for all i,
so the system is not deadlocked. Suppose now that process P2 makes one additional request for an instance of
type C, so Request_2 becomes (0, 0, 1). The system is now deadlocked: although the resources held by P0 can be
reclaimed, the remaining requests can never be satisfied, so processes P1, P2, P3, and P4 are deadlocked.
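A minimal sketch of this detection algorithm in Python (an illustrative example, not part of the original answer; the matrices reproduce the state above):

```python
# Hypothetical sketch of the multi-instance deadlock detection algorithm.
def detect_deadlock(available, allocation, request):
    n = len(allocation)                       # number of processes
    work = list(available)                    # Work = Available
    # Finish[i] = true if Pi holds no resources (it cannot block anyone).
    finish = [all(a == 0 for a in row) for row in allocation]

    changed = True
    while changed:
        changed = False
        for i in range(n):
            if not finish[i] and all(r <= w for r, w in zip(request[i], work)):
                # Pi's request can be granted; assume it finishes and
                # releases everything it holds.
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                changed = True

    return [f"P{i}" for i in range(n) if not finish[i]]  # deadlocked processes

allocation = [[0,1,0],[2,0,0],[3,0,3],[2,1,1],[0,0,2]]
request    = [[0,0,0],[2,0,2],[0,0,0],[1,0,0],[0,0,2]]
print(detect_deadlock([0,0,0], allocation, request))   # [] -> no deadlock

request[2] = [0,0,1]   # P2 asks for one more instance of C
print(detect_deadlock([0,0,0], allocation, request))   # ['P1','P2','P3','P4']
```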

UNIT-4
4. a) Differentiate between Logical and Physical address space. Explain the process of converting virtual
addresses to physical addresses with a neat diagram.

A) A logical address is generated by the CPU while a program runs; a physical address is the address actually seen
by the memory unit. The set of all logical addresses generated by a program is its logical (or virtual) address
space; the set of corresponding physical addresses is its physical address space.

 The user program deals only with logical addresses; it never sees the real physical addresses.
 Logical and physical addresses are identical under compile-time and load-time address binding; they differ
under execution-time binding.
 The run-time mapping from logical to physical addresses is performed by the Memory Management Unit (MMU).

Converting Virtual Addresses to Physical Addresses

The process of converting virtual addresses to physical addresses is known as address translation. It is typically
handled by the Memory Management Unit (MMU) using page tables. Here is an outline of the process:

1. Logical Address (Virtual Address): The CPU generates a logical address (also called a virtual address) when
a program is executed.

2. Page Table: The logical address is divided into a page number and an offset. The page number is used to
index into a page table, which contains the base address of each page in physical memory.

3. Translation: The base address from the page table is combined with the offset from the logical address to
form the physical address.

4. Physical Address: The physical address is used to access the actual location in main memory (RAM).

Steps in Detail:

1. Logical Address:

o Suppose a logical address is represented as the pair (P, D), where P is the page number and D is the
offset within the page.

2. Page Table Lookup:


o The page number P is used to look up the corresponding frame number in the page table.

o Page Table Entry (PTE) provides the frame number F.

3. Form Physical Address:

o The physical address PA is then constructed as PA = (F × page size) + D, where F is the frame number from
the PTE and D is the offset from the logical address.
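To make the translation concrete, here is a minimal sketch in Python (the page size and page-table contents are assumptions for this example, not part of the original answer):

```python
# Hypothetical paged address translation, assuming a 4 KB page size.
PAGE_SIZE = 4096

# Page table: key = virtual page number, value = physical frame number.
page_table = {0: 5, 1: 2, 2: 9}

def translate(logical_addr):
    page = logical_addr // PAGE_SIZE      # P: page number
    offset = logical_addr % PAGE_SIZE     # D: offset within the page
    frame = page_table[page]              # F: frame number from the PTE
    return frame * PAGE_SIZE + offset     # PA = (F * page size) + D

print(hex(translate(0x1234)))  # page 1 -> frame 2, so 0x2234
```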

b) What is the need for page replacement in paging? Describe any two page replacement algorithms with
examples.

A) The Need for Page Replacement in Paging

Paging is a memory management scheme that eliminates the need for contiguous allocation of physical memory
and thus avoids fragmentation issues. In paging, the logical memory is divided into fixed-size blocks called pages,
and the physical memory is divided into blocks of the same size called frames.

Page replacement is needed when a program tries to access a page that is not currently in physical memory; this
event is called a page fault. If memory is already full when the fault occurs, one of the pages currently in memory
must be replaced to make space for the new page. The primary reasons for page replacement are:

1. Limited Physical Memory: Physical memory is limited and cannot hold all pages of all running processes
simultaneously.

2. Efficient Memory Utilization: To ensure that the most frequently used pages are kept in memory,
enhancing the performance of the system.

3. Multiprogramming: Multiple programs running simultaneously require their pages to be loaded into
memory, necessitating the replacement of less frequently used pages.

Page Replacement Algorithms

Two common page replacement algorithms are FIFO (First-In-First-Out) and LRU (Least Recently Used).

1. FIFO (First-In-First-Out) Page Replacement Algorithm


FIFO is the simplest page replacement algorithm. In FIFO, the oldest page in memory is the one that gets
replaced. The algorithm uses a queue to keep track of the order of pages loaded into memory.

Example:

 Assume the reference string (sequence of pages accessed) is: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3

 Assume there are 3 frames available in physical memory.

Total Page Faults: 9

2. LRU (Least Recently Used) Page Replacement Algorithm

LRU replaces the page that has not been used for the longest period of time. It uses the past knowledge of page
accesses to predict which page will not be used in the near future. This algorithm can be implemented using a stack
or counters for each page.

Example:

Assume the reference string (sequence of pages accessed) is: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3

Assume there are 3 frames available in physical memory.


Total Page Faults: 8
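A minimal simulation of both algorithms in Python (an illustrative sketch, not part of the original answer), which reproduces the fault counts above (9 for FIFO, 8 for LRU):

```python
from collections import OrderedDict

REFS = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3]
FRAMES = 3

def fifo_faults(refs, frames):
    memory, queue, faults = set(), [], 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:          # evict the oldest page
                memory.remove(queue.pop(0))
            memory.add(page)
            queue.append(page)
    return faults

def lru_faults(refs, frames):
    memory, faults = OrderedDict(), 0          # insertion order = recency
    for page in refs:
        if page in memory:
            memory.move_to_end(page)           # mark as most recently used
        else:
            faults += 1
            if len(memory) == frames:
                memory.popitem(last=False)     # evict least recently used
            memory[page] = True
    return faults

print(fifo_faults(REFS, FRAMES))  # 9
print(lru_faults(REFS, FRAMES))   # 8
```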

5a) Discuss the process of Swapping in memory management with neat schematic view.

A)Swapping: A process must be in memory to be executed. A process, however, can be swapped temporarily out of
memory to a backing store and then brought back into memory for continued execution. For example, assume a
multiprogramming environment with a Round-Robin CPU scheduling algorithm. When a quantum expires, the
memory manager will start to swap out the process that just finished and to swap another process into the
memory space that has been freed. In the meantime, the CPU scheduler will allocate a time slice to some other
process in memory. A variant of this swapping policy is used for priority-based scheduling algorithms also. If a
higher-priority process arrives and wants service, the memory manager can swap out the lower-priority process
and then load and execute the higher-priority process. When the higher-priority process finishes, the lower-priority
process can be swapped back in and continued. This variant of swapping is sometimes called roll out, roll in.

Normally, a process that is swapped out will be swapped back into the same memory space it occupied previously.
This depends on the method of address binding. If binding is done at assembly or load time, then the process
cannot be easily moved to a different location. If execution-time binding is being used, however, then a process can
be swapped into a different memory space. Swapping requires a backing store. The backing store is commonly a
fast disk. Whenever the CPU scheduler decides to execute a process, it calls the dispatcher. The dispatcher checks
to see whether the next process in the queue is in memory. If it is not, and if there is no free memory region, the
dispatcher swaps out a process currently in memory and swaps in the desired process. It then reloads registers and
transfers control to the selected process. The context-switch time in such a swapping system is fairly high. If we
want to swap a process, we must be sure that it is completely idle.

b) Explore the most common techniques for structuring the page table

A) Page tables are essential components of the virtual memory system, mapping virtual addresses to physical
addresses. Given the potentially large size of the virtual address space, several techniques have been developed to
structure page tables efficiently. Here are the most common techniques:

1. Single-Level Page Table

In a single-level page table, there is a straightforward mapping between virtual addresses and physical addresses.
This structure can be inefficient if the virtual address space is large, as it requires a large contiguous memory area
to store the page table entries (PTEs).

Pros:

 Simple and easy to implement.

 Fast address translation if the entire page table fits into memory.

Cons:

 Inefficient memory usage, especially with sparse address spaces.

 Not scalable for large address spaces (e.g., 64-bit systems).

2. Multi-Level Page Table

Multi-level page tables (e.g., two-level, three-level) break down the page table into multiple smaller tables,
reducing memory overhead for sparse address spaces.

Two-Level Page Table Example:

1. Virtual Address Breakdown: A virtual address is divided into multiple parts. For a two-level page table, it
might be divided into:

o First-level index

o Second-level index

o Offset

2. First-Level Table: The first-level index is used to index into the first-level page table, which points to a
second-level page table.

3. Second-Level Table: The second-level index is used to index into the second-level page table, which points
to the actual physical frame.
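To make the two-level lookup concrete, here is a minimal sketch in Python (the index widths and table contents are assumptions for this example, not part of the original answer):

```python
# Hypothetical two-level page table: 10-bit first-level index,
# 10-bit second-level index, 12-bit offset (32-bit virtual address).
OFFSET_BITS, L2_BITS = 12, 10

# Outer table maps a first-level index to a second-level table;
# each second-level table maps an index to a physical frame number.
outer = {1: {3: 42}}   # only the used portions are allocated

def translate(vaddr):
    offset = vaddr & ((1 << OFFSET_BITS) - 1)
    l2 = (vaddr >> OFFSET_BITS) & ((1 << L2_BITS) - 1)
    l1 = vaddr >> (OFFSET_BITS + L2_BITS)
    inner = outer[l1]              # first memory access
    frame = inner[l2]              # second memory access
    return (frame << OFFSET_BITS) | offset

vaddr = (1 << 22) | (3 << 12) | 0x0AB     # l1=1, l2=3, offset=0xAB
print(hex(translate(vaddr)))              # frame 42 -> 0x2a0ab
```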

Pros:

 Reduces memory usage by only allocating space for the portions of the page table that are in use.

 More scalable for larger address spaces.

Cons:
 Slightly more complex and may increase the time for address translation due to multiple memory
accesses.

3. Inverted Page Table

An inverted page table has one entry for each physical page frame, rather than each virtual page, reducing the
memory required to store the page table. It includes information about which virtual page maps to a particular
physical page.

Pros:

 Significant reduction in the size of the page table.

 Efficient for systems with large physical memory.

Cons:

 More complex address translation since it involves searching the page table.

 Often requires associative (hashing) techniques to quickly locate the page table entry.

4. Hashed Page Table

In hashed page tables, a hash table is used to handle address translations. The virtual page number is hashed to get
the index into the hash table, which contains pointers to the page table entries.

Pros:

 Efficient handling of sparse address spaces.

 Suitable for large address spaces.

Cons:

 Potential for collisions in the hash table.

 Additional overhead for hashing and handling collisions.
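A minimal sketch of a hashed page table with chaining in Python (the table size and hash function are assumptions for this example, not part of the original answer):

```python
# Hypothetical hashed page table: each bucket holds a chain of
# (virtual_page, frame) entries that hashed to the same slot.
TABLE_SIZE = 8
buckets = [[] for _ in range(TABLE_SIZE)]

def insert(vpage, frame):
    buckets[vpage % TABLE_SIZE].append((vpage, frame))

def lookup(vpage):
    for entry_vpage, frame in buckets[vpage % TABLE_SIZE]:
        if entry_vpage == vpage:      # walk the chain to resolve collisions
            return frame
    raise KeyError(f"page {vpage} not mapped (page fault)")

insert(3, 17)
insert(11, 42)        # 11 % 8 == 3: collides with page 3, chained
print(lookup(11))     # 42
```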

5. Hierarchical Page Table

Hierarchical page tables are an extension of multi-level page tables. They can involve multiple levels (more than
two) to further break down the address space.

Example (Three-Level Page Table):

1. Virtual Address Breakdown: A virtual address might be divided into:

o First-level index

o Second-level index

o Third-level index

o Offset

2. Hierarchy: Each index points to a respective level page table until the final level points to the physical
frame.

Pros:
 Efficient memory usage for very large address spaces.

 Provides a more granular control over memory allocation.

Cons:

 Increased complexity and additional overhead for address translation.

 Multiple memory accesses may increase latency.

6. Segmented Page Table

Segmented paging combines segmentation and paging. The virtual address is divided into segments, and each
segment has its page table.

Example:

1. Virtual Address Breakdown: A virtual address might include:

o Segment number

o Page number

o Offset

2. Segmentation: The segment number indexes into a segment table that provides the base address of the
segment's page table.

3. Paging: The page number and offset are used to perform standard paging within the segment.

Pros:

 Combines the benefits of both segmentation (logical organization) and paging (efficient memory use).

 Can handle both large and sparse address spaces efficiently.

Cons:

 More complex memory management.

 Address translation requires multiple steps.

6a) Explain the working of Demand Paging technique. Discuss the hardware support required to support demand
paging

A) Demand Paging

Demand Paging is a memory management technique in which pages of data are only loaded into memory when
they are needed, or "demanded," by a program, rather than being loaded all at once. This approach allows for
more efficient use of memory and can significantly reduce the time it takes to start a program.

How Demand Paging Works

1. Page Request:

o When a program needs to access a page, it issues a request.

o The requested page is identified by its virtual address.


2. Page Table Check:

o The operating system (OS) checks the page table to see if the page is already in physical memory
(RAM).

o The page table entry (PTE) contains a bit called the "present bit" or "valid bit" that indicates
whether the page is in memory.

3. Page Fault:

o If the present bit is not set (the page is not in memory), a page fault occurs.

o The OS must then bring the page into memory from secondary storage (e.g., disk).

4. Page Loading:

o The OS locates the required page on disk.

o It selects a free frame in physical memory (or uses a page replacement algorithm to free up a
frame if necessary).

o The page is read from disk into the selected frame.

5. Page Table Update:

o The page table is updated with the frame number where the page is loaded.

o The present bit is set to indicate that the page is now in memory.

6. Instruction Restart:

o The instruction that caused the page fault is restarted, now accessing the required page in
memory.
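A minimal sketch of this flow in Python (the page-table layout and disk-slot bookkeeping are assumptions for this example, not part of the original answer):

```python
# Hypothetical demand-paging lookup with a present (valid) bit per page.
page_table = {3: {"present": False, "frame": None, "disk_slot": 7}}
free_frames = [12]

def access(page):
    entry = page_table[page]
    if not entry["present"]:                 # page fault
        frame = free_frames.pop()            # (or run page replacement)
        print(f"page fault: loading page {page} from disk slot "
              f"{entry['disk_slot']} into frame {frame}")
        entry["frame"], entry["present"] = frame, True
    return entry["frame"]                    # instruction can now proceed

access(3)   # first access: fault, page loaded from disk
access(3)   # second access: hit, no fault
```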

Hardware Support for Demand Paging

To support demand paging efficiently, certain hardware features are required:

1. Page Table:

o Maintains the mapping between virtual addresses and physical addresses.

o Contains information such as the frame number, present bit, dirty bit, and access control bits.

2. Memory Management Unit (MMU):

o Handles the translation of virtual addresses to physical addresses.

o Uses the page table to perform address translation.

o Detects page faults when a page is not present in memory.

3. Secondary Storage:

o Typically a hard disk or solid-state drive (SSD) used to store pages that are not currently in
memory.

o Must be fast enough to load pages into memory with minimal delay.
4. Page Fault Handler:

o A component of the OS that is invoked when a page fault occurs.

o Responsible for loading the required page from disk into memory.

o Updates the page table and restarts the faulting instruction.

5. Dirty Bit:

o Indicates if a page has been modified while in memory.

o Helps the OS decide whether a page needs to be written back to disk when it is replaced.

6. TLB (Translation Lookaside Buffer):

o A cache used to store recently used page table entries.

o Reduces the time needed for address translation by avoiding repeated lookups in the main page
table.

Example of Demand Paging

Let's illustrate demand paging with a simple example:

 Scenario: A program needs to access page 3, but page 3 is not in memory.

1. Page Request:

o The program requests to read data from page 3.

2. Page Table Check:

o The MMU checks the page table and finds that the present bit for page 3 is not set.

3. Page Fault:

o A page fault occurs because page 3 is not in memory.

4. Page Loading:

o The OS locates page 3 on the disk.

o It selects a free frame (or uses a page replacement algorithm to free up a frame).

o Page 3 is loaded into the selected frame in memory.

5. Page Table Update:

o The page table is updated to reflect the new location of page 3.

o The present bit for page 3 is set.

6. Instruction Restart:

o The instruction that caused the page fault is restarted and now successfully accesses page 3
in memory.

Advantages of Demand Paging


 Efficient Memory Utilization: Only the needed pages are loaded into memory, reducing memory
usage.

 Faster Program Startup: Programs can start running without having to load all their pages into
memory first.

 Scalability: Supports larger virtual address spaces than physical memory.

Disadvantages of Demand Paging

 Page Fault Overhead: Frequent page faults can degrade performance.

 Latency: Loading pages from disk can introduce delays.

 Complexity: Requires sophisticated hardware and OS support.

b) Under what circumstances do page faults occur? Describe the actions taken by the operating
system when a page fault occurs with neat picture.
A) Circumstances Leading to Page Faults
A page fault occurs when a program tries to access a page that is not currently loaded into physical
memory (RAM). The following are common circumstances under which page faults occur:
1. First Access: When a program accesses a page for the first time.
2. Page Replacement: When a page has been swapped out to secondary storage to make room for
another page.
3. Memory-Mapped File Access: When accessing a portion of a file that is mapped into the virtual
address space but not yet loaded.
4. Shared Pages: When a program accesses a page that is shared between processes but not yet
loaded by the current process.
5. Stack and Heap Growth: When the stack or heap grows beyond the pages that are currently
mapped into memory.
Actions Taken by the Operating System on a Page Fault
Access to a page marked invalid causes a page-fault trap. This trap is the result of the
operating system's failure to bring the desired page into memory. The procedure for handling
the page fault is as follows:
1. First check whether the reference was a valid or an invalid memory access.
2. If the reference was invalid, terminate the process. If it was valid but the page has not yet
been brought in, page it in.
3. Find a free frame.
4. Schedule a disk operation to read the desired page into the newly allocated frame.
5. When the disk read is complete, modify the internal table kept with the process and the
page table to indicate that the page is now in memory.
6. Restart the instruction that was interrupted by the trap. The process can now access the
page as though it had always been in memory.
A crucial requirement for demand paging is the ability to restart any instruction after a page
fault. A page fault may occur at any memory reference. If the page fault occurs on the instruction fetch,
restart by fetching the instruction again. If a page fault occurs while fetching an operand, fetch and
decode the instruction again and then fetch the operand.

7. a) What is the cause of thrashing? How does the system detect thrashing? Once it detects
thrashing, what can the system do to eliminate this problem?
A) Thrashing in operating systems occurs when the system spends more time swapping data between memory and
disk (paging) than executing actual tasks. This situation severely degrades system performance because the CPU is
spending too much time on managing page swaps rather than executing useful work.

Causes of Thrashing:

Thrashing is primarily caused by:

1. Overloading of the System: When the system is running more processes than it can handle with its
available physical memory (RAM), it constantly needs to swap out pages to disk to free up memory for
active processes.

2. Insufficient Memory: If the system does not have enough physical memory to support the workload, it
resorts to frequent paging.

Detection of Thrashing:

Modern operating systems detect thrashing using various methods, including:

 Monitoring Page Fault Rate: A significant increase in the rate of page faults indicates that the system is
accessing pages from disk frequently, which could suggest thrashing.
 Low CPU Utilization: Paradoxically, thrashing can lead to low CPU utilization because the CPU is spending
more time waiting for I/O operations to complete (due to paging).

 High Disk I/O: Increased disk I/O operations without a corresponding increase in useful work by the CPU
can indicate thrashing.

Dealing with Thrashing:

Once thrashing is detected, the system can take several measures to mitigate or eliminate the problem:

1. Reduce the Degree of Multiprogramming: Limit the number of processes or tasks running concurrently to
reduce memory contention. This can be done dynamically by suspending or terminating non-essential
processes.

2. Adjust Memory Allocation: Allocate more physical memory (if possible) to increase the available RAM for
active processes. This can involve adding more RAM to the system or adjusting memory allocation policies.

3. Use of Working Set Model: Implementing a working set model where each process is allocated the
minimum set of pages it needs to execute efficiently can help prevent excessive paging.

4. Optimize Page Replacement Algorithms: Ensure that the page replacement algorithm used by the
operating system (e.g., LRU - Least Recently Used) is efficient and suitable for the workload to minimize
unnecessary page swaps.

5. Prioritize Processes: Give priority to critical processes or those with high user interaction to ensure they
have enough memory allocated to prevent thrashing.

6. Monitoring and Proactive Measures: Continuously monitor system performance metrics related to
memory usage, page faults, and CPU utilization to detect thrashing early. Implement proactive measures
to adjust system parameters dynamically based on workload changes.

b) Consider the page reference string 1,2,3,4,5,3,4,1,6,7,8,7,8,9,7,8,9,5,4,5,4,2 With Five Frames. How many
page faults would occur for the FIFO, Optimal page replacement algorithms?

A) FIFO Page Replacement Algorithm:

Let's go through the string and count the page faults:

 Page 1: Fault (1) -> [1]

 Page 2: Fault (2) -> [1, 2]

 Page 3: Fault (3) -> [1, 2, 3]

 Page 4: Fault (4) -> [1, 2, 3, 4]

 Page 5: Fault (5) -> [1, 2, 3, 4, 5]

 Page 3: Hit -> [1, 2, 3, 4, 5]

 Page 4: Hit -> [1, 2, 3, 4, 5]

 Page 1: Hit -> [1, 2, 3, 4, 5]

 Page 6: Fault (6) -> [2, 3, 4, 5, 6]

 Page 7: Fault (7) -> [3, 4, 5, 6, 7]


 Page 8: Fault (8) -> [4, 5, 6, 7, 8]

 Page 7: Hit -> [4, 5, 6, 7, 8]

 Page 8: Hit -> [4, 5, 6, 7, 8]

 Page 9: Fault (9) -> [5, 6, 7, 8, 9]

 Page 7: Hit -> [5, 6, 7, 8, 9]

 Page 8: Hit -> [5, 6, 7, 8, 9]

 Page 9: Hit -> [5, 6, 7, 8, 9]

 Page 5: Hit -> [5, 6, 7, 8, 9]

 Page 4: Fault (4) -> [6, 7, 8, 9, 4]

 Page 5: Fault (5) -> [7, 8, 9, 4, 5]

 Page 4: Hit -> [7, 8, 9, 4, 5]

 Page 2: Fault (2) -> [8, 9, 4, 5, 2]

Total page faults for FIFO: 12

Optimal Page Replacement Algorithm:

 Page 1: Fault (1) -> [1]

 Page 2: Fault (2) -> [1, 2]

 Page 3: Fault (3) -> [1, 2, 3]

 Page 4: Fault (4) -> [1, 2, 3, 4]

 Page 5: Fault (5) -> [1, 2, 3, 4, 5]

 Page 3: Hit -> [1, 2, 3, 4, 5]

 Page 4: Hit -> [1, 2, 3, 4, 5]

 Page 1: Hit -> [1, 2, 3, 4, 5]

 Page 6: Fault (6) -> [2, 3, 4, 5, 6] (evict 1, which is never used again)

 Page 7: Fault (7) -> [2, 3, 4, 5, 7] (evict 6, which is never used again)

 Page 8: Fault (8) -> [2, 4, 5, 7, 8] (evict 3, which is never used again)

 Page 7: Hit -> [2, 4, 5, 7, 8]

 Page 8: Hit -> [2, 4, 5, 7, 8]

 Page 9: Fault (9) -> [4, 5, 7, 8, 9] (evict 2, whose next use is farthest in the future)

 Page 7: Hit -> [4, 5, 7, 8, 9]

 Page 8: Hit -> [4, 5, 7, 8, 9]

 Page 9: Hit -> [4, 5, 7, 8, 9]

 Page 5: Hit -> [4, 5, 7, 8, 9]

 Page 4: Hit -> [4, 5, 7, 8, 9]

 Page 5: Hit -> [4, 5, 7, 8, 9]

 Page 4: Hit -> [4, 5, 7, 8, 9]

 Page 2: Fault (2) -> [4, 5, 7, 8, 2] (evict 9; the reference string ends here, so any choice is optimal)

Total page faults for Optimal: 10
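A minimal Python sketch of both algorithms for cross-checking these counts (an illustrative example, not part of the original answer):

```python
REFS = [1, 2, 3, 4, 5, 3, 4, 1, 6, 7, 8, 7, 8, 9, 7, 8, 9, 5, 4, 5, 4, 2]
FRAMES = 5

def fifo_faults(refs, frames):
    memory, queue, faults = set(), [], 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.remove(queue.pop(0))    # evict the oldest page
            memory.add(page)
            queue.append(page)
    return faults

def optimal_faults(refs, frames):
    memory, faults = set(), 0
    for i, page in enumerate(refs):
        if page in memory:
            continue
        faults += 1
        if len(memory) == frames:
            # Evict the resident page whose next use is farthest away
            # (or that is never referenced again).
            def next_use(p):
                rest = refs[i + 1:]
                return rest.index(p) if p in rest else len(refs)
            memory.remove(max(memory, key=next_use))
        memory.add(page)
    return faults

print(fifo_faults(REFS, FRAMES))     # 12
print(optimal_faults(REFS, FRAMES))  # 10
```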

8. a) Explain different File Attributes, File Operations, File Types and File structures.

A) File attributes: File attributes are settings associated with computer files that grant or deny certain rights to how
a user or the operating system can access that file. A file is named, for the convenience of its human users, and is
referred to by its name. A name is usually a string of characters, such as example.c. Following are some of the
attributes of a file :

 Name: The symbolic file name is the only information kept in human readable form.
 Identifier: This unique tag, usually a number, identifies the file within the file system; it is the non-human-
readable name for the file.
 Type: This information is needed for systems that support different types of files.
 Location: This information is a pointer to a device and to the location of the file on that device.
 Size: The current size of the file (in bytes, words, or blocks) and possibly the maximum allowed size are
included in this attribute.
 Protection: Access-control information determines who can do reading, writing, executing, and so on.
 Time, date, and user identification: This information may be kept for creation, last modification, and last
use. These data can be useful for protection, security, and usage monitoring.

File Operations: The OS provides systems calls to create, write, read, reset, and delete files. The following are the
specific duties an OS must do for each of the five basic file operations.

 Creating a file: Two steps are necessary to create a file. First, space in the file system must be found for
the file. Second, an entry for the new file must be made in the directory.
 Writing a file: To write a file, make a system call specifying both the name of the file and the information
to be written to the file. Given the name of the file, the system searches the directory to find the file's
location. The system must keep a write pointer to the location in the file where the next write is to take
place. The write pointer must be updated whenever a write occurs.
 Reading a file: To read from a file, use a system call that specifies the name of the file and where (in
memory) the next block of the file should be put. Again, the directory is searched for the associated entry,
and the system needs to keep a read pointer to the location in the file where the next read is to take place.
Once the read has taken place, the read pointer is updated. Both the read and write operations use this
same pointer, saving space and reducing system complexity.
 Repositioning within a file: The directory is searched for the appropriate entry, and the current-file-
position pointer is repositioned to a given value.This file operation is also known as a file seek.
 Deleting a file: To delete a file, search the directory for the named file. Having found the associated
directory entry, release all file space, so that it can be reused by other files, and erase the directory entry.
 Truncating a file: The user may want to erase the contents of a file but keep its attributes. Rather than
forcing the user to delete the file and then recreate it, this function allows all attributes except the file
length to remain unchanged, while the file is reset to length zero and its file space released.

File types: A common technique for implementing file types is to include the type as part of the file name. The
name is split into two parts: a name and an extension, usually separated by a period. The system uses the
extension to indicate the type of operations that can be done on that file. Common examples include executable
files (.exe, .com, .bin), object files (.obj, .o), source code (.c, .java, .py), text files (.txt, .doc),
archives (.zip, .tar), and multimedia files (.mp3, .mpeg).

File Structures:

File structures describe how data is organized and stored within files. Different file structures include:

1. Sequential File Structure: Data is stored sequentially and accessed sequentially (e.g., text files).

2. Indexed File Structure: Data is organized with an index to allow direct access (e.g., indexed files in
databases).

3. Hashed File Structure: Data is accessed using a hash function, providing fast access (e.g., hash tables).

4. Clustered or Contiguous File Structure: Data is stored in contiguous blocks on disk to minimize seek time.

5. Distributed File Structure: Data is distributed across multiple systems or nodes in a network (e.g.,
distributed file systems).

Each type of file structure offers advantages and disadvantages depending on factors such as access patterns,
storage media characteristics, and performance requirements.

b) Discuss how swap space is used, where swap space is located on disk, and how swap space is managed.

A) Swap space is an area on a hard disk or SSD that the operating system uses as a backing store for virtual
memory, effectively a temporary extension of physical memory (RAM). It plays a critical role in managing memory
resources efficiently, especially when physical RAM becomes fully utilized.
Usage of Swap Space:

1. Memory Paging: When a system's physical RAM is insufficient to hold all the currently running processes
and their data, the operating system moves (pages) some of the less frequently accessed or inactive
memory pages from RAM to swap space on disk. This process is called paging or swapping.

2. Inactive Processes: If a process has been idle for some time or is not actively using its allocated memory,
its pages may be swapped out to free up physical RAM for more critical tasks.

3. Overcommitment: Some operating systems use swap space to allow for more memory allocation than
physically available, relying on the assumption that not all allocated memory will be actively used
simultaneously.

Location of Swap Space on Disk:

 Partition: Swap space is typically allocated in a dedicated partition on a hard disk or SSD. This partition is
reserved exclusively for swap operations and is formatted accordingly by the operating system during
installation or configuration.

 File: Alternatively, swap space can also be configured as a swap file within an existing file system. This file
behaves similarly to a partition but resides within a regular file system rather than a dedicated partition.

Management of Swap Space:

1. Swapping Algorithm: The operating system employs swapping algorithms (e.g., Least Recently Used - LRU)
to decide which pages to swap out to disk when physical memory is low. This decision balances the need
to free up RAM with the potential performance impact of swapping.

2. Monitoring and Tuning: System administrators can monitor swap space usage and performance metrics
using tools like top, vmstat, or system logs. If excessive swapping (thrashing) occurs, it indicates potential
memory shortage and may require adjusting swap space size or adding more physical RAM.

3. Swap Space Size: The size of swap space is configured during system setup or can be adjusted later based
on workload and memory requirements. It should be sufficient to handle occasional spikes in memory
usage but ideally not heavily relied upon for regular operations.

4. Priority and Control: Some operating systems allow prioritizing which processes or memory pages are
more likely to be swapped out (swappiness setting in Linux), providing finer control over swap behavior
and system performance.

9. a) Describe the most common schemes for defining the logical structure of a directory.
A) The logical structure of a directory refers to how directories and files are organized and represented
within a file system. There are several common schemes or structures used to define this organization:
1. Single-Level Directory Structure:
 Description: In a single-level directory structure, all files are contained in a single directory
without subdirectories.
 Advantages:
o Simple and easy to implement.
o Minimal overhead in terms of memory and storage.
 Disadvantages:
o Lack of organization or hierarchy can lead to difficulty in managing large numbers of files.
o Naming conflicts can occur if files have identical names.
2. Two-Level Directory Structure:
 Description: Each user has their own directory, and there is a master directory that lists all user
directories.
 Advantages:
o Provides a basic level of organization by separating user files into individual directories.
o Reduces naming conflicts compared to a single-level structure.
 Disadvantages:
o Limited scalability for large numbers of users or files.
o Managing permissions and access control can become complex.
3. Hierarchical Directory Structure:
 Description: Organizes directories and files into a tree-like structure, similar to a file system tree.
 Advantages:
o Allows for hierarchical organization of files and directories, providing a clear structure.
o Supports efficient management of large numbers of files by grouping them into
categories or themes.
o Facilitates easier navigation and access to files.
 Disadvantages:
o Increased complexity in implementation and management compared to simpler
structures.
o Potential for deep directory hierarchies leading to longer access paths.
4. Acyclic-Graph Directory Structure:
 Description: Allows directories to be linked in a more complex structure where directories can
have multiple parent directories or be linked in non-linear ways.
 Advantages:
o Offers flexibility in organizing and linking directories and files.
o Supports advanced features like symbolic links or shortcuts.
 Disadvantages:
o Complexity in implementation and maintenance.
o Increased risk of circular references or loops, which can lead to navigation and access
issues.
5. General Graph Directory Structure:
 Description: Represents directories and files as nodes in a general graph structure, where
relationships between directories can be arbitrary.
 Advantages:
o Maximum flexibility in defining relationships between directories and files.
o Supports complex scenarios and relationships not easily accommodated by other
structures.
 Disadvantages:
o Extremely complex to implement and manage.
o Potential for performance degradation due to traversal and access operations.
b) Describe any two disk scheduling algorithms with suitable illustrations.
A) Disk scheduling algorithms are crucial in managing the movement of the disk arm and optimizing the
order of disk accesses to minimize seek time and improve overall disk performance. Two common disk
scheduling algorithms are FCFS (First-Come, First-Served) and SSTF (Shortest Seek Time First).
1. First-Come, First-Served (FCFS) Disk Scheduling Algorithm:
 Description: In FCFS, requests are serviced in the order they arrive in the disk queue. The disk
arm starts at one end of the disk and moves towards the other end, serving each request
sequentially.
 Illustration:
Let's consider a disk with tracks numbered from 0 to 199. Assume the following sequence of disk access
requests:
Request Queue: 98, 183, 37, 122, 14, 124, 65, 67
Starting from track 53 (initial position of disk arm), the FCFS algorithm will service the requests in the
order they are listed:
Starting Position: 53

FCFS Order: 53 -> 98 -> 183 -> 37 -> 122 -> 14 -> 124 -> 65 -> 67

Total Head Movement: the sum of absolute differences between consecutive tracks accessed, i.e.,
45 + 85 + 146 + 85 + 108 + 110 + 59 + 2 = 640 tracks.
Advantages:
 Simple and easy to implement.
 Fair in terms of servicing requests based on arrival time.
Disadvantages:
 May result in inefficient disk access patterns and high seek times if requests are scattered across
the disk.
 Does not consider minimizing seek time.
2. Shortest Seek Time First (SSTF) Disk Scheduling Algorithm:
 Description: SSTF selects the request that is closest to the current position of the disk arm. It
minimizes seek time by choosing the next request based on the shortest seek time from the
current position.
 Illustration:
Using the same disk and request queue as before, starting from track 53:
Request Queue: 98, 183, 37, 122, 14, 124, 65, 67
SSTF will select the request that requires the least movement from the current position:
Starting Position: 53

SSTF Order: 53 -> 65 -> 67 -> 37 -> 14 -> 98 -> 122 -> 124 -> 183

Total Head Movement: the sum of absolute differences between consecutive tracks accessed, i.e.,
12 + 2 + 30 + 23 + 84 + 24 + 2 + 59 = 236 tracks.
Advantages:
 Reduces average seek time compared to FCFS.
 Provides better performance in terms of response time and throughput.
Disadvantages:
 May lead to starvation of some requests if there are always requests closer to the current position.
 Could potentially lead to increased disk arm movement if requests are clustered far apart initially.
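A minimal Python sketch that computes the total head movement for both schedules (an illustrative example, not part of the original answer):

```python
REQUESTS = [98, 183, 37, 122, 14, 124, 65, 67]
START = 53

def head_movement(order, start):
    total, pos = 0, start
    for track in order:
        total += abs(track - pos)    # seek distance for this request
        pos = track
    return total

def sstf_order(requests, start):
    pending, order, pos = list(requests), [], start
    while pending:
        nearest = min(pending, key=lambda t: abs(t - pos))  # shortest seek
        pending.remove(nearest)
        order.append(nearest)
        pos = nearest
    return order

print(head_movement(REQUESTS, START))                     # FCFS: 640
print(head_movement(sstf_order(REQUESTS, START), START))  # SSTF: 236
```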

10. a) Describe the various levels of file protection mechanisms that can be implemented in a file
system. How do these mechanisms contribute to safeguarding files?
A)File protection mechanisms in a file system are essential for ensuring data security, integrity, and
privacy. These mechanisms operate at various levels, each providing a different layer of protection. Here
are the primary levels of file protection mechanisms:

1. User-Level Protection

a. User Authentication
Description: Verifying the identity of users before granting access to the file system. Common
methods include passwords, biometric scans, and multi-factor authentication.
Contribution: Ensures that only authorized users can access the system, reducing the risk of
unauthorized access.

b. User Permissions
Description: Assigning specific permissions (read, write, execute) to users or groups.
Contribution: Limits the actions that users can perform on files, preventing unauthorized
modifications or deletions.

2. File-Level Protection

a. Access Control Lists (ACLs)
Description: Detailed lists that specify individual user permissions for each file or directory.
Contribution: Provides fine-grained control over file access, allowing precise specification of
who can do what with each file.

b. File Encryption
Description: Encoding files so that they can only be read by users who have the decryption key.
Contribution: Protects the confidentiality of files, ensuring that even if files are accessed by
unauthorized users, they cannot be read without the key.

3. Directory-Level Protection

a. Directory Permissions
Description: Setting permissions for directories to control access to the files contained within them.
Contribution: Provides a higher-level control mechanism, which can simplify management by setting
permissions at the directory level rather than individually for each file.

4. System-Level Protection

a. File System Encryption
Description: Encrypting the entire file system so that all data within it is protected.
Contribution: Ensures comprehensive protection of all files, useful in scenarios where the physical
security of storage devices is a concern.

b. Backup and Recovery Systems
Description: Regularly copying files to a secure location for recovery in case of data loss or corruption.
Contribution: Protects against data loss due to accidental deletion, corruption, or system failures,
ensuring data availability and integrity.

c. File System Permissions
Description: Implementing permissions at the file system level to control how files are accessed and
modified.
Contribution: Enhances security by providing another layer of control over file operations across the
entire file system.

5. Network-Level Protection

a. Network Access Controls
Description: Implementing firewalls, VPNs, and other network security measures to control access to
file systems over a network.
Contribution: Protects files from unauthorized access through network connections, ensuring that
only legitimate users can access file system resources remotely.

b. Secure File Transfer Protocols
Description: Using protocols like SFTP (Secure File Transfer Protocol) and FTPS (FTP Secure) for secure
file transfers.
Contribution: Ensures the security and integrity of files during transfer, preventing interception and
unauthorized access during transmission.

6. Application-Level Protection

a. Application-Specific Security Measures
Description: Implementing security features within applications that access files, such as data validation
and application-level encryption.
Contribution: Protects files by ensuring that applications handle data securely and do not introduce
vulnerabilities that could be exploited.

These protection mechanisms contribute to safeguarding files in the following ways:

 Prevent Unauthorized Access: By ensuring that only authenticated and authorized users can access
files.
 Protect Data Confidentiality: Through encryption and secure access controls, sensitive data remains
confidential.
 Maintain Data Integrity: By controlling write and modify permissions, only authorized changes are
allowed, preserving data integrity.
 Ensure Availability: Backup systems and redundancy ensure that files are available even in case of
hardware failures or accidental deletions.
 Secure File Transfer: Protects data during transmission, preventing interception and unauthorized
access during transfers.
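To make user permissions concrete, here is a minimal sketch of a POSIX-style read-permission check in Python (the file metadata, user names, and mode value are assumptions for this example, not part of the original answer):

```python
import stat

# Hypothetical file metadata: owner, group, and a POSIX mode (rwxr-x---).
FILE = {"owner": "alice", "group": "staff", "mode": 0o750}

def may_read(user, groups, f):
    """Check the owner, group, then 'other' read bits, in that order."""
    if user == f["owner"]:
        return bool(f["mode"] & stat.S_IRUSR)
    if f["group"] in groups:
        return bool(f["mode"] & stat.S_IRGRP)
    return bool(f["mode"] & stat.S_IROTH)

print(may_read("alice", [], FILE))        # True  (owner read bit set)
print(may_read("bob", ["staff"], FILE))   # True  (group read bit set)
print(may_read("eve", ["guests"], FILE))  # False (no read bit for others)
```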

b) Why do we need free space management? Explain various methods to achieve free space
management.
A) Free space management is crucial in file systems to effectively utilize and allocate available storage
space on disk or other storage devices. Proper management of free space ensures that the file system can
accommodate new data, optimize storage efficiency, and maintain performance over time. Here's why free
space management is necessary and the various methods used to achieve it:
Importance of Free Space Management:
1. Optimal Disk Utilization: Efficiently managing free space prevents fragmentation and ensures
that space is available for new files and modifications.
2. Performance Optimization: Fragmented or insufficient free space can degrade disk performance
due to increased seek times and reduced data locality.
3. Preventing Disk Full Scenarios: Monitoring and managing free space prevents situations where
the disk runs out of space, which can lead to system crashes or data loss.
4. Data Integrity: Ensuring sufficient free space helps maintain data integrity by reducing the risk of
file corruption or incomplete data writes.
Methods for Free Space Management:
1. Bit Vector: The free-space list is implemented as a bit map or bit vector. Each block is
represented by 1 bit. If the block is free, the bit is 1; if the block is allocated, the bit is 0. For example,
consider a disk where blocks 2, 3, 4, 5, 8, 9, 10, 11, 12, 13, 17, 18, 25, 26, and 27 are free and the rest of the
blocks are allocated. The free-space bit map would be 001111001111110001100000011100000... The
main advantage of this approach is its relative simplicity and its efficiency in finding the first free block or
n consecutive free blocks on the disk.
2. Linked List: Another approach to free-space management is to link together all the free disk
blocks, keeping a pointer to the first free block in a special location on the disk and caching it in memory.
This first block contains a pointer to the next free disk block, and so on. In the above example, keep a
pointer to block 2 as the first free block. Block 2 would contain a pointer to block 3, which would point to
block 4, which would point to block 5, which would point to block 8, and so on. However, this scheme is
not efficient; to traverse the list, we must read each block, which requires substantial I/O time.
3. Grouping: A modification of the free-list approach is to store the addresses of n free blocks in the first
free block. The first n-1 of these blocks are actually free. The last block contains the addresses of another
n free blocks, and so on. The addresses of a large number of free blocks can now be found quickly, unlike
the situation when the standard linked-list approach is used.
4. Counting: Rather than keeping a list of n free disk addresses, we can keep the address of the first free
block and the number n of free contiguous blocks that follow the first block. Each entry in the free-space
list then consists of a disk address and a count.
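A minimal bit-vector sketch in Python (an illustrative example using the block numbers above, not part of the original answer):

```python
# Hypothetical bit-vector free-space manager: bit i is 1 if block i is free.
NUM_BLOCKS = 33
FREE = {2, 3, 4, 5, 8, 9, 10, 11, 12, 13, 17, 18, 25, 26, 27}
bitmap = [1 if i in FREE else 0 for i in range(NUM_BLOCKS)]

def first_free(bits):
    """Return the index of the first free block, as the bit map allows."""
    for i, bit in enumerate(bits):
        if bit == 1:
            return i
    return None

def allocate(bits):
    i = first_free(bits)
    if i is not None:
        bits[i] = 0           # mark the block as allocated
    return i

print("".join(map(str, bitmap)))   # 001111001111110001100000011100000
print(allocate(bitmap))            # 2: block 2 is allocated first
```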

~~S.Jaya Krishna
23BQ5A0523
