
OS MQP SOLVED BY CHAMMY


1a) Distinguish between the following terms. (i) Multiprogramming and Multitasking (ii) Multiprocessor System
and Clustered System

i) Multiprogramming keeps several jobs in memory at once and switches the CPU to another job whenever the current one must wait (e.g., for I/O), so the CPU is never idle. Multitasking (time-sharing) is a logical extension of multiprogramming in which the CPU switches among jobs so frequently that users can interact with each program while it is running.

ii) A multiprocessor system has two or more CPUs in close communication within a single computer, sharing the bus, clock, memory, and peripheral devices. A clustered system couples two or more complete, individual systems (nodes), each with its own resources, over a network, usually sharing storage, to provide high availability: if one node fails, another takes over.


1b) Define operating Systems. Explain the dual-mode operating system with a neat diagram
An operating system is system software that acts as an intermediary between a user of a
computer and the computer hardware. It is software that manages the computer hardware and
allows the user to execute programs in a convenient and efficient manner.
Some examples of operating systems are UNIX, Mach, MS-DOS, MS-Windows, Windows/NT, Chicago, OS/2,
MacOS, VMS, MVS, and VM

Dual-Mode Operation:

1. Purpose:
- To ensure that an error in a user program cannot cause problems to other programs and the operating
system.
- Achieved by utilizing hardware support to differentiate between two modes of execution: user mode and
kernel mode.
2. Mode Indication:
- A hardware bit, known as the mode bit, distinguishes between kernel mode (0) and user mode (1).
- Indicates whether the currently executing task is performed by the operating system or a user application.
3. Mode Transition:
- At system boot time, the hardware starts in kernel mode.
- The operating system is then loaded and starts user applications in user mode.
- Transition from user to kernel mode occurs when a user application requests a service from the operating
system via a system call.
- Whenever a trap or interrupt occurs, the hardware switches from user mode to kernel mode automatically.
4. Protection Mechanism:
- Hardware allows privileged instructions to execute only in kernel mode.
- Attempting to execute a privileged instruction in user mode triggers an illegal instruction trap, transferring
control to the operating system.
- Examples of privileged instructions include those involved in switching between user mode and kernel
mode.
5. Control Flow:
- Initial control resides within the operating system, executing instructions in kernel mode.
- When control is transferred to a user application, the mode is set to user mode.
- Eventually, control returns to the operating system through interrupts, traps, or system calls, switching back
to kernel mode.
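The mode-bit mechanism above can be sketched as a toy simulation (Python purely for illustration; real mode switching is enforced by hardware, and the class and method names below are invented for this sketch):

```python
KERNEL, USER = 0, 1  # mode-bit values: kernel mode (0), user mode (1)

class CPU:
    def __init__(self):
        self.mode = KERNEL  # at boot, the hardware starts in kernel mode

    def execute(self, instruction, privileged=False):
        if privileged and self.mode == USER:
            # hardware refuses the instruction and traps to the OS
            return self.trap(instruction)
        return f"executed {instruction}"

    def trap(self, instruction):
        self.mode = KERNEL  # a trap switches the hardware to kernel mode
        return f"illegal instruction trap: {instruction}; control passes to the OS"

cpu = CPU()
cpu.mode = USER                                   # OS dispatches a user program
print(cpu.execute("add r1, r2"))                  # ordinary instruction: allowed
print(cpu.execute("set_mode", privileged=True))   # privileged: traps to the OS
print(cpu.mode)                                   # 0: back in kernel mode
```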


1c) With a neat diagram, explain the concept of the virtual machine.
-The fundamental idea behind a virtual machine is to abstract the hardware of a single computer (the CPU,
memory, disk drives, network interface cards, and so forth) into several different execution environments,
thereby creating the illusion that each separate execution environment is running its own private computer.
- Creates an illusion that a process has its own processor with its own memory.
- Host OS is the main OS installed in system and the other OS installed in the system are called guest OS.

Implementation:-
-Creating a virtual machine is beneficial but quite hard.
-It involves replicating the real hardware, including its user and kernel modes.
-The virtual machine software operates in kernel mode, while the virtual machine runs in user mode.
Benefits:-
- Virtual machines enable sharing hardware among multiple operating systems.
- They provide isolation, ensuring each virtual machine is protected from others.
- Software resources can be shared between virtual machines, facilitating communication.
- They eliminate the need to halt the entire system for testing or development.
- Multiple operating systems can run concurrently on a single system, aiding rapid testing and porting.
- System consolidation allows multiple systems to run on a single machine, enhancing resource utilization.
- Simulation involves running guest programs on an emulator that translates instructions for the host system.

2a) Explain the layered approach of operating system structure with a supporting diagram.
1. Layered Structure:
- The OS is organized into multiple layers, each building upon the one below it.
- The hardware forms the bottom layer (layer 0), while the user interface represents the topmost layer.
2. Functionality Division:
- Each layer performs specific functions and relies on services provided by the layer immediately below it.
- For instance, layer 1 may handle device management, while layer 2 manages file systems.
3. Simplicity and Debugging:
- Modular design simplifies construction, debugging, and maintenance.
- Layers can be debugged independently, starting from the lowest layer, making error isolation easier.
4. Encapsulation:
- Higher layers do not need to know the implementation details of lower layers.


- This promotes modularity and abstraction, enhancing system flexibility.


5. Clear Hierarchy:
- The layered structure provides a clear hierarchy of functionality, aiding in system understanding and
management.
6. Disadvantages:
- Overhead: Interactions between layers may incur overhead, impacting system performance.
- Definition Complexity: Properly defining the boundaries and responsibilities of each layer is crucial.
- Potential Inefficiency: Traversing through multiple layers for certain operations may result in inefficiency
compared to other architectures.

2b) What are system calls? Briefly point out its types with illustrations.

System calls are interfaces provided by the operating system that allow user programs to request services
from the OS. These services range from basic input/output operations to process control and file management.

Types of System Calls:


1. Process Control:
- Involve creating, terminating, and managing processes.
- Examples include creating a process, terminating a process, and waiting for events.
2. File Management:
- Handle file-related operations such as creating, reading, writing, and deleting files.
- Examples include creating a file, reading from a file, and setting file attributes.
3. Device Management:
- Manage access to physical and virtual devices.
- Examples include requesting a device, releasing a device, and reading from a device.
4. Information Maintenance:
- Maintain system and process information, such as time, date, and system data.
- Examples include getting system attributes and setting process attributes.
5. Communication:
- Facilitate communication between processes, including message passing and shared memory.
- Examples include creating communication connections, sending messages, and attaching remote devices.
6. Protection:
- Control access to system resources and ensure security.
- Examples include adjusting access permissions and granting elevated access under controlled
circumstances.
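As a concrete sketch, several of these categories can be exercised from Python, whose os module wraps the underlying system calls (on Linux these correspond to open(2), write(2), read(2), close(2), unlink(2), and getpid(2); the filename demo.txt is arbitrary):

```python
import os

# file management: create, write, read, and delete a file
fd = os.open("demo.txt", os.O_CREAT | os.O_WRONLY, 0o644)
os.write(fd, b"hello via system calls\n")
os.close(fd)

fd = os.open("demo.txt", os.O_RDONLY)
data = os.read(fd, 100)       # read back what was written
os.close(fd)
os.unlink("demo.txt")         # delete the file

# information maintenance: ask the OS for this process's ID
pid = os.getpid()
print(data, pid > 0)
```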


2c) Explain the services of the operating system that are helpful for the user and the system
An operating system provides an environment for the execution of programs. It provides certain services to programs and to the users of those programs.

Services Provided by the Operating System:


For Users:
1. User Interfaces:
- Command Line Interface (CLI): Allows users to issue commands directly to the system.
- Graphical User Interface (GUI): Provides a visual interface for interacting with the system.
- Batch Command Systems: Executes commands and directives stored in a file.
2. Program Execution:
- Loads programs into RAM, executes them, and terminates them as necessary.
3. I/O Operations:
- Manages data transfer to and from I/O devices such as keyboards, printers, and files.
- Provides device drivers for specific hardware components.
4. File-System Manipulation:
- Allows programs to read, write, create, delete, and modify files and directories.
5. Communications:
- Facilitates inter-process communication (IPC) between processes running on the same or different
processors.
6. Error Detection:
- Detects and handles hardware and software errors, ensuring system stability and reliability.

For System Efficiency:


1. Resource Allocation:
- Manages allocation of CPU cycles, memory, storage space, and I/O devices to multiple users and jobs
concurrently.
2. Accounting:
- Tracks system activity and resource usage for billing or statistical purposes, aiding in performance
optimization.
3. Protection and Security:
- Controls access to system resources to prevent interference between processes and ensure data security.
- Implements measures such as password protection to secure the system from unauthorized access.


3a) With a neat diagram, explain the states of a process with a transition diagram and process control block.

Process State
A process may be in one of the following five states –
1. New - The process is in the stage of being created.
2. Ready - The process has all the resources it needs to run. It is waiting to be assigned to
the processor.
3. Running – Instructions are being executed.
4. Waiting - The process is waiting for some event to occur. For example, the process may
be waiting for keyboard input, disk access request, inter-process messages, a timer to go
off, or a child process to finish.
5. Terminated - The process has completed its execution

The Process Control Block (PCB) is a data structure used by the operating system to manage processes
efficiently. It contains essential information about each process, including:

Process State: Shows if the process is new, ready, running, waiting, or terminated.
Program Counter: Points to the next instruction to be executed.
CPU Registers: Stores important CPU information like accumulators and stack pointers.
CPU Scheduling Info: Includes process priority and scheduling queue pointers.
Memory-Management Info: Holds data about memory allocation, like base and limit registers.
Accounting Info: Tracks resource usage, CPU time, and process IDs.
I/O Status Info: Manages I/O operations, like allocated devices and open files.

The PCB helps the OS efficiently handle processes, allocate resources, and maintain proper process
coordination.
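The fields above can be sketched as a data structure (field names here are illustrative only; in a real kernel the PCB is a C struct, such as Linux's task_struct):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int                    # process identifier (accounting info)
    state: str = "new"          # new / ready / running / waiting / terminated
    program_counter: int = 0    # address of the next instruction
    registers: dict = field(default_factory=dict)   # saved CPU registers
    priority: int = 0           # CPU-scheduling info
    base: int = 0               # memory-management info (base register)
    limit: int = 0              # memory-management info (limit register)
    cpu_time_used: int = 0      # accounting info
    open_files: list = field(default_factory=list)  # I/O status info

p = PCB(pid=42)
p.state = "ready"    # process has all resources, waiting for the CPU
p.state = "running"  # dispatched: instructions are being executed
```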


3b) What is inter-process communication? Discuss message passing and the shared memory concept of IPC.

Interprocess Communication (IPC) :


Processes executing concurrently may be either cooperating or independent processes.
IPC allows cooperating processes to communicate and synchronize with each other, enhancing collaboration and
coordination in a computer system.
Shared Memory Systems:
1. Shared memory facilitates interprocess communication by creating a region of memory accessible to
multiple processes.
2. Processes exchange data by reading from and writing to the same memory location, enabling efficient
transfer of large blocks of data.
3. Once established, shared memory incurs minimal overhead, making it suitable for quick data exchange
between processes.
4. Processes coordinate access to shared memory to ensure proper communication without frequent system
calls.
5. Examples of shared memory usage include implementing buffers in producer-consumer scenarios.

Message Passing Systems:


1. Message passing systems rely on system calls like "send" and "receive" to facilitate interprocess
communication.
2. Processes establish communication links before exchanging messages to ensure proper routing and delivery.
3. This approach is suitable for transferring small amounts of data and supports synchronization and buffering
mechanisms.
4. Message passing allows processes to communicate asynchronously or synchronously, offering flexibility in
communication.
5. While message passing incurs overhead due to system calls, it provides a reliable means of communication
between cooperating processes.


3c) Calculate average waiting and turnaround times by drawing the Gantt chart using FCFS and RR (q=2ms).

FCFS Scheduling :-

RR SCHEDULING :-
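The process table and Gantt charts for this question are not reproduced here, so the following is a sketch with hypothetical processes (burst times 5, 3, and 2 ms, all arriving at t = 0) showing how both averages are computed:

```python
from collections import deque

def fcfs(bursts):
    # processes all arrive at t=0 and run to completion in submission order
    t, wait, tat = 0, [], []
    for b in bursts:
        wait.append(t)       # waiting time = time spent before first run
        t += b
        tat.append(t)        # turnaround = completion - arrival (arrival is 0)
    return sum(wait) / len(wait), sum(tat) / len(tat)

def round_robin(bursts, q=2):
    remaining = list(bursts)
    ready = deque(range(len(bursts)))
    t, completion = 0, [0] * len(bursts)
    while ready:
        i = ready.popleft()
        run = min(q, remaining[i])   # run for one quantum or until finished
        t += run
        remaining[i] -= run
        if remaining[i] > 0:
            ready.append(i)          # back to the tail of the ready queue
        else:
            completion[i] = t
    tat = completion                 # arrival time is 0 for every process
    wait = [tat[i] - bursts[i] for i in range(len(bursts))]
    return sum(wait) / len(wait), sum(tat) / len(tat)

bursts = [5, 3, 2]          # hypothetical P1, P2, P3 burst times (ms)
print(fcfs(bursts))         # avg waiting ≈ 4.33, avg turnaround ≈ 7.67
print(round_robin(bursts))  # avg waiting = 5.0, avg turnaround ≈ 8.33
```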

4a) Discuss in detail the multithreading model, its advantages and disadvantages with suitable illustration.
Many-to-One Model:
Many user-level threads are mapped to one kernel thread.
Advantages:
- Efficient: Management in user space reduces overhead.
- Lightweight: Minimal overhead for thread creation and management.


- Simple: Implementation is straightforward.

Disadvantages:
- Limited Concurrency: Blocking one thread blocks the entire process.
- Scalability Issues: Limited parallel execution on multiprocessors.
- Dependency on User Space: Limited access to kernel-level features.
Examples:
- Solaris green threads
- GNU portable threads

One-to-One Model:
Each user thread is mapped to a kernel thread
Advantages:
- True Parallelism: Each user thread maps to a separate kernel thread.
- Scalable: Supports parallel execution on multiprocessors.
- Responsive: Blocking one thread doesn't affect others.

Disadvantages:
- High Overhead: Creating each user thread requires a kernel thread.
- Resource Consumption: Each thread consumes kernel resources.
Examples:
- Windows NT/XP/2000, Linux

Many-to-Many Model:


Many user-level threads are multiplexed to a smaller number of kernel threads.


Advantages:
- Flexible: Allows creation of multiple user threads while efficiently using kernel resources.
- Scalable: Kernel threads can run in parallel on multiprocessors.
- Responsive: Kernel can schedule other threads when one blocks.

Disadvantages:
- Complexity: Multiplexing user-level threads adds complexity.
- Overhead: Management overhead for a larger number of threads.
- Deadlock Risk: Concurrent access to shared resources may lead to deadlocks.

Two-Level Model


- A variation on the many-to-many model is the two-level model.
- Similar to M:N, except that it also allows a user thread to be bound to a kernel thread.
Examples:
- HP-UX & Tru64 UNIX

4b) Explain five different scheduling criteria used in the computing scheduling mechanism
SCHEDULING CRITERIA:
The choice of which algorithm to use in a particular situation depends upon the properties
of the various algorithms. Many criteria have been suggested for comparing CPU scheduling algorithms. The
criteria include the following:
1. CPU Utilization:
- We want to keep the CPU as busy as possible. Conceptually, CPU utilization can range from 0 to 100
percent. In a real system, it should range from 40 percent (for a lightly loaded system) to 90 percent (for a
heavily used system).
2. Throughput:
- If the CPU is busy executing processes, then work is being done. One measure of work is the number of
processes that are completed per time unit, called throughput. For long processes, this rate may be one
process per hour; for short transactions, it may be ten processes per second.
3. Turnaround Time:
- This is the important criterion which tells how long it takes to execute that process. The interval from the
time of submission of a process to the time of completion is the turnaround time. Turnaround time is the sum
of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing
I/O.
4. Waiting Time:
- The CPU-scheduling algorithm does not affect the amount of time during which a process executes or does
I/O, it affects only the amount of time that a process spends waiting in the ready queue. Waiting time is the
sum of the periods spent waiting in the ready queue.


5. Response Time:
- In an interactive system, turnaround time may not be the best criterion. Often, a process can produce some
output fairly early and can continue computing new results while previous results are being output to the user.
Thus, another measure is the time from the submission of a request until the first response is produced. This
measure, called response time, is the time it takes to start responding, not the time it takes to output the
response. The turnaround time is generally limited by the speed of the output device.

4c) Calculate the average waiting time and the average turnaround time by drawing the Gantt chart using SRTF
and the Priority scheduling algorithm.

SRTF SCHEDULING

Priority Scheduling :-
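The process table and Gantt charts for this question are not reproduced here; as a sketch, SRTF (preemptive SJF) can be simulated one time unit at a time on hypothetical (arrival, burst) pairs:

```python
def srtf(procs):
    # procs: list of (arrival, burst); returns (avg_waiting, avg_turnaround)
    n = len(procs)
    remaining = [b for _, b in procs]
    completion = [0] * n
    t, done = 0, 0
    while done < n:
        # among arrived, unfinished processes, pick the shortest remaining time
        ready = [i for i in range(n) if procs[i][0] <= t and remaining[i] > 0]
        if not ready:
            t += 1                  # CPU idles until the next arrival
            continue
        i = min(ready, key=lambda k: remaining[k])
        remaining[i] -= 1           # run the chosen process for one time unit
        t += 1
        if remaining[i] == 0:
            completion[i] = t
            done += 1
    tat = [completion[i] - procs[i][0] for i in range(n)]
    wait = [tat[i] - procs[i][1] for i in range(n)]
    return sum(wait) / n, sum(tat) / n

procs = [(0, 7), (2, 4), (4, 1)]   # hypothetical (arrival, burst) pairs
print(srtf(procs))                 # (2.0, 6.0)
```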

5a) Define deadlock. What are the necessary conditions for deadlock to occur?


Deadlock is defined as a situation in which two or more competing actions are each waiting for the other to
finish, preventing any action from taking place.

Necessary conditions for deadlock to occur:-


A deadlock situation can arise if the following four conditions hold simultaneously in a system:

1.Mutual Exclusion: This condition requires that at least one resource be non-sharable, meaning only one
process can use it at a time. If another process requests the resource, it must wait until it's released.
2.Hold and Wait: A process holding at least one resource waits to acquire additional resources held by other
processes. This can create a situation where processes are indefinitely waiting for resources while holding onto
others.
3.No Preemption: Resources cannot be forcefully taken away from a process; they can only be released
voluntarily by the process holding them, once its task is complete.
4. Circular Wait: In this scenario, a set of processes exists where each process is waiting for a resource held by
the next process in the set, forming a circular dependency that prevents any of the processes from
progressing.

5b) Illustrate Peterson’s solution for the critical section problem.

- Peterson's Solution:
- A classic software-based solution to the critical-section problem.
- Restricted to two processes, denoted as P0 and P1, which alternate execution between their critical sections
and remainder sections.
- Process Pi is presented along with Pj, representing the other process, where j equals 1 - i.

- Shared Data Structures:


- int turn: Indicates whose turn it is to enter the critical section. If turn == i, then process Pi is allowed to
execute in its critical section.


- boolean flag[2]: Used to indicate if a process is ready to enter its critical section. flag[i] being true means Pi
is ready to enter its critical section.

- Algorithm Overview:
- To enter the critical section, process Pi:
1. Sets flag[i] to true.
2. Sets turn to the value j, indicating that the other process (Pj) can enter its critical section.
3. If both processes try to enter at the same time, turn will be assigned both i and j at roughly the same time, but only one of these assignments lasts; the final value of turn decides which process enters its critical section first, ensuring mutual exclusion.

- Ensuring Mutual Exclusion:


- Each process Pi enters its critical section only if either Pj's flag is false or it's Pi's turn.
- If both Pi and Pj could enter their critical sections simultaneously, both their flags would be true, but only
one process can have its turn at a time.

- Ensuring Progress and Bounded Waiting:


- Pi can enter the critical section only if it's not stuck in the loop while Pj is executing its critical section.
- If Pj is not ready to enter the critical section, Pi can enter it.
- If Pj is ready and Pi is waiting, Pi enters the critical section only after Pj finishes.
- Once Pj exits, it allows Pi to enter the critical section. If Pj wants to enter again, it has to let Pi enter first.
- This ensures progress because Pi will eventually enter its critical section after Pj, and bounded waiting as Pi
waits at most once for Pj to finish.
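The algorithm can be sketched with Python threads (a demonstration only: CPython's global interpreter lock approximates the sequentially consistent memory that Peterson's solution assumes; on real hardware, compiler and CPU reordering would require memory barriers):

```python
import threading

flag = [False, False]   # flag[i] true means Pi wants to enter its critical section
turn = 0                # whose turn it is to defer to
counter = 0             # shared data the critical section protects

def worker(i, iterations=10000):
    global turn, counter
    j = 1 - i
    for _ in range(iterations):
        flag[i] = True                     # 1. announce intent
        turn = j                           # 2. yield priority to the other process
        while flag[j] and turn == j:
            pass                           # entry section: busy-wait
        counter += 1                       # critical section
        flag[i] = False                    # exit section

t0 = threading.Thread(target=worker, args=(0,))
t1 = threading.Thread(target=worker, args=(1,))
t0.start(); t1.start()
t0.join(); t1.join()
print(counter)   # 20000: no increment was lost, so mutual exclusion held
```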

5C) Consider the following snapshot of the system:
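The snapshot itself is not reproduced here, so as a sketch, below is the Banker's safety algorithm applied to illustrative matrices (the Allocation, Max, and Available values are hypothetical, not those from the question):

```python
def is_safe(available, allocation, maximum):
    # returns a safe sequence of process indices, or None if the state is unsafe
    n, m = len(allocation), len(available)
    need = [[maximum[i][j] - allocation[i][j] for j in range(m)] for i in range(n)]
    work = list(available)
    finished = [False] * n
    sequence = []
    while len(sequence) < n:
        progressed = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[i][j]   # Pi finishes, releases everything
                finished[i] = True
                sequence.append(i)
                progressed = True
        if not progressed:
            return None   # no process can run to completion: unsafe state
    return sequence

allocation = [[0,1,0], [2,0,0], [3,0,2], [2,1,1], [0,0,2]]
maximum    = [[7,5,3], [3,2,2], [9,0,2], [2,2,2], [4,3,3]]
available  = [3,3,2]
print(is_safe(available, allocation, maximum))   # a safe sequence such as [1, 3, 4, 0, 2]
```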


6a) Explain different methods to recover from deadlocks

RECOVERY FROM DEADLOCK


The system can recover from a deadlock automatically. There are two options for breaking a deadlock: one is
simply to abort one or more processes to break the circular wait; the other is to preempt some resources from
one or more of the deadlocked processes.

Process Termination
To eliminate deadlocks by aborting a process, use one of two methods. In both methods, the system reclaims
all resources allocated to the terminated processes.

1. Abort all deadlocked processes: This method clearly will break the deadlock cycle, but at great expense; the
deadlocked processes may have computed for a long time, and the results of these partial computations must
be discarded and probably will have to be recomputed later.

2. Abort one process at a time until the deadlock cycle is eliminated: This method
incurs considerable overhead, since after each process is aborted, a deadlock-detection
algorithm must be invoked to determine whether any processes are still deadlocked.
If the partial termination method is used, then we must determine which deadlocked process (or processes)
should be terminated.
Factors influencing the selection of processes for termination include:
1. Priority of the process.
2. Remaining computation time needed to complete its task.
3. Resources utilized by the process.
4. Additional resources required to finish.
5. Number of processes to be terminated.
6. Nature of the process (interactive or batch).
Resource Preemption
To eliminate deadlocks using resource preemption, we successively preempt some resources from processes
and give these resources to other processes until the deadlock cycle is broken.
If preemption is required to deal with deadlocks, then three issues need to be addressed:

1. Selecting a Victim:
- Identify which resources and processes to preempt, considering cost minimization factors.
- Factors include the number of resources held by a deadlocked process and its consumed execution time.
2. Rollback:
- When preempting a resource, the affected process cannot proceed normally.
- Roll back the process to a safe state and restart it.
- Total rollback, aborting and restarting the process, is often the simplest solution due to the complexity of
determining a safe state.
3. Starvation Prevention:
- Ensure resources are not always preempted from the same process.
- Implement measures to guarantee fair resource allocation, preventing continual deprivation of resources
from any process.

6b) What is a resource allocation graph? Consider an example to explain how it is very useful in describing a
deadly embrace.

- System resource-allocation graph consists of a set of vertices V and a set of edges E.


-V is partitioned into two types:
-P = {P1, P2, …, Pn}, the set consisting of all the processes in the system
-R = {R1, R2, …, Rm}, the set consisting of all resource types in the system
-A directed edge from process Pi to resource type Rj is denoted by Pi → Rj.
-It signifies that process Pi has requested an instance of resource type Rj and is currently
waiting for that resource.


-A directed edge from resource type Rj to process Pi is denoted by Rj → Pi.
-It signifies that an instance of resource type Rj has been allocated to process Pi.
-A directed edge Pi → Rj is called a request edge; a directed edge Rj → Pi is called an
assignment edge.
The resource-allocation graph shown below depicts such a situation.

Given the definition of a resource-allocation graph, it can be shown that, if the


graph contains no cycles, then no process in the system is deadlocked.
If the graph does contain a cycle, then a deadlock may exist.
If each resource type has exactly one instance, then a cycle implies that a
deadlock has occurred.
If the cycle involves only a set of resource types, each of which has only a single
instance, then a deadlock has occurred. Each process involved in the cycle is
deadlocked.
In this case, a cycle in the graph is both a necessary and a sufficient condition
for the existence of deadlock.
If each resource type has several instances, then a cycle does not necessarily
imply that a deadlock has occurred. In this case, a cycle in the graph is a necessary but not
a sufficient condition for the existence of deadlock.
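For the single-instance case, detecting a deadlock therefore reduces to cycle detection on the graph. A sketch (the graph encoding and node names are illustrative; request edges point P→R, assignment edges R→P):

```python
def has_cycle(graph):
    # depth-first search; a back edge to a "gray" (in-progress) node means a cycle
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {v: WHITE for v in graph}

    def dfs(v):
        color[v] = GRAY
        for w in graph.get(v, []):
            if color.get(w, WHITE) == GRAY:
                return True                       # back edge: cycle found
            if color.get(w, WHITE) == WHITE and dfs(w):
                return True
        color[v] = BLACK
        return False

    return any(color[v] == WHITE and dfs(v) for v in list(graph))

# P1 -> R1 -> P2 -> R2 -> P1 is a circular wait: deadlock
deadlocked = {"P1": ["R1"], "R1": ["P2"], "P2": ["R2"], "R2": ["P1"]}
# same graph without P2's request edge: no cycle, no deadlock
fine       = {"P1": ["R1"], "R1": ["P2"], "P2": [],     "R2": ["P1"]}
print(has_cycle(deadlocked), has_cycle(fine))   # True False
```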

6c) What is a semaphore? State a Dining Philosopher problem gives a solution using semaphore

A semaphore is a synchronization tool used to solve various synchronization problems, and it can be
implemented efficiently. A semaphore S is an integer variable that is accessed only through two standard
atomic operations: wait() and signal().
The Dining Philosophers Problem is a classic synchronization problem where a set of philosophers sit around a
table, with each philosopher alternating between thinking and eating. The challenge arises from the need to
prevent deadlock and starvation while allowing philosophers to access shared resources (chopsticks in this
case) in a safe manner.


Solution using semaphore:

1. Semaphore Initialization: Initialize five semaphores, one for each chopstick. Initially, set all semaphore
values to 1 to indicate that all chopsticks are available.

semaphore chopstick[5] = {1, 1, 1, 1, 1};

2. Philosopher Actions: Each philosopher follows these steps:


- Tries to pick up the two chopsticks closest to them, one at a time.
- Eats if both chopsticks are acquired.
- Releases both chopsticks after eating.

3. Preventing Deadlock:
- Restrict the maximum number of philosophers allowed to be sitting simultaneously at the table to four.
- Philosophers can pick up both chopsticks only if both are available.
- Use an asymmetric solution where odd-numbered philosophers pick up their left chopstick first and then
the right one, while even-numbered philosophers do the opposite.

By utilizing semaphores to control access to shared resources (chopsticks), this solution ensures that
philosophers can eat without causing deadlock or starvation, thereby effectively addressing the Dining
Philosophers Problem.
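The asymmetric variant from point 3 can be sketched with Python threads, where threading.Semaphore stands in for the chopstick semaphores (the round count is illustrative):

```python
import threading

N = 5
chopstick = [threading.Semaphore(1) for _ in range(N)]   # all chopsticks available
meals = [0] * N

def philosopher(i, rounds=50):
    left, right = i, (i + 1) % N
    # odd philosophers pick up the left chopstick first, even the right first;
    # this asymmetry breaks the circular-wait condition and prevents deadlock
    first, second = (left, right) if i % 2 else (right, left)
    for _ in range(rounds):
        chopstick[first].acquire()    # wait(chopstick[first])
        chopstick[second].acquire()   # wait(chopstick[second])
        meals[i] += 1                 # eat
        chopstick[second].release()   # signal(chopstick[second])
        chopstick[first].release()    # signal(chopstick[first]); then think

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(meals)   # [50, 50, 50, 50, 50]: everyone ate, no deadlock or starvation
```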

7a) What is TLB? Explain TLB in detail with a paging system with a neat diagram

TLB, or Translation Look-aside Buffer, is a hardware cache used to accelerate the translation of virtual
addresses to physical addresses in a computer's memory management unit (MMU). Here's a detailed
explanation of TLB with a paging system:


TLB Overview:
- Purpose: TLB serves as a fast lookup cache for translating virtual addresses to physical addresses, avoiding
the slower access to the page table stored in memory.
- Structure: It consists of key-value pairs, where the key (or tag) represents the virtual address's page number,
and the value contains the corresponding frame number in physical memory.
- Size: Typically, TLBs are small, containing a limited number of entries (ranging from 64 to 1,024) due to
hardware constraints.

TLB Working:
1. Address Translation Request:
- When the CPU generates a logical address, the TLB is presented with the page number portion of the
address.
2. TLB Hit:
- If the TLB contains an entry for the page number (TLB hit), the associated frame number is retrieved
immediately, allowing direct access to the physical memory.
3. TLB Miss:
- If the page number is not found in the TLB (TLB miss), a memory reference to the page table in main
memory is required to retrieve the corresponding frame number.
4. TLB Update:
- Upon a TLB miss, the page number and its corresponding frame number are added to the TLB to expedite
future translations.
5. Replacement Policy:
- If the TLB is full, the operating system selects an entry for replacement, often based on a least recently used
(LRU) or random replacement policy.

TLB Operation:
- Hit Ratio: The percentage of times a page number is found in the TLB, indicating the effectiveness of TLB
caching.

Advantages and Disadvantages:


- Advantages:
- Fast address translation, improving system performance.
- Disadvantages:
- Expensive hardware due to its associative memory design.
- Limited capacity may lead to frequent TLB misses.
- Some TLBs may have fixed entries that cannot be modified.
- Some TLBs store an address-space identifier (ASID) for process-level address space protection.

Diagram:
Paging Hardware with TLB
In the diagram, the TLB sits between the CPU and the main memory, providing quick translations of virtual
addresses to physical addresses, thereby enhancing memory access efficiency.


7b) With the help of a neat diagram, explain the various steps of address binding.

Address Binding:-
• User programs typically refer to memory addresses with symbolic names. These symbolic
names must be mapped or bound to physical memory addresses.
• Address binding of instructions to memory-addresses can happen at 3 different stages.

1. Compile Time - If it is known at compile time where a program will reside in physical
memory, then absolute code can be generated by the compiler, containing actual
physical addresses. However, if the load address changes at some later time, then the
program will have to be recompiled.

2. Load Time - If the location at which a program will be loaded is not known at compile
time, then the compiler must generate relocatable code, which references addresses
relative to the start of the program. If that starting address changes, then the program
must be reloaded but not recompiled.

3. Execution Time - If a program can be moved around in memory during the course of its
execution, then binding must be delayed until execution time


7c) Consider the page reference string: 1,0,7,1,0,2,1,2,3,0,3,2,4,0,3,6,2,1 for a memory with three frames.
Determine the number of page faults using the FIFO, Optimal, and LRU replacement algorithms. Which
algorithm is most efficient?
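The worked tables for this answer are not reproduced here, but the fault counts can be computed directly from the given reference string:

```python
REF = [1,0,7,1,0,2,1,2,3,0,3,2,4,0,3,6,2,1]

def fifo(ref, frames=3):
    mem, faults = [], 0
    for p in ref:
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                mem.pop(0)                    # evict the page loaded earliest
            mem.append(p)
    return faults

def lru(ref, frames=3):
    mem, faults = [], 0                       # list kept in recency order
    for p in ref:
        if p in mem:
            mem.remove(p)                     # hit: refresh recency
        else:
            faults += 1
            if len(mem) == frames:
                mem.pop(0)                    # evict least recently used
        mem.append(p)
    return faults

def optimal(ref, frames=3):
    mem, faults = [], 0
    for i, p in enumerate(ref):
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                future = ref[i + 1:]
                # evict the page whose next use is farthest away (or never)
                victim = max(mem, key=lambda q: future.index(q)
                             if q in future else len(future) + 1)
                mem.remove(victim)
            mem.append(p)
    return faults

print(fifo(REF), lru(REF), optimal(REF))   # 13 12 9
```

With three frames this gives 13 faults for FIFO, 12 for LRU, and 9 for Optimal, so Optimal is the most efficient, as expected, since it uses perfect knowledge of future references.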


8a) What is demand paging? Explain the steps in handling page faults using the appropriate diagram.

Demand paging:-
A demand paging is similar to paging system with swapping when we want to execute a process we swap the
process the in to memory otherwise it will not be loaded in to memory.
Page fault
If a page is needed that was not originally loaded up, then a page fault trap is generated.

Steps in handling page fault:-


1. The memory address requested is first checked, to make sure it was a valid memory
request.
2. If the reference is to an invalid page, the process is terminated. Otherwise, if the page is not present in
memory, it must be paged in.
3. A free frame is located, possibly from a free-frame list.
4. A disk operation is scheduled to bring in the necessary page from disk.
5. After the page is loaded to memory, the process's page table is updated with the new frame number, and
the invalid bit is changed to indicate that this is now a valid page reference.
6. The instruction that caused the page fault must now be restarted from the beginning

8b) What is segmentation? Explain the basic method of segmentation with an example
Segmentation:
- Segmentation organizes memory into segments, each with a unique name and length, providing a more
flexible memory-management scheme.
Basic Method of Segmentation:
1. Memory is divided into logical address spaces, comprising segments identified by unique names.
2. Each segment represents a distinct part of the program, such as code, global variables, heap, stack, etc.
3. Addresses include both the segment name and an offset within the segment, enabling precise memory
access.
4. During compilation, the compiler automatically constructs segments based on the program's structure.


5. For example, a C program may have segments for code, global variables, heap, and function call stacks
where
• Code segment: Holds program instructions.
• Data segment: Stores global variables.
• Heap segment: Used for dynamic memory allocation.
• Stack segment: Stores function call information.
6. Segmentation facilitates organized memory management, simplifying access to program components.
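Address translation under segmentation can be sketched with a small segment table (the base and limit values below are illustrative):

```python
# segment table: segment number -> (base, limit)
segment_table = {
    0: (1400, 1000),   # code segment
    1: (6300,  400),   # data segment (global variables)
    2: (4300,  400),   # stack segment
}

def translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:
        # referencing beyond the segment's length traps to the OS
        raise MemoryError("segmentation fault: offset beyond segment limit")
    return base + offset           # physical address = base + offset

print(translate(0, 52))    # 1452: byte 52 of the code segment
print(translate(2, 399))   # 4699: last valid byte of the stack segment
```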

8c) Discuss the structure of the page table with a suitable diagram
- Hierarchical Paging:
- Problem & Solution: Large logical address space necessitates smaller page tables. Hierarchical paging divides
the page table into smaller sections to address this issue.
- Usage: Common in systems with extensive logical address spaces.
- Mechanism: Page table is structured hierarchically, reducing individual page table sizes.
- Description: Hierarchical organization of pages enables efficient memory management.
- Advantage: Efficiently manages large logical address spaces, reducing memory overhead.
- Disadvantage: Increases complexity in address translation due to multi-level page table lookups.
A logical address (on a 32-bit machine with a 1K page size) is divided into:
-Page number consisting of 22 bits
-Page offset consisting of 10 bits
-Since the page table is paged, the page number is further divided into:
-12-bit page number
-10-bit page offset
Thus, a logical address is structured as | p1 (12 bits) | p2 (10 bits) | d (10 bits) |, where p1 is an index into the outer page table, p2 is the displacement within the page of the inner page table, and d is the page offset.
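The split can be expressed with simple bit operations (a minimal sketch; the example address is arbitrary):

```python
# Split a 32-bit logical address for two-level paging with 1 KB pages:
# p1 (12 bits) indexes the outer page table, p2 (10 bits) indexes the
# inner page table, and d (10 bits) is the offset within the page.

def split_address(addr):
    d  = addr & 0x3FF          # low 10 bits: page offset
    p2 = (addr >> 10) & 0x3FF  # next 10 bits: inner page number
    p1 = addr >> 20            # high 12 bits: outer page number
    return p1, p2, d

print(split_address(0x12345678))  # -> (291, 277, 632)
```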

- Hashed Page Tables:


- Problem & Solution: Handling large address spaces requires efficient lookup mechanisms. Hashed page
tables resolve this by hashing virtual page numbers.
- Usage: Ideal for systems with extensive address spaces requiring efficient virtual-to-physical mapping.
- Mechanism: The virtual page number is hashed into the table; each entry holds a linked list of elements that hash to the same location, so collisions are resolved by chaining.
- Description: Each list element contains the virtual page number, the mapped frame, and a pointer to the next element, enabling quick lookup and mapping.
- Advantage: Effective for accommodating a wide range of virtual page numbers.
- Disadvantage: Increased overhead due to collision handling, impacting performance.
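A minimal sketch of this scheme, using buckets of chained (virtual page, frame) pairs; page and frame numbers are illustrative:

```python
# Hashed page table: each bucket holds a chain of (virtual_page, frame)
# pairs, so virtual page numbers that collide share a bucket.
NUM_BUCKETS = 16
buckets = [[] for _ in range(NUM_BUCKETS)]

def map_page(vpn, frame):
    buckets[vpn % NUM_BUCKETS].append((vpn, frame))

def lookup(vpn):
    # Walk the chain in the bucket the virtual page number hashes to.
    for page, frame in buckets[vpn % NUM_BUCKETS]:
        if page == vpn:
            return frame
    raise KeyError("page fault")  # no mapping yet

map_page(5, 42)
map_page(21, 7)    # 21 % 16 == 5: collides with vpn 5, chained in same bucket
print(lookup(21))  # -> 7
```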

- Inverted Page Tables:


- Problem & Solution: Traditional page tables consume substantial memory. Inverted page tables address this
by maintaining one entry per real memory page.
- Usage: Common in systems with limited physical memory to optimize usage.
- Mechanism: Each entry holds virtual address and process information.
- Description: Maintaining one entry per real memory page optimizes memory usage.
- Advantage: Reduces memory overhead, optimizing memory usage.
- Disadvantages:
- Slower access times due to full table searches for each memory reference.
- Complexity in implementing shared memory due to table structure.
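A sketch of an inverted table with an illustrative 8-frame memory (the pid and page numbers are made up); the full-table search in the lookup is exactly the slowness noted above:

```python
# Inverted page table: one entry per physical frame, holding the
# (pid, virtual_page) that currently occupies the frame.
inverted = [None] * 8    # 8 physical frames (illustrative size)
inverted[3] = (100, 7)   # frame 3 holds page 7 of process 100

def frame_of(pid, vpn):
    # The whole table must be searched for each memory reference,
    # which is why real systems add a hash front-end or rely on the TLB.
    for frame, entry in enumerate(inverted):
        if entry == (pid, vpn):
            return frame
    raise KeyError("page fault")

print(frame_of(100, 7))  # -> 3
```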

9a) What is a file? What are its attributes? Explain file operations.

A file is a named collection of related information recorded on secondary storage. Its contents are defined by its creator and can include many kinds of data: source programs, object programs, numeric data, text, payroll records, graphic images, and sound recordings.
File Attributes are :-
1. Name
2. Identifier
3. Type
4. Location
5. Size
6. Protection
7. Time, date, and user identification

9a) Explain file operations.

9b) Explain in detail about various file operations in a file system.

File Operations are :-

1)Creating a file:
-Allocate space in the file system and create an entry in the directory for the new file.
-Assign a unique identifier to the file within the file system.
-The file becomes independent of the process, user, or system that created it.

2)Writing a file:
-Use a system call to specify the file name and the data to write.
-System searches the directory to locate the file and maintains a write pointer for the next write location.
-Update the file content with the provided data, potentially expanding its size.

3)Reading a file:
-Use a system call to specify the file name and where to put the read data.
-System searches the directory for the file and maintains a read pointer for the next read location.
-Retrieve the requested data from the file and update the read pointer.

4)Repositioning within a file:

-Search the directory for the file entry.


-Reposition the current file position pointer to a given value, enabling random access.
-No actual I/O operation is involved in repositioning.

5)Deleting a file:
-Search the directory for the file and erase its entry.
-Release file space for reuse by other files, effectively removing it from the file system.

6)Truncating a file:
-Reset the file to length zero while retaining its attributes.
-Release file space for reuse without deleting the file entry from the directory.

7) File locking mechanisms:


- Prevent simultaneous access to a file by multiple processes.
- Ensure data consistency and integrity during concurrent file operations.
- Types of locks include shared locks (read locks) and exclusive locks (write locks).

8) File attributes:
- Metadata associated with files, such as permissions, timestamps, and file size.
- Accessed and modified using system calls or file management utilities.
- Attributes vary across different file systems and operating systems.

9) Error handling:
- System calls return error codes to indicate file operation success or failure.
- Errors may occur due to insufficient permissions, disk full, or other issues.
- Applications handle errors gracefully by checking return values and taking appropriate action.

10) Buffering and caching:


- Improve file I/O performance by reducing disk access latency.
- Buffering involves temporarily storing data in memory before writing it to disk.
- Caching retains frequently accessed data in memory to speed up subsequent reads.

11) File system consistency:


- Ensures that file system data structures remain intact and coherent.
- Achieved through techniques like journaling, consistency checks, and transaction mechanisms.
- Prevents data corruption and maintains system reliability.

12) File compression and encryption:


- Techniques used to reduce file size or enhance data security.
- Compression algorithms reduce the size of files to save storage space.
- Encryption methods encode file contents to prevent unauthorized access or tampering.

13)Backup and recovery:


- Processes for creating copies of files to protect against data loss.
- Regular backups are essential for disaster recovery and data restoration.
- Backup strategies include full backups, incremental backups, and differential backups.

14) Directory traversal:


-Navigating through directory structures to locate files and directories.
-Facilitating efficient file system exploration and management.
-Utilization of traversal techniques such as depth-first search and breadth-first search.
-Support for tasks such as file search, directory listing, and file system navigation.
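The core operations (create, write, reposition, read, truncate, delete) map directly onto ordinary file-system calls; a minimal Python demonstration using a temporary file:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")

with open(path, "w") as f:     # create the file and write to it
    f.write("hello, file system")

with open(path, "r+") as f:
    f.seek(7)                  # reposition: moves the pointer, no actual I/O
    middle = f.read(4)         # read from the new position
    f.truncate(5)              # reset length to 5 bytes, keep the directory entry

with open(path) as f:
    remainder = f.read()

print(middle, remainder)       # -> file hello
os.remove(path)                # delete: erase the entry, free its space
```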

9c) Discuss various directory structures with neat diagrams.


The most common schemes for defining the logical structure of a directory are described below
1. Single-level Directory
2. Two-Level Directory
3. Tree-Structured Directories
4. Acyclic-Graph Directories
5. General Graph Directory

1. Single-level Directory:
- All files are stored in a single directory.
- Simple to implement and understand.
- However, it becomes impractical as the number of files grows due to naming conflicts and difficulty in
managing large numbers of files.

2. Two-Level Directory:
- Each user has a separate directory under a master directory.
- Provides efficient file organization and search within user directories.
- Users are isolated, limiting collaboration and file sharing between users.

3. Tree-Structured Directories:
- Organized in a hierarchical tree-like structure with a root directory and subdirectories.
- Allows for a systematic organization of files with unique paths for each file.
- Enables users to access files in other directories, but longer path names may be cumbersome.

4. Acyclic Graph Directories:


- Allows directories to share subdirectories and files.
- Supports efficient sharing of common files or directories without duplication.
- Management of multiple paths to the same file and deletion of shared files pose challenges.

5. General Graph Directory:


- Allows arbitrary links between directories, which can introduce cycles; searches therefore limit the number of directories traversed to avoid looping indefinitely.
- Handles cyclic directory structures and manages reference counts to ensure proper deletion of files.
- Utilizes garbage collection to reclaim unused space and maintain directory integrity.
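Walking a tree-structured directory depth-first can be sketched with Python's os.walk; the directory and file names below are illustrative:

```python
import os
import tempfile

# Build a small two-level directory tree to walk.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "user1", "projects"))
open(os.path.join(root, "user1", "notes.txt"), "w").close()

found = []
for dirpath, dirnames, filenames in os.walk(root):  # depth-first traversal
    for name in filenames:
        found.append(os.path.relpath(os.path.join(dirpath, name), root))

print(found)
```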

10a) Explain contiguous and linked disk space allocation methods.


Contiguous Allocation:
1. Definition: Files are stored in contiguous blocks on the disk, occupying sequential disk space.
2. Accessing Files: Simple access with starting block number and length required.
3. Allocation Method: Each file is defined by its starting block address and length, with subsequent blocks
following sequentially.
4. Access Efficiency: Supports efficient sequential and direct access operations.

5. Challenges:
- Finding Space: Difficult to find contiguous space for new files, depending on the free space management
system.
- Satisfying Requests: Issues in satisfying requests from a list of free holes, commonly addressed using first-fit
or best-fit strategies.
- External Fragmentation: Free space fragmentation occurs as files are allocated and deleted, leading to
inefficient use of disk space.
6. Pros: Straightforward access, efficient for both sequential and direct access.
7. Cons: Difficulty in finding contiguous space for new files, challenges with external fragmentation.
8. Use Cases: Suitable for systems with predictable file sizes and where file access patterns are known in
advance.

FIG: Contiguous Allocation FIG: Linked Allocation

Linked Allocation:
1. Definition: Files are stored as linked lists of disk blocks, with blocks scattered across the disk.
2. File Structure: Each file represented by a linked list of blocks, with directory entries containing pointers to
the first and last blocks.
3. File Creation: Creating a new file involves adding an entry in the directory with pointers initialized to nil.
4. Writing to Files: Writing to a file requires finding a free block, writing data to it, and linking it to the end of
the file's linked list.
5. Reading from Files: Reading involves following pointers from block to block, allowing access to scattered
blocks.
6. Advantages:
- No external fragmentation as any free block can be used to satisfy a request.
- File size need not be declared at creation, and files can grow dynamically.
7. Disadvantages:
- Effective primarily for sequential-access files, as accessing a specific block requires traversing the entire
linked list.
- Requires additional space for pointers, which can be minimized using cluster allocation.
8. Reliability: Relies on scattered pointers for file linkage, posing a risk of data loss if a pointer is lost or
damaged.
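Linked allocation can be sketched as a chain of block pointers; the block numbers below are illustrative (a FAT variant would keep the same links in a dedicated table rather than inside the blocks themselves):

```python
# Each disk block stores the number of the next block of the file;
# None marks the last block. The directory entry records the first block.
next_block = {9: 16, 16: 1, 1: 10, 10: 25, 25: None}
directory = {"mail": 9}

def blocks_of(filename):
    """Follow the chain of pointers to enumerate a file's blocks."""
    chain, block = [], directory[filename]
    while block is not None:
        chain.append(block)
        block = next_block[block]  # a lost pointer here loses the file's tail
    return chain

print(blocks_of("mail"))  # -> [9, 16, 1, 10, 25]
```

Note that reaching block 25 requires following four pointers from block 9, which is why linked allocation suits sequential access far better than direct access.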
10b) Explain the access matrix method of system protection with the domain as objects and its
implementation
Access Matrix
- The model of protection can be viewed abstractly as a matrix, called an access matrix.
- The rows of the access matrix represent domains, and the columns represent objects.
Each entry in the matrix consists of a set of access rights.

- The entry access(i,j) defines the set of operations that a process executing in domain Di can invoke on object
Oj.
- To illustrate these concepts, we consider the access matrix shown in Figure below.

- There are four domains and four objects-three files (F1, F2, F3) and one laser printer. A process executing in
domain D1 can read files F1 and F3 .
- A process executing in domain D4 has the same privileges as one executing in domain D1; but in addition, it
can also write onto files F1 and F3.
- Note that the laser printer can be accessed only by a process executing in domain D2.
- The access-matrix scheme provides us with the mechanism for specifying a variety of
policies.
- The mechanism consists of implementing the access matrix and ensuring that the
semantic properties we have outlined indeed hold.
- More specifically, we must ensure that a process executing in domain Di can access
only those objects specified in row i, and then only as allowed by the access-matrix
entries.

Implementation of Access Matrix

How can the access matrix be implemented effectively? In general, the matrix will be sparse; that is, most of the entries will be empty. Although data-structure techniques are available for representing sparse matrices, they are not particularly useful for this application because of the way in which the protection facility is used.
Methods:

• Global Table
• Access Lists for Objects
• Capability Lists for Domains
• A Lock-Key Mechanism

1. Global Table:
- Simplest implementation with ordered triples <domain, object, rights-set> stored in a file.
- Upon operation execution, search for <Di, Oj, Rk> in the table to determine access rights.
2. Access Lists for Objects:
- Each column represented as an access list for an object, containing ordered pairs <domain, rights-set>.
- Search access list for object Oj to determine if operation is allowed for domain Di.
3. Capability Lists for Domains:
- Domain's capability list contains objects with allowed operations.
- Process executes operation with specified capability, granting access if capability is possessed.
4. Lock-Key Mechanism:

- Each object has a list of unique locks, and each domain has a list of unique keys.
- Process can access object only if domain possesses a key matching one of the object's locks.
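The sparse matrix can be represented as a mapping from (domain, object) pairs to rights sets, storing only the non-empty entries of the four-domain example described above:

```python
# Non-empty entries of the access matrix; absent pairs mean no rights,
# which captures the sparseness directly.
access = {
    ("D1", "F1"): {"read"},
    ("D1", "F3"): {"read"},
    ("D2", "printer"): {"print"},
    ("D4", "F1"): {"read", "write"},
    ("D4", "F3"): {"read", "write"},
}

def allowed(domain, obj, right):
    """Check whether a process in `domain` may perform `right` on `obj`."""
    return right in access.get((domain, obj), set())

print(allowed("D4", "F1", "write"))  # -> True
print(allowed("D1", "F1", "write"))  # -> False
```

Grouping the same entries by object gives the access-list implementation; grouping them by domain gives capability lists.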

10c) Given the request sequence 95, 180, 34, 119, 11, 123, 62, 64, with the disk head initially at track 50 and tracks numbered 0 to 199, what is the total distance travelled by the disk arm using the FCFS, SSTF, LOOK and C-LOOK algorithms?

First Come, First Serve (FCFS):


- Requests are handled in the order they arrive, with each new request added to the end of the queue.
- Head moves sequentially from one request to the next: 50, 95, 180, 34, 119, 11, 123, 62, 64.
- Total head movement is the sum of the distances between consecutive requests: 45 + 85 + 146 + 85 + 108 + 112 + 61 + 2.
- This method tends to oscillate between distant tracks, resulting in inefficient movement.
- Total head movement: 644 tracks.

Shortest Seek Time First (SSTF):


- Requests are served based on the shortest distance from the current head position.
- Prioritizes shorter seeks over arrival order; service order from track 50: 62, 64, 34, 11, 95, 119, 123, 180.
- However, there is a risk of starvation: if close requests keep arriving, distant ones might never be serviced.
- Total head movement: 12 + 2 + 30 + 23 + 84 + 24 + 4 + 57 = 236 tracks.

LOOK:


- The head sweeps in one direction servicing requests, as in the elevator-like SCAN algorithm, but reverses at the last request in that direction instead of travelling all the way to the edge of the disk.
- Assuming the head initially moves toward higher-numbered tracks: 50, 62, 64, 95, 119, 123, 180, then it reverses and services 34 and 11.
- Total head movement: (180 - 50) + (180 - 11) = 130 + 169 = 299 tracks.

C-LOOK:
- A circular variant of LOOK: the head services requests in one direction only, and after the furthest request in that direction it seeks back to the furthest pending request on the other side.
- Service order: 50, 62, 64, 95, 119, 123, 180, then a long seek back to 11, then 34.
- Counting that return seek, total head movement: (180 - 50) + (180 - 11) + (34 - 11) = 130 + 169 + 23 = 322 tracks.
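A short simulation, assuming the head starts at track 50 and initially moves toward higher-numbered tracks, computes the total arm travel for each policy (the C-LOOK return seek is counted as movement):

```python
REQUESTS, START = [95, 180, 34, 119, 11, 123, 62, 64], 50

def fcfs(reqs, head):
    # Serve requests in arrival order, summing the seek distances.
    total = 0
    for r in reqs:
        total += abs(head - r)
        head = r
    return total

def sstf(reqs, head):
    # Always pick the pending request closest to the current position.
    pending, total = list(reqs), 0
    while pending:
        nxt = min(pending, key=lambda r: abs(head - r))
        total += abs(head - nxt)
        head = nxt
        pending.remove(nxt)
    return total

def look(reqs, head):
    # Sweep up to the last request, then reverse through the rest.
    up = sorted(r for r in reqs if r >= head)
    down = sorted((r for r in reqs if r < head), reverse=True)
    return fcfs(up + down, head)

def clook(reqs, head):
    # Sweep up, then seek back to the lowest pending request and sweep up again.
    up = sorted(r for r in reqs if r >= head)
    wrap = sorted(r for r in reqs if r < head)
    return fcfs(up + wrap, head)

print(fcfs(REQUESTS, START))   # -> 644
print(sstf(REQUESTS, START))   # -> 236
print(look(REQUESTS, START))   # -> 299
print(clook(REQUESTS, START))  # -> 322
```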
