OS Question Bank
Questions for exam preparation. Please note that coverage of the entire syllabus is recommended and necessary.
Multi-Tasking: Multi-tasking allows multiple tasks or processes to run concurrently on a single CPU.
The CPU switches rapidly between tasks, giving users the illusion of parallel execution. Each task gets
a slice of CPU time to execute.
Multiprocessing: In a multiprocessing system, multiple CPUs (or cores) work together to execute multiple tasks
simultaneously, enabling true parallel execution rather than the illusion of it. Tasks can be divided among the
CPUs for efficient execution.
6. What is Process? Draw the Five State Process Model and Explain it.
:-In operating systems, a process represents a program in execution. The Five State Process Model illustrates
the various states a process can be in during its lifecycle. These states include New, Ready, Running, Waiting,
and Terminated.
1. New: The process is being created.
2. Ready: The process is ready to run and is waiting to be assigned the CPU.
3. Running: The process's instructions are being executed on the CPU.
4. Waiting: The process is waiting for an event to occur, such as I/O completion.
5. Terminated: The process has finished execution.
8. Write the difference between the user-level thread and the Kernel-level thread.
Waiting Time: The total time a process spends waiting in the ready queue.
Turnaround Time: The total time taken to execute a particular process, including waiting time and
execution time.
Response Time: The time taken from the submission of a request until the first response is produced.
Short-term Scheduler: A scheduler that selects which process should be executed next and is also
known as the CPU scheduler.
CPU Utilization: The percentage of time the CPU is busy processing instructions.
Long-term Scheduler: A scheduler that selects processes from the job pool and loads them into
memory for execution.
Medium-term Scheduler: A scheduler that swaps processes in and out of memory to improve the
overall system performance.
When multiple processes or threads access shared resources concurrently, there is a risk of data corruption or
inconsistency. To address this issue, critical sections are used to ensure mutual exclusion, allowing only one
process to execute the critical code segment while others are blocked.
To implement a critical section, synchronization mechanisms like semaphores, mutexes, or locks are utilized.
These mechanisms help control access to shared resources and coordinate the execution of processes to avoid
conflicts.
For example, in C programming, critical sections can be implemented using mutexes from the pthread library:
#include <pthread.h>

pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER; /* statically initialized lock */

void* critical_section(void* arg) {
    pthread_mutex_lock(&mutex);   /* block until the lock is acquired */
    /* Critical section code */
    pthread_mutex_unlock(&mutex); /* release the lock for waiting threads */
    return NULL;
}
By properly defining and protecting critical sections, operating systems ensure that concurrent processes can
safely access shared resources without compromising data consistency.
In the Producer-Consumer scenario, multiple threads are involved: producers that add data to a shared
buffer and consumers that remove data from the same buffer. A race condition can arise when multiple
threads access the shared buffer concurrently without proper synchronization mechanisms in place.
For instance, consider a situation where a producer thread checks if the buffer is full before adding an item. If
another producer thread adds an item to the buffer in the meantime, the first producer may mistakenly add
an item to a full buffer, leading to data loss or corruption.
To prevent race conditions in the Producer-Consumer problem, synchronization techniques like mutexes,
semaphores, or monitors can be used. These mechanisms ensure that only one thread can access the shared
buffer at a time, preventing conflicts and maintaining data integrity.
Understanding race conditions in the context of the Producer-Consumer problem highlights the importance of
proper synchronization to avoid data inconsistencies and ensure the correct functioning of concurrent
programs.
14. List and define the three conditions needed to achieve synchronization. ( in
this you have to explain Mutual Exclusion, progress, bounded waiting)
:-In operating systems, achieving synchronization among processes is crucial to ensure proper coordination
and avoid conflicts. Three fundamental conditions must be met to achieve synchronization effectively:
Mutual Exclusion: This condition ensures that only one process at a time can access a shared resource
or critical section. By enforcing mutual exclusion, processes take turns accessing resources, preventing
simultaneous access that could lead to data corruption or inconsistencies. Implementing mutual
exclusion can be done using techniques like locks, semaphores, or mutexes.
Progress: Progress in synchronization means that processes should not be prevented from entering
their critical sections indefinitely. It ensures that if no process is executing in its critical section and
some processes wish to enter their critical sections, only those processes not executing in their
remainder section can participate in deciding which will enter next. This condition prevents deadlock
situations and ensures that processes can make progress towards completing their tasks.
Bounded Waiting: Bounded waiting guarantees that there is a limit on the number of times other
processes can enter their critical sections after a process has made a request to enter its critical
section. It prevents a process from being indefinitely postponed by other processes trying to access
shared resources. By setting a bound on the number of times a process can be bypassed, fairness is
maintained, and every process eventually gets a chance to access critical sections.
15. Discuss Peterson's solution and how it ensures mutual exclusion, progress
and bounded waiting.
:-Peterson's solution is a classic algorithm used to achieve mutual exclusion in concurrent systems. It utilizes
two variables: flag and turn, along with a clever use of busy waiting. The solution ensures mutual exclusion by
allowing only one process to enter the critical section at a time.
To guarantee progress, Peterson's solution uses the turn variable to enforce fairness, ensuring that processes
take turns entering the critical section. This prevents starvation and ensures that all processes eventually
make progress.
Bounded waiting is achieved in Peterson's solution by using the turn variable to limit the number of times a
process can enter the critical section before other processes get a chance. This prevents any process from being
blocked indefinitely, thus bounding the waiting time.
wait(S)
{
    while (S <= 0)
        ;    // busy-wait until the semaphore becomes positive
    S--;     // consume one unit of the resource
}
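The wait() operation above is paired with a signal() operation that releases the semaphore; in the same pseudocode style:

signal(S)
{
    S++;     // release one unit, potentially waking a waiting process
}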
Binary Semaphores: Also known as mutex (mutual exclusion) semaphores, these have only two states:
0 and 1. They are used for controlling access to resources, ensuring only one process can access a
resource at a time.
Counting Semaphores: These semaphores can have values greater than 1, allowing multiple processes
to access a resource simultaneously. They are useful for scenarios where multiple instances of a
resource can be accessed concurrently.
18. Describe Down()/ P() and Up()/V() operations for counting and
binary semaphores.
:-In operating systems, semaphores are used for process synchronization. The Down() or P() operation
decrements the semaphore value. For counting semaphores, if the value is greater than zero, it decrements
the value; otherwise, it blocks the process until the value becomes positive.
Conversely, the Up() or V() operation increments the semaphore value. For counting semaphores, if there are
processes waiting due to a semaphore value of zero, it unblocks one of them. For binary semaphores, the value
is either 0 or 1, representing locked or unlocked states, respectively.
These operations are crucial for managing access to shared resources and ensuring mutual exclusion in
concurrent systems. Understanding their behavior is fundamental for designing efficient and deadlock-free
systems.
Here is a high-level overview of how Semaphores can be used to implement the Bounded Buffer Producer-
Consumer Problem:
1. Initialize Semaphores:
mutex = 1      # binary semaphore guarding the buffer
empty = N      # counts empty slots (N = buffer size)
full = 0       # counts filled slots
2. Producer code:
produce_item()
wait(empty)    # Decrement empty slots
wait(mutex)    # Enter critical section
insert_item_into_buffer()
signal(mutex)  # Exit critical section
signal(full)   # Increment filled slots
3. Consumer code:
wait(full)     # Decrement filled slots
wait(mutex)    # Enter critical section
remove_item_from_buffer()
signal(mutex)  # Exit critical section
signal(empty)  # Increment empty slots
This approach effectively addresses the Bounded Buffer Producer-Consumer Problem by coordinating the
actions of producers and consumers using Semaphores in an efficient and synchronized manner.
A solution to this problem involves using semaphores to control access to the chopsticks. Each chopstick is
represented by a semaphore. Philosophers can only pick up both chopsticks if both semaphores are available. If
a philosopher cannot acquire both chopsticks, they release the first one and try again later. This approach
ensures that deadlocks and resource contention are avoided, allowing philosophers to dine peacefully.
1. Mutual Exclusion: At least one resource must be held in a non-sharable mode, so only one process can use it at a time.
2. Hold and Wait: Processes hold at least one resource while waiting to acquire additional resources held by others.
3. No Preemption: Resources cannot be forcibly taken away; they are released only voluntarily by the holding process.
4. Circular Wait: A circular chain of two or more processes exists, where each process is waiting for a
resource held by the next process in the chain.
Resource Allocation Graph: One common method is to use a resource allocation graph to detect and
prevent deadlocks. By representing processes as nodes and resource allocations as edges, the system
can analyze the graph to identify potential deadlocks and take corrective actions.
Resource Ordering: Another approach is to impose a strict ordering of resource requests. By requiring
processes to request resources in a predefined order, the system can prevent circular waits and
reduce the likelihood of deadlock.
Timeouts: Introducing timeouts for resource requests can also help prevent deadlock. If a process is
unable to acquire a resource within a specified time frame, it can release its current resources and
try again later, avoiding a potential deadlock situation.
Deadlock Avoidance: Using algorithms like Banker's Algorithm, the system can predict whether
allocating a resource to a process will lead to a safe state or a deadlock. By making informed
decisions about resource allocations, deadlock can be avoided proactively.
Deadlock Detection and Recovery: Implementing deadlock detection algorithms allows the system to
identify when a deadlock has occurred. Once detected, recovery mechanisms such as process
termination, resource preemption, or rollback can be employed to resolve the deadlock and restore
system functionality.
24. Explain the following allocation algorithms: 1) First-fit 2) Best-fit 3) Worst-fit
:-Allocation algorithms in operating systems play a crucial role in managing memory efficiently. Here's a brief
explanation of the three common allocation algorithms:
First-fit: This algorithm allocates the first available block of memory that is large enough to
accommodate the process. It starts searching from the beginning of the memory and stops at the
first block that fits the process size.
Best-fit: The best-fit algorithm allocates the smallest block of memory that is large enough to hold
the process. It searches the entire memory and selects the block that results in the smallest leftover
fragment.
Worst-fit: In contrast to best-fit, the worst-fit algorithm allocates the largest available block of
memory to the process. It aims to leave the largest fragment for future use.
25. What is Paging? What is Page Table? Explain the conversion of Virtual Address
to Physical Address in Paging with example.
:-Paging is a memory management scheme that allows a process's physical memory to be allocated
non-contiguously. It divides the physical memory into fixed-size blocks called frames and
logical memory into blocks of the same size known as pages. The Page Table is a data structure that maps
virtual pages to physical frames, enabling the translation of virtual addresses to physical addresses.
To convert a Virtual Address to a Physical Address using Paging, the system uses the Page Table. It involves
extracting the page number and offset from the virtual address. The page number is used to index the Page
Table, retrieving the corresponding frame number. Finally, the offset is combined with the frame number to
form the Physical Address.
For example, let's consider a system with a page size of 4KB and a frame size of 4KB. If a virtual address is
12KB, the page number is 3 (12KB / 4KB) and the offset is 0KB. By looking up the Page Table entry for
page 3, if it maps to frame 5, the Physical Address would be 20KB (5 * 4KB).
26. What is fragmentation? Explain the difference between internal and external
fragmentation.
:-Fragmentation in operating systems refers to the phenomenon where memory or disk space becomes
divided into smaller blocks over time, leading to inefficiencies in resource utilization. Internal fragmentation
occurs when memory blocks allocated to processes are larger than required, resulting in wasted space within
those blocks. On the other hand, external fragmentation transpires when there are enough total memory
spaces available to satisfy a request, but they are not contiguous, causing inefficient memory allocation.
In essence, internal fragmentation is the wastage within allocated memory blocks, while external
fragmentation is the scattering of free memory space throughout the system, making it challenging to allocate
contiguous blocks of memory to processes efficiently. Understanding and managing fragmentation is crucial
for optimizing system performance and resource allocation.
27. What is demand paging? Explain the steps of Demand paging with a diagram.
:-Demand paging is a memory management scheme used in operating systems where pages are only brought
into memory when they are demanded by the program during execution. This approach helps in reducing the
amount of physical memory needed to run programs efficiently.
1. Page Fault: When a program accesses a page that is not in memory, a page fault occurs.
2. Find Free Frame: The operating system looks for a free frame in memory to load the demanded
page.
3. Swap Page In: If the demanded page is not in memory, the operating system swaps it in from the
disk to a free frame.
4. Update Page Table: The page table is updated to reflect the new location of the page in memory.
5. Resume Process: The process that caused the page fault is resumed, and the requested page is now
available in memory.
[Program] -> [Page Fault] -> [Find Free Frame] -> [Swap Page In] -> [Update Page Table] ->
[Resume Process]
28. Consider a system where the main memory access time is 100ns and the TLB
access time is 20ns. The hit ratio is 95%. What is an effective memory access
time with and without TLB?
:-With a TLB, the effective memory access time is:
Effective Memory Access Time = Hit Ratio * (TLB Access Time + Memory Access Time) + (1 - Hit Ratio) * (TLB
Access Time + 2 * Memory Access Time)
On a TLB miss, the TLB is still searched first, and the page table must then be read from main memory before
the actual access, costing one extra memory reference.
Given: Memory access time = 100ns, TLB access time = 20ns, Hit ratio = 95%.
With TLB:
Effective Memory Access Time = 0.95 * (20 + 100) + 0.05 * (20 + 100 + 100) = 114 + 11 = 125ns
Without TLB:
Every reference needs a page-table lookup in memory followed by the actual access:
Effective Memory Access Time = 100 + 100 = 200ns
Therefore, the effective memory access time with the TLB is 125ns, while without it, it is 200ns.
29. Consider a system where Main Memory access time is 30 ns. Page fault
service time is 300 ns. Page hit ration is 85%. What is Effective Memory
Access Time(EMAT)?
:-To calculate the Effective Memory Access Time (EMAT), we can use the formula:
EMAT = Page Hit Ratio * Main Memory Access Time + (1 - Page Hit Ratio) * (Page Fault Service Time + Main
Memory Access Time)
Given: Main memory access time = 30ns, Page fault service time = 300ns, Page hit ratio = 85%.
EMAT = 0.85 * 30 + 0.15 * (300 + 30) = 25.5 + 49.5 = 75ns
1. Sequential File Organization: Data is stored in a sequential order, suitable for batch processing.
2. Indexed File Organization: Indexes are used to access records quickly, enhancing search performance.
3. Hashed File Organization: Hash functions map keys to addresses, enabling direct access to data.
4. Clustered File Organization: Data with similar characteristics is stored together, reducing seek time.
5. Distributed File Organization: Data is spread across multiple locations, enhancing parallel access.
Each technique has its strengths and weaknesses, making it essential to choose the most suitable one based on
the specific requirements of the system.
31. Explain the concept of Direct Memory Access (DMA) and its significance in I/O.
:-Direct Memory Access (DMA) is a technique used in computer systems to allow certain hardware subsystems
to access system memory independently of the CPU. In Input/Output (I/O) operations, DMA plays a crucial
role in enhancing system performance by offloading data transfer tasks from the CPU to dedicated DMA
controllers.
When a device needs to transfer data to or from memory, traditionally, the CPU would be involved in
managing these data transfers. However, this process can be inefficient and time-consuming, especially for
large data transfers. Here is where DMA comes into play.
DMA enables devices like network cards, sound cards, or storage controllers to directly transfer data to and
from memory without CPU intervention. This significantly reduces the burden on the CPU, allowing it to focus
on other tasks while data transfers occur in the background.
The significance of DMA in I/O operations includes:
Reduced CPU Overhead: Offloading data transfer tasks to DMA controllers frees up the CPU to
handle more critical tasks, improving system efficiency and multitasking capabilities.
Efficient Data Transfers: DMA allows for efficient block data transfers, reducing latency and
improving the overall responsiveness of the system.
Enhanced Throughput: With DMA, multiple devices can perform data transfers simultaneously,
increasing overall system throughput and reducing bottlenecks.
In conclusion, DMA is a vital mechanism in modern computer systems that optimizes data transfer processes,
enhances system performance, and improves the efficiency of I/O operations by reducing CPU involvement in
data transfers.
33. Suppose a disk drive has 300 cylinders. The current position of the head is 90.
The queue of pending requests is 36, 79, 15, 120, 199, 270, 89, 170. Calculate the head
movement for the following algorithms.
1. FCFS 2. SSTF 3. SCAN 4. C-SCAN
• Request queue: 36, 79, 15, 120, 199, 270, 89, 170; head starts at cylinder 90. (For SCAN and
C-SCAN, the head is assumed to first move toward cylinder 0, as the surviving working indicates.)
1. FCFS: Service order 36, 79, 15, 120, 199, 270, 89, 170.
Head movement = 54 + 43 + 64 + 105 + 79 + 71 + 181 + 81 = 678 cylinders.
2. SSTF: Service order 89, 79, 120, 170, 199, 270, 36, 15 (always the nearest pending request).
Head movement = 1 + 10 + 41 + 50 + 29 + 71 + 234 + 21 = 457 cylinders.
3. SCAN: Serve 89, 79, 36, 15, move from 15 to 0 (15), then reverse and serve 120, 170, 199, 270.
Head movement = (90 - 0) + (270 - 0) = 360 cylinders.
4. C-SCAN: Serve 89, 79, 36, 15, move to 0, jump to cylinder 299, then serve 270, 199, 170, 120.
Head movement = 90 + 299 + (299 - 120) = 568 cylinders (counting the wrap-around seek).