
Disclaimer: The provided question bank contains ONLY IMPORTANT questions for exam preparation. Please note that coverage of the entire syllabus is recommended and necessary.

Operating System Question Bank

1. Give the Difference between Multi-Programming, Multi-Tasking, and Multiprocessing Systems.
Multi-Programming: In a multi-programming system, multiple programs are loaded into memory
simultaneously, and the CPU switches between them to increase CPU utilization. It aims to reduce
CPU idle time by keeping it busy with other tasks when one task is waiting.

Multi-Tasking: Multi-tasking allows multiple tasks or processes to run concurrently on a single CPU.
The CPU switches rapidly between tasks, giving users the illusion of parallel execution. Each task gets
a slice of CPU time to execute.

Multiprocessing: In a multiprocessing system, multiple CPUs work together to execute multiple tasks simultaneously. In a typical tightly coupled system the CPUs share main memory and peripherals, enabling true parallel processing; tasks can be divided among the CPUs for efficient execution.

2. Explain different types of OS in detail


3. What is Kernel? Differentiate between Monolithic Kernel and Micro Kernel.
:-A kernel is the core component of an operating system that manages hardware and provides essential services such as process, memory, and device management.
- When the CPU is in kernel mode, the code being executed can access any memory address and any hardware resource.
- Kernel mode is a very privileged and powerful mode.
- If a program crashes in kernel mode, the entire system is halted.
- In a monolithic kernel, all OS services (file system, device drivers, memory management) run together in kernel space, which is fast but means a single faulty component can bring down the whole system. In a microkernel, only minimal services (scheduling, inter-process communication, basic memory management) run in kernel space while the rest run as user-space servers, which improves isolation at the cost of message-passing overhead.

4. Explain the different services provided by an operating system.


:-
5. What is system call? Explain steps for system call execution.

6. What is Process? Draw the Five State Process Model and Explain it.
:-In operating systems, a process represents a program in execution. The Five State Process Model illustrates
the various states a process can be in during its lifecycle. These states include New, Ready, Running, Waiting,
and Terminated.
1. New: The process is being created.

2. Ready: The process is ready to run and waiting for the CPU.

3. Running: The process is currently being executed.

4. Waiting: The process is waiting for a certain event to occur.

5. Terminated: The process has finished execution.

7. Give the Difference between Thread and Process.

:-

8. Write the difference between the user-level thread and the kernel-level thread.

10. Define the following terms:

Context Switching, Dispatcher, Throughput, Waiting Time, Turnaround Time, Response Time, CPU Utilization, Long-term Scheduler, Short-term Scheduler, Medium-term Scheduler.
Context Switching: The process of storing and restoring the state of a CPU so that multiple processes
can share a single CPU.
Dispatcher: A module that gives control of the CPU to the process selected by the short-term
scheduler.

Throughput: The total amount of work done in a given period.

Waiting Time: The total time a process spends waiting in the ready queue.

Turnaround Time: The total time taken to execute a particular process, including waiting time and
execution time.

Response Time: The time taken from the submission of a request until the first response is produced.

Short-term Scheduler: A scheduler that selects which process should be executed next and is also
known as the CPU scheduler.

CPU Utilization: The percentage of time the CPU is busy processing instructions.

Long-term Scheduler: A scheduler that selects processes from the job pool and loads them into
memory for execution.

Medium-term Scheduler: A scheduler that swaps processes in and out of memory to improve the
overall system performance.

11. Define: Mutual Exclusion

Mutual exclusion is a synchronization technique used in concurrent programming to ensure that only one process or thread can access a shared resource at a time. It prevents concurrent access to critical sections of code, thereby avoiding race conditions and maintaining data integrity.
12. Write a note on Critical section.
:-In operating systems, a critical section refers to a segment of code that must be executed atomically. This
means that only one process or thread can execute the critical section at a time to prevent race conditions
and maintain data integrity.

When multiple processes or threads access shared resources concurrently, there is a risk of data corruption or
inconsistency. To address this issue, critical sections are used to ensure mutual exclusion, allowing only one
process to execute the critical code segment while others are blocked.

To implement a critical section, synchronization mechanisms like semaphores, mutexes, or locks are utilized.
These mechanisms help control access to shared resources and coordinate the execution of processes to avoid
conflicts.

For example, in C programming, critical sections can be implemented using mutexes from the pthread library:

#include <pthread.h>

// A statically initialized mutex protecting the critical section.
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

void* critical_section(void* arg) {
    pthread_mutex_lock(&mutex);    // enter: block until the lock is free
    // Critical section code
    pthread_mutex_unlock(&mutex);  // exit: let the next waiting thread in
    return NULL;
}
By properly defining and protecting critical sections, operating systems ensure that concurrent processes can
safely access shared resources without compromising data consistency.

13. Explain race condition with the help of Producer-Consumer problem.


:-In the context of concurrent programming, a race condition occurs when the outcome of a program
depends on the sequence or timing of uncontrollable events. The Producer-Consumer problem is a classic
example that illustrates race conditions in operating systems.

In the Producer-Consumer scenario, multiple threads are involved: producers that add data to a shared
buffer and consumers that remove data from the same buffer. A race condition can arise when multiple
threads access the shared buffer concurrently without proper synchronization mechanisms in place.

For instance, consider a situation where a producer thread checks if the buffer is full before adding an item. If
another producer thread adds an item to the buffer in the meantime, the first producer may mistakenly add
an item to a full buffer, leading to data loss or corruption.
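
As an illustration of how such interleavings lose updates, the sketch below (a hypothetical shared counter standing in for the buffer state, not code from the original problem) usually prints a total below 2000000, because the unsynchronized count++ is a read-modify-write that two threads can interleave:

#include <pthread.h>
#include <stdio.h>

static long count = 0;                   // shared, unprotected state

void* producer(void* arg) {
    for (int i = 0; i < 1000000; i++)
        count++;                         // load, add, store: not atomic
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, producer, NULL);
    pthread_create(&t2, NULL, producer, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("count = %ld (expected 2000000)\n", count);  // usually less
    return 0;
}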

To prevent race conditions in the Producer-Consumer problem, synchronization techniques like mutexes,
semaphores, or monitors can be used. These mechanisms ensure that only one thread can access the shared
buffer at a time, preventing conflicts and maintaining data integrity.
Understanding race conditions in the context of the Producer-Consumer problem highlights the importance of
proper synchronization to avoid data inconsistencies and ensure the correct functioning of concurrent
programs.

14. List and define the three conditions needed to achieve synchronization. (In this you have to explain Mutual Exclusion, Progress, and Bounded Waiting.)
:-In operating systems, achieving synchronization among processes is crucial to ensure proper coordination
and avoid conflicts. Three fundamental conditions must be met to achieve synchronization effectively:

Mutual Exclusion: This condition ensures that only one process at a time can access a shared resource
or critical section. By enforcing mutual exclusion, processes take turns accessing resources, preventing
simultaneous access that could lead to data corruption or inconsistencies. Implementing mutual
exclusion can be done using techniques like locks, semaphores, or mutexes.

Progress: Progress in synchronization means that processes should not be prevented from entering
their critical sections indefinitely. It ensures that if no process is executing in its critical section and
some processes wish to enter their critical sections, only those processes not executing in their
remainder section can participate in deciding which will enter next. This condition prevents deadlock
situations and ensures that processes can make progress towards completing their tasks.

Bounded Waiting: Bounded waiting guarantees that there is a limit on the number of times other
processes can enter their critical sections after a process has made a request to enter its critical
section. It prevents a process from being indefinitely postponed by other processes trying to access
shared resources. By setting a bound on the number of times a process can be bypassed, fairness is
maintained, and every process eventually gets a chance to access critical sections.

15. Discuss Peterson's solution and how it ensures mutual exclusion, progress
and bounded waiting.
:-Peterson's solution is a classic algorithm used to achieve mutual exclusion in concurrent systems. It utilizes
two variables: flag and turn, along with a clever use of busy waiting. The solution ensures mutual exclusion by
allowing only one process to enter the critical section at a time.

To guarantee progress, Peterson's solution uses the turn variable to enforce fairness, ensuring that processes
take turns entering the critical section. This prevents starvation and ensures that all processes eventually
make progress.

Bounded waiting is achieved in Peterson's solution by using the turn variable to limit the number of times a
process can enter the critical section before other processes get a chance. This prevents any process from being
blocked indefinitely, thus bounding the waiting time.
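
A minimal two-process sketch of the entry/exit protocol follows (this is the textbook form; on modern out-of-order processors the accesses to flag and turn would additionally need memory barriers or C11 atomics to be correct):

#include <stdbool.h>

volatile bool flag[2] = {false, false};  // flag[i]: process i wants to enter
volatile int turn = 0;                   // who must yield if both want in

void enter_region(int i) {               // i is 0 or 1
    int other = 1 - i;
    flag[i] = true;                      // announce interest
    turn = other;                        // give the other process priority
    while (flag[other] && turn == other)
        ;                                // busy-wait until safe to enter
}

void leave_region(int i) {
    flag[i] = false;                     // allow the other process in
}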

16. What is semaphore?


:-A semaphore is an integer variable used to solve the critical section problem by means of two atomic operations, wait() and signal(), which are used for process synchronization.
The definitions of wait() and signal() are as follows:
Wait:- The wait operation decrements the value of its argument S if S is positive. If S is zero or negative, the caller keeps waiting (busy-waiting in this definition) until S becomes positive, and only then decrements it.

wait(S)
{
    while (S <= 0)
        ;      // busy-wait until S becomes positive
    S--;
}

Signal:- The signal operation increments the value of its argument S.


signal(S)
{
    S++;
}

17. Explain the types of the semaphore.


:-In operating systems, there are two main types of semaphores: Binary Semaphores and Counting
Semaphores.

Binary Semaphores: Also known as mutex (mutual exclusion) semaphores, these have only two states:
0 and 1. They are used for controlling access to resources, ensuring only one process can access a
resource at a time.

Counting Semaphores: These semaphores can have values greater than 1, allowing multiple processes
to access a resource simultaneously. They are useful for scenarios where multiple instances of a
resource can be accessed concurrently.

18. Describe Down()/ P() and Up()/V() operations for counting and
binary semaphores.
:-In operating systems, semaphores are used for process synchronization. The Down() or P() operation
decrements the semaphore value. For counting semaphores, if the value is greater than zero, it decrements
the value; otherwise, it blocks the process until the value becomes positive.

Conversely, the Up() or V() operation increments the semaphore value. For counting semaphores, if there are
processes waiting due to a semaphore value of zero, it unblocks one of them. For binary semaphores, the value
is either 0 or 1, representing locked or unlocked states, respectively.

These operations are crucial for managing access to shared resources and ensuring mutual exclusion in
concurrent systems. Understanding their behavior is fundamental for designing efficient and deadlock-free
systems.
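
As an illustration, here is a runnable user-space sketch of a counting semaphore built from a POSIX mutex and condition variable (an assumption for demonstration, not how a kernel implements it); initializing value to 1 yields binary-semaphore behavior:

#include <pthread.h>

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  cond;
    int             value;                // >= 0; 0 forces P() to wait
} semaphore_t;

void sem_init_custom(semaphore_t* s, int initial) {
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->cond, NULL);
    s->value = initial;
}

void P(semaphore_t* s) {                  // Down: acquire one unit
    pthread_mutex_lock(&s->lock);
    while (s->value == 0)                 // block while no units available
        pthread_cond_wait(&s->cond, &s->lock);
    s->value--;
    pthread_mutex_unlock(&s->lock);
}

void V(semaphore_t* s) {                  // Up: release one unit
    pthread_mutex_lock(&s->lock);
    s->value++;
    pthread_cond_signal(&s->cond);        // wake one blocked P()
    pthread_mutex_unlock(&s->lock);
}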

19. Explain the Bounded buffer producer-consumer problem using Semaphore.


:-The Bounded Buffer Producer-Consumer Problem is a classic synchronization issue in operating systems
where multiple threads (producers and consumers) share a fixed-size buffer. The challenge lies in ensuring that
producers do not produce data when the buffer is full and consumers do not consume data when the buffer is
empty.
To solve this problem using Semaphores, we can use two Semaphores: empty and full, along with a Mutex
(binary Semaphore) to control access to the buffer.

Here is a high-level overview of how Semaphores can be used to implement the Bounded Buffer Producer-Consumer Problem:

1. Initialize the Semaphores:

   1. empty Semaphore initialized to the buffer size (number of empty slots).

   2. full Semaphore initialized to 0 (number of filled slots).

   3. mutex Semaphore (binary) initialized to 1 to control buffer access.

2. Producer code:

produce_item()
wait(empty)    # Decrement empty slots
wait(mutex)    # Enter critical section
insert_item_into_buffer()
signal(mutex)  # Exit critical section
signal(full)   # Increment filled slots

3. Consumer code:

wait(full)     # Decrement filled slots
wait(mutex)    # Enter critical section
remove_item_from_buffer()
signal(mutex)  # Exit critical section
signal(empty)  # Increment empty slots
consume_item()
By using Semaphores to control the synchronization between producers and consumers, we ensure that
producers wait when the buffer is full and consumers wait when the buffer is empty. The Mutex Semaphore
guarantees exclusive access to the buffer to prevent race conditions.

This approach effectively addresses the Bounded Buffer Producer-Consumer Problem by coordinating the
actions of producers and consumers using Semaphores in an efficient and synchronized manner.
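
For concreteness, here is a runnable POSIX version of the same scheme (the buffer size, item count, and use of printf are illustrative assumptions):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define BUF_SIZE 8                        // assumed buffer capacity
#define ITEMS    32                       // assumed number of items

static int buffer[BUF_SIZE];
static int in = 0, out = 0;               // next write / read slots

static sem_t empty_slots;                 // counts empty slots
static sem_t full_slots;                  // counts filled slots
static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

void* producer(void* arg) {
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&empty_slots);           // wait for an empty slot
        pthread_mutex_lock(&mutex);       // enter critical section
        buffer[in] = i;                   // insert item
        in = (in + 1) % BUF_SIZE;
        pthread_mutex_unlock(&mutex);     // exit critical section
        sem_post(&full_slots);            // one more filled slot
    }
    return NULL;
}

void* consumer(void* arg) {
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&full_slots);            // wait for a filled slot
        pthread_mutex_lock(&mutex);
        int item = buffer[out];           // remove item
        out = (out + 1) % BUF_SIZE;
        pthread_mutex_unlock(&mutex);
        sem_post(&empty_slots);           // one more empty slot
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    sem_init(&empty_slots, 0, BUF_SIZE);  // all slots start empty
    sem_init(&full_slots, 0, 0);          // no slots start filled
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}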

20. Explain the dining philosopher problem. Provide a solution to the


dining philosopher problem using semaphores.
:-The dining philosopher problem is a classic synchronization problem where a group of philosophers sits at a
table with a bowl of rice and chopsticks. The philosophers alternate between thinking and eating. To eat, a
philosopher needs two chopsticks, one on the left and one on the right. If a philosopher picks up one chopstick
but cannot acquire the other, they must put down the first chopstick and wait.

A solution to this problem involves using semaphores to control access to the chopsticks. Each chopstick is
represented by a semaphore. Philosophers can only pick up both chopsticks if both semaphores are available. If
a philosopher cannot acquire both chopsticks, they release the first one and try again later. This approach
ensures that deadlocks and resource contention are avoided, allowing philosophers to dine peacefully.
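
Below is a minimal POSIX sketch of one common deadlock-free variant (an illustrative assumption, slightly different from the put-down-and-retry scheme described above): each philosopher always picks up the lower-numbered chopstick first, so no circular wait can form:

#include <pthread.h>
#include <semaphore.h>

#define N 5
static sem_t chopstick[N];                // one binary semaphore each

void* philosopher(void* arg) {
    int i = *(int*)arg;
    int left = i, right = (i + 1) % N;
    int first  = left < right ? left : right;   // lower-numbered first
    int second = left < right ? right : left;
    for (int round = 0; round < 3; round++) {
        /* think */
        sem_wait(&chopstick[first]);
        sem_wait(&chopstick[second]);
        /* eat */
        sem_post(&chopstick[second]);
        sem_post(&chopstick[first]);
    }
    return NULL;
}

int main(void) {
    pthread_t t[N];
    int id[N];
    for (int i = 0; i < N; i++)
        sem_init(&chopstick[i], 0, 1);    // every chopstick available
    for (int i = 0; i < N; i++) {
        id[i] = i;
        pthread_create(&t[i], NULL, philosopher, &id[i]);
    }
    for (int i = 0; i < N; i++)
        pthread_join(t[i], NULL);
    return 0;
}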

21. Define deadlock in the context of operating systems. Explain the


conditions necessary for deadlock to occur.
:-Deadlock in operating systems refers to a situation where two or more processes are unable to proceed
because each is waiting for the other to release a resource. For deadlock to occur, four conditions must be
present simultaneously:

1. Mutual Exclusion: At least one resource involved must be non-sharable, i.e. held by only one process at a time.

2. Hold and Wait: Processes must hold resources while waiting for others.

3. No Preemption: Resources cannot be forcibly taken from a process.

4. Circular Wait: A circular chain of two or more processes exists, where each process is waiting for a
resource held by the next process in the chain.

22. How can we prevent the deadlock? Explain in detail.


:-Deadlock is a situation in which two or more processes are unable to proceed because each is waiting for the
other to release a resource. To prevent deadlock in operating systems, several strategies can be employed:

Resource Allocation Graph: One common method is to use a resource allocation graph to detect and
prevent deadlocks. By representing processes as nodes and resource allocations as edges, the system
can analyze the graph to identify potential deadlocks and take corrective actions.

Resource Ordering: Another approach is to impose a strict ordering of resource requests. By requiring
processes to request resources in a predefined order, the system can prevent circular waits and
reduce the likelihood of deadlock (a short lock-ordering sketch follows this list).

Timeouts: Introducing timeouts for resource requests can also help prevent deadlock. If a process is
unable to acquire a resource within a specified time frame, it can release its current resources and
try again later, avoiding a potential deadlock situation.

Deadlock Avoidance: Using algorithms like Banker's Algorithm, the system can predict whether
allocating a resource to a process will lead to a safe state or a deadlock. By making informed
decisions about resource allocations, deadlock can be avoided proactively.

Deadlock Detection and Recovery: Implementing deadlock detection algorithms allows the system to
identify when a deadlock has occurred. Once detected, recovery mechanisms such as process
termination, resource preemption, or rollback can be employed to resolve the deadlock and restore
system functionality.
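
To make the resource-ordering strategy concrete, here is a minimal sketch (the lock names are illustrative): deadlock between two threads disappears once both agree to always take lock_a before lock_b:

#include <pthread.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

// Every thread that needs both resources follows the same global
// order (lock_a, then lock_b), so a circular wait can never form.
void use_both_resources(void) {
    pthread_mutex_lock(&lock_a);          // rule: lock_a always first
    pthread_mutex_lock(&lock_b);
    /* ... use both shared resources ... */
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
}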
24. Explain the following allocation algorithms: 1) First-fit 2) Best-fit 3) Worst-fit

:-Allocation algorithms in operating systems play a crucial role in managing memory efficiently. Here's a brief
explanation of the three common allocation algorithms:

First-fit: This algorithm allocates the first available block of memory that is large enough to
accommodate the process. It starts searching from the beginning of the memory and stops at the
first block that fits the process size.

Best-fit: The best-fit algorithm allocates the smallest block of memory that is large enough to hold
the process. It searches the entire memory and selects the block that results in the smallest leftover
fragment.

Worst-fit: In contrast to best-fit, the worst-fit algorithm allocates the largest available block of
memory to the process. It aims to leave the largest fragment for future use.
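
A minimal sketch of first-fit and best-fit over a hypothetical free list (the block sizes are made up for illustration; worst-fit simply flips best-fit's comparison to pick the largest block):

#include <stddef.h>

#define NBLOCKS 5
static size_t free_block[NBLOCKS] = {100, 500, 200, 300, 600};  // sizes in KB

// First-fit: index of the first block large enough, or -1.
int first_fit(size_t request) {
    for (int i = 0; i < NBLOCKS; i++)
        if (free_block[i] >= request)
            return i;                     // stop at the first fit
    return -1;
}

// Best-fit: index of the smallest block that still fits, or -1.
int best_fit(size_t request) {
    int best = -1;
    for (int i = 0; i < NBLOCKS; i++)
        if (free_block[i] >= request &&
            (best == -1 || free_block[i] < free_block[best]))
            best = i;                     // keep the tightest fit so far
    return best;
}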
25. What is Paging? What is Page Table? Explain the conversion of Virtual Address
to Physical Address in Paging with example.
:-Paging is a memory management scheme used in operating systems to store and retrieve data from
secondary storage into main memory. It divides the physical memory into fixed-size blocks called frames and
logical memory into blocks of the same size known as pages. The Page Table is a data structure that maps
virtual pages to physical frames, enabling the translation of virtual addresses to physical addresses.

To convert a Virtual Address to a Physical Address using Paging, the system uses the Page Table. It involves
extracting the page number and offset from the virtual address. The page number is used to index the Page
Table, retrieving the corresponding frame number. Finally, the offset is combined with the frame number to
form the Physical Address.

For example, let's consider a system with a page size of 4KB and a frame size of 4KB. If a virtual address is
12KB, the page number is 3 (12KB / 4KB) and the offset is 0KB. By looking up the Page Table entry for
page 3, if it maps to frame 5, the Physical Address would be 20KB (5 * 4KB).
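
The translation can be written directly with bit operations. The sketch below hard-codes this example's 4KB pages and a made-up page table in which page 3 maps to frame 5:

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE   4096u                 // 4KB pages
#define OFFSET_BITS 12                    // log2(4096)

// Hypothetical page table: index = page number, value = frame number.
static uint32_t page_table[8] = {2, 7, 1, 5, 0, 3, 6, 4};

uint32_t translate(uint32_t va) {
    uint32_t page   = va >> OFFSET_BITS;       // high bits: page number
    uint32_t offset = va & (PAGE_SIZE - 1);    // low bits: unchanged
    uint32_t frame  = page_table[page];        // page table lookup
    return (frame << OFFSET_BITS) | offset;    // frame base + offset
}

int main(void) {
    uint32_t va = 12 * 1024;                   // 12KB -> page 3, offset 0
    printf("VA %u -> PA %u\n", va, translate(va));  // PA 20480 = 20KB
    return 0;
}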

26. What is fragmentation? Explain the difference between internal and external
fragmentation.
:-Fragmentation in operating systems refers to the phenomenon where memory or disk space becomes
divided into smaller blocks over time, leading to inefficiencies in resource utilization. Internal fragmentation
occurs when memory blocks allocated to processes are larger than required, resulting in wasted space within
those blocks. On the other hand, external fragmentation transpires when there are enough total memory
spaces available to satisfy a request, but they are not contiguous, causing inefficient memory allocation.

In essence, internal fragmentation is the wastage within allocated memory blocks, while external
fragmentation is the scattering of free memory space throughout the system, making it challenging to allocate
contiguous blocks of memory to processes efficiently. Understanding and managing fragmentation is crucial
for optimizing system performance and resource allocation.

27. What is demand paging? Explain the steps of Demand paging with a diagram.
:-Demand paging is a memory management scheme used in operating systems where pages are only brought
into memory when they are demanded by the program during execution. This approach helps in reducing the
amount of physical memory needed to run programs efficiently.

The steps involved in demand paging are as follows:

1. Page Fault: When a program accesses a page that is not in memory, a page fault occurs.

2. Find Free Frame: The operating system looks for a free frame in memory to load the demanded
page.

3. Swap Page In: If the demanded page is not in memory, the operating system swaps it in from the
disk to a free frame.

4. Update Page Table: The page table is updated to reflect the new location of the page in memory.
5. Resume Process: The process that caused the page fault is resumed, and the requested page is now
available in memory.

Here is a simple diagram illustrating the demand paging process:

[Program] -> [Page Fault] -> [Find Free Frame] -> [Swap Page In] -> [Update Page Table] ->
[Resume Process]

28. Consider a system where the main memory access time is 100ns and the TLB
access time is 20ns. The hit ratio is 95%. What is an effective memory access
time with and without TLB?
:-The effective memory access time can be calculated using the formula:

Effective Memory Access Time = TLB Hit Ratio * (TLB Access Time + Main Memory Access Time) + (1 - TLB
Hit Ratio) * Main Memory Access Time

Given:

• Main Memory Access Time = 100ns

• TLB Access Time = 20ns

• TLB Hit Ratio = 95% = 0.95

With TLB:
Effective Memory Access Time = 0.95 * (20 + 100) + 0.05 * 100 = 114 + 5 = 119ns

Without TLB:
Effective Memory Access Time = 1 * 100 = 100ns

Therefore, with this formula the effective memory access time with TLB is 119ns, while without TLB it is 100ns. (Note that many texts also charge the page-table walk as an extra memory access on a TLB miss, giving 0.95 * 120 + 0.05 * (20 + 200) = 125ns.)

29. Consider a system where Main Memory access time is 30 ns. Page fault service time is 300 ns. Page hit ratio is 85%. What is the Effective Memory Access Time (EMAT)?
:-To calculate the Effective Memory Access Time (EMAT), we can use the formula:

EMAT = (1 - Page Hit Ratio) * (Page Fault Service Time + Main Memory Access Time) + Page Hit Ratio * Main Memory Access Time

Given:

• Main Memory Access Time = 30 ns

• Page Fault Service Time = 300 ns


• Page Hit Ratio = 85%

Substituting the values into the formula:

EMAT = (1 - 0.85) * (300 + 30) + 0.85 * 30
     = 0.15 * 330 + 0.85 * 30
     = 49.5 + 25.5
     = 75 ns

Therefore, the Effective Memory Access Time (EMAT) is 75 nanoseconds.

30. Describe various file organization techniques.


:-File organization techniques are crucial in managing data efficiently within an operating system. Some
common techniques include:

1. Sequential File Organization: Data is stored in a sequential order, suitable for batch processing.

2. Indexed File Organization: Indexes are used to access records quickly, enhancing search performance.

3. Hashed File Organization: Hash functions map keys to addresses, enabling direct access to data.

4. Clustered File Organization: Data with similar characteristics is stored together, reducing seek time.

5. Distributed File Organization: Data is spread across multiple locations, enhancing parallel access.

Each technique has its strengths and weaknesses, making it essential to choose the most suitable one based on
the specific requirements of the system.

31. Explain the concept of Direct Memory Access (DMA) and its significance in I/O.
:-Direct Memory Access (DMA) is a technique used in computer systems to allow certain hardware subsystems
to access system memory independently of the CPU. In Input/Output (I/O) operations, DMA plays a crucial
role in enhancing system performance by offloading data transfer tasks from the CPU to dedicated DMA
controllers.

When a device needs to transfer data to or from memory, traditionally, the CPU would be involved in
managing these data transfers. However, this process can be inefficient and time-consuming, especially for
large data transfers. Here is where DMA comes into play.

DMA enables devices like network cards, sound cards, or storage controllers to directly transfer data to and
from memory without CPU intervention. This significantly reduces the burden on the CPU, allowing it to focus
on other tasks while data transfers occur in the background.

The significance of DMA in I/O operations includes:


Improved Performance: By bypassing the CPU for data transfer tasks, DMA can significantly speed
up I/O operations, leading to faster data transfers and overall system performance.

Reduced CPU Overhead: Offloading data transfer tasks to DMA controllers frees up the CPU to
handle more critical tasks, improving system efficiency and multitasking capabilities.

Efficient Data Transfers: DMA allows for efficient block data transfers, reducing latency and
improving the overall responsiveness of the system.

Enhanced Throughput: With DMA, multiple devices can perform data transfers simultaneously,
increasing overall system throughput and reducing bottlenecks.

In conclusion, DMA is a vital mechanism in modern computer systems that optimizes data transfer processes,
enhances system performance, and improves the efficiency of I/O operations by reducing CPU involvement in
data transfers.

32. Discuss the structure of a disk and its components.

:-Answer #1: Disk Structure Overview


A disk in an operating system consists of several key components. At its core, a disk comprises platters coated
with a magnetic material where data is stored. These platters spin at high speeds, and data is read from or
written to them using read/write heads. The disk is divided into tracks, sectors, and clusters to organize data
efficiently. Tracks are concentric circles on a platter, sectors are pie-shaped divisions of tracks, and clusters
are groups of sectors. Understanding this hierarchical structure is crucial for efficient data storage and
retrieval.

Answer #2: Components of a Disk


When we talk about the components of a disk in an operating system, we must consider the physical and
logical elements that make up the storage device. The physical components include the platters, read/write
heads, spindle, and motor. These components work together to store and retrieve data. On the other hand,
the logical components involve how data is organized on the disk. This includes the file system, which manages
how data is stored, accessed, and manipulated. By understanding both the physical and logical components,
we can comprehend how data is managed on a disk within an operating system.

33. Suppose a disk drive has 300 cylinders (numbered 0-299). The current position of the head is 90. The queue of pending requests is 36, 79, 15, 120, 199, 270, 89, 170. Calculate the head movement for the following algorithms.
1. FCFS 2. SSTF 3. SCAN 4. C-SCAN

:-Answer #1: FCFS (First-Come, First-Served)


For FCFS, the head moves based on the order of the requests in the queue. Calculating the head movements:

• Initial head position: 90

• Request queue: 36, 79, 15, 120, 199, 270, 89, 170

Head movements:

1. Move from 90 to 36: 54

2. Move from 36 to 79: 43

3. Move from 79 to 15: 64

4. Move from 15 to 120: 105

5. Move from 120 to 199: 79

6. Move from 199 to 270: 71

7. Move from 270 to 89: 181

8. Move from 89 to 170: 81

Total head movement for FCFS: 54 + 43 + 64 + 105 + 79 + 71 + 181 + 81 = 678

Answer #2: SSTF (Shortest Seek Time First)

SSTF always services the pending request with the shortest seek time from the current head position. Calculating the head movements:

• Initial head position: 90

• Request queue: 36, 79, 15, 120, 199, 270, 89, 170

Head movements (choosing the nearest pending request at each step):

1. Move from 90 to 89: 1

2. Move from 89 to 79: 10

3. Move from 79 to 120: 41 (120 is 41 away, nearer than 36 at 43)

4. Move from 120 to 170: 50

5. Move from 170 to 199: 29

6. Move from 199 to 270: 71

7. Move from 270 to 36: 234

8. Move from 36 to 15: 21

Total head movement for SSTF: 1 + 10 + 41 + 50 + 29 + 71 + 234 + 21 = 457

Answer #3: SCAN and C-SCAN

For SCAN, the head moves in one direction servicing requests until it reaches the end of the disk, then reverses. For C-SCAN, on reaching the end the head jumps back to the opposite end and continues servicing in the same direction. Assuming cylinders 0-299 and the head initially moving toward cylinder 0:

• Initial head position: 90

• Request queue: 36, 79, 15, 120, 199, 270, 89, 170

SCAN (Elevator) Algorithm:

1. Move left from 90 to 0, servicing 89, 79, 36, and 15 on the way: 90

2. Reverse and move right from 0 to 270, servicing 120, 170, 199, and 270: 270

Total head movement for SCAN: 90 + 270 = 360

(The LOOK variant reverses at the last request instead of the disk edge: (90 - 15) + (270 - 15) = 75 + 255 = 330.)

C-SCAN (Circular SCAN) Algorithm:

1. Move left from 90 to 0, servicing 89, 79, 36, and 15: 90

2. Jump from cylinder 0 to the far end (cylinder 299): 299

3. Move left from 299 to 120, servicing 270, 199, 170, and 120: 179

Total head movement for C-SCAN: 90 + 299 + 179 = 568 (or 269 under the convention that the return jump is not counted as head movement)
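
As a sanity check, a small C program can replay the FCFS schedule (the request list is hard-coded from this question); running the same loop on the SSTF order above reproduces 457:

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int head = 90;
    int req[] = {36, 79, 15, 120, 199, 270, 89, 170};  // FCFS order
    int n = sizeof req / sizeof req[0];
    int total = 0;
    for (int i = 0; i < n; i++) {
        total += abs(req[i] - head);   // seek distance for this request
        head = req[i];                 // head is now at the request
    }
    printf("FCFS total head movement: %d\n", total);   // prints 678
    return 0;
}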
