
Prince Carl S. Ajoc, BSCS 2A1

Applied Operating System


Module No. 1
Assessment:
Instructions: Choose the best answer. Write your answers on a 1/8 sheet of paper and submit it during the face-to-face schedule.

1. b. False
2. b. Batch systems with spooling
3. c. Multi-tasking system
4. d. Network operating system

Self-Evaluation
Answer the following:
1. What are the major issues considered when the internal structure of an operating system is designed?
Answer: When designing the internal structure of an operating system, major issues considered
include resource management, concurrency control, security, abstraction, performance, scalability,
fault tolerance, and extensibility.

2. What is the worst-case scenario of the monolithic structure of an operating system?
Answer: The worst-case scenario of the monolithic structure of an operating system involves a single point of failure, a lack of modularity that makes maintenance difficult, and limited scalability due to the tight coupling between components.
3. Discuss the different layers of the layered operating system structure.
Answer: A layered operating system structure consists of the hardware layer at the bottom, the kernel layer providing core services, the system-call interface layer enabling user-level requests, the user interface layer for interaction, and the application layer hosting user applications.
4. What is a microkernel?
Answer: A microkernel is a minimalist kernel design in which only the essential core functionalities are implemented inside the kernel, while additional services traditionally included in monolithic kernels run as user-space processes or servers outside the kernel's privileged mode.
5. List at least four services of the operating system.
Answer: Services of the operating system encompass process management, memory management,
file system management, and device management, providing functions such as process scheduling,
memory allocation, file operations, and device control.

Module No. 2
Assessment 1:
Instructions: Answer the following. Write your answers on a ½ sheet of paper and submit it during the face-to-face schedule.
1. What are the different possible states that a process can be in?
Answer:
• New: The process is being created.
• Ready: The process is waiting to be assigned to a processor.
• Running: The process is being executed.
• Waiting/Blocked/Sleeping: The process is waiting for a particular event (such as I/O completion) to occur.
• Terminated: The process has finished execution.
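As a quick illustration, the states above can be captured in a small C enumeration. This is a minimal sketch with illustrative names, not an actual kernel definition.

/* Minimal sketch of the five process states listed above (names are illustrative). */
typedef enum {
    STATE_NEW,         /* the process is being created                  */
    STATE_READY,       /* waiting in the ready queue for a processor    */
    STATE_RUNNING,     /* instructions currently executing on the CPU   */
    STATE_WAITING,     /* blocked on an event such as I/O completion    */
    STATE_TERMINATED   /* the process has finished execution            */
} process_state;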
2. When does a process move from running to waiting/blocked/sleeping state?
Answer: A process moves from running to waiting/blocked/sleeping when it needs to wait for an external event, such as the completion of an input/output operation or a resource allocation.
3. Why is it not possible to move a process from ready to waiting state?
Answer: Processes in the ready state are already prepared to execute and are only awaiting CPU time. A process blocks when, while running, it requests an event or resource that is not yet available; because a ready process is not executing, it cannot issue such a request, so it cannot move directly to the waiting state.
4. What is the reason for a process to stay in its ready state?
Answer: It is waiting for the CPU to become available for execution. When the CPU becomes
available, the process can be scheduled for execution and moved to the running state.

Assessment 2:
Instructions: Answer the following. Write your answers on a ½ sheet of paper and submit it during the face-to-face schedule.
1. What is the importance of using threads?
Answer:
• Threads allow for concurrent execution within a process, enabling multitasking and improving responsiveness in applications.
• They facilitate parallelism, making better use of multiple CPU cores and enhancing overall system performance.
• Threads enable efficient resource utilization by sharing memory space, reducing overhead compared to processes.
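A minimal sketch in C with POSIX threads (compile with -pthread) shows the idea: two threads share one address space and each sums half of a small array concurrently. The names partial_sum, data, and sums are illustrative.

#include <pthread.h>
#include <stdio.h>

/* Both threads work on different halves of the same array; because they
 * share the process's address space, no data is copied between them. */
int data[8] = {1, 2, 3, 4, 5, 6, 7, 8};
long sums[2];

void *partial_sum(void *arg) {
    long id = (long)arg;                        /* 0 or 1 */
    for (int i = (int)id * 4; i < (int)id * 4 + 4; i++)
        sums[id] += data[i];
    return NULL;
}

int main(void) {
    pthread_t t[2];
    for (long i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, partial_sum, (void *)i);
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);
    printf("total = %ld\n", sums[0] + sums[1]);  /* prints 36 */
    return 0;
}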
2. What are the main differences between user space and kernel threads?
Answer:
• User-space threads: Managed entirely by user-level libraries or applications without kernel intervention. Context switching occurs within user space, making them lightweight. However, they cannot take advantage of multiprocessor architectures efficiently.
• Kernel threads: Managed by the operating system kernel. They have kernel-level support for scheduling and synchronization, making them more robust but potentially less efficient due to higher overhead.
3. What are the disadvantages of kernel level thread implementation?
Answer:
• Kernel-level threads have higher overhead due to frequent context switches between user and kernel modes.
• Synchronization mechanisms like mutexes or semaphores may involve costly system calls.
• Scaling may become an issue with a large number of kernel threads due to increased kernel memory and scheduling overhead.
4. Which problems of user space thread implementation are solved by using the scheduler activation
thread implementation method?

Answer:
• Blocking system calls: In traditional user-space thread models, a blocking system call can block the entire process. Scheduler activations let the kernel notify the user-level scheduler when a thread blocks, enabling more efficient handling of blocking operations.
• Performance: By involving the kernel in thread-scheduling decisions, scheduler activations can improve overall system performance, especially in scenarios involving I/O-bound or blocking operations.

Assessment 3:

Instructions: Answer the following. Write your answers on a ½ sheet of paper and submit it during the face-to-face schedule.
1. Discuss the problems of concurrency.
Answer:
• Race conditions: Occur when the outcome of a program depends on the sequence or timing of uncontrollable events (a short code sketch follows this list).
• Deadlocks: Situations where two or more processes are unable to proceed because each is waiting for the other to release a resource.
• Starvation: A process is prevented from making progress because it cannot acquire required resources, often due to unfair resource allocation.
• Priority inversion: Lower-priority tasks holding resources needed by higher-priority tasks, causing delays in critical operations.
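To make the race-condition item concrete, here is a minimal C/pthreads sketch (illustrative; compile with -pthread): two threads increment a shared counter without any synchronization, so their updates can interleave and increments can be lost.

#include <pthread.h>
#include <stdio.h>

long counter = 0;                       /* shared variable, no lock protecting it */

void *worker(void *arg) {
    for (int i = 0; i < 1000000; i++)
        counter++;                      /* read-modify-write that is not atomic */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Expected 2000000, but unsynchronized interleaving can lose increments. */
    printf("counter = %ld\n", counter);
    return 0;
}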
2. What kind of solution does the test-and-set lock (TSL) provide?
Answer:
• The test-and-set lock provides a solution for implementing mutual exclusion in concurrent systems.
• It guarantees that only one thread can execute a critical section of code at a time: the instruction atomically reads a memory location and sets it to a new value in a single indivisible step, and the value read back tells the caller whether the lock was already held. It is typically used as a building block for synchronization primitives such as spin locks.
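A minimal user-level sketch of a test-and-set spin lock using C11 atomics; atomic_exchange stands in for the hardware TSL instruction, and the names lock_word, acquire, and release are illustrative.

#include <stdatomic.h>
#include <stdbool.h>

atomic_bool lock_word = false;          /* false = free, true = held */

/* Spin until the atomic exchange observes the lock free. atomic_exchange
 * writes true and returns the previous value in one indivisible step,
 * which is what the TSL instruction provides in hardware. */
void acquire(void) {
    while (atomic_exchange(&lock_word, true))
        ;                               /* busy-wait: another thread holds the lock */
}

void release(void) {
    atomic_store(&lock_word, false);    /* mark the lock free again */
}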
3. What is the difference between semaphores and monitors?
Answer:
• Semaphores: A low-level synchronization mechanism that controls access to a shared resource using two operations, wait (P) and signal (V). It can be used for signaling between processes and for ensuring mutual exclusion.
• Monitors: A higher-level synchronization construct that encapsulates shared data and the operations on that data within a single module. It provides more structured and safer synchronization by allowing only one process to execute the monitor procedures at a time.
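A minimal sketch of the semaphore side using POSIX semaphores in C (illustrative; compile with -pthread): a binary semaphore initialized to 1 enforces mutual exclusion around a shared variable, with sem_wait playing the role of P and sem_post the role of V.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t mutex;                            /* binary semaphore guarding the critical section */
int shared = 0;

void *worker(void *arg) {
    sem_wait(&mutex);                   /* P: block until the semaphore is available */
    shared++;                           /* critical section */
    sem_post(&mutex);                   /* V: release so another thread may enter */
    return NULL;
}

int main(void) {
    pthread_t t[4];
    sem_init(&mutex, 0, 1);             /* initial value 1 gives mutual exclusion */
    for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
    printf("shared = %d\n", shared);    /* always 4 */
    sem_destroy(&mutex);
    return 0;
}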
4. How does deadlock happen in a system?

Answer:
• Deadlock occurs when two or more processes are waiting indefinitely for resources held by each other, resulting in a circular-wait condition.
• The four necessary conditions for deadlock are mutual exclusion, hold and wait, no preemption, and circular wait.
• Deadlocks can arise in systems where resources are not properly managed and allocated, leading to a situation in which none of the processes can proceed.

Assessment 4:

Instructions: Answer the following. Write your answers on a ½ sheet of paper and submit it during the face-to-face schedule.
1. Define the differences between pre-emptive and non-pre-emptive scheduling.
Answer:
• Pre-emptive scheduling: In pre-emptive scheduling, the operating system can interrupt a process currently executing on the CPU and allocate the CPU to another process. This interruption can occur when higher-priority processes become ready to execute or when the current process exceeds its time quantum.
• Non-pre-emptive scheduling: In non-pre-emptive scheduling, once a process starts executing on the CPU, it continues until it completes its execution or voluntarily yields the CPU. The operating system does not interrupt the process during its execution.
2. Explain the differences in the degree to which the following scheduling algorithms discriminate in
favor of short processes: FCFS, RR.
Answer:
• FCFS (First-Come, First-Served): FCFS does not discriminate in favor of short processes. It schedules processes in the order in which they arrive in the ready queue, so short processes may experience long waiting times if longer processes arrive first.
• RR (Round-Robin): RR, a pre-emptive variant of FCFS, also does not favor short processes in the initial scheduling order. However, it gives each process a small unit of CPU time called a time quantum: short processes can complete within one or two quanta, while long processes need many quanta and keep returning to the back of the queue. As a result, RR discriminates in favor of short processes considerably more than FCFS does.
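A small C sketch makes the difference concrete. It assumes three hypothetical jobs that all arrive at time 0, with burst times of 24, 3, and 3 ms and a 4 ms quantum; under those assumptions FCFS gives an average waiting time of 17 ms, while RR gives about 5.67 ms because the two short jobs finish within their first quantum.

#include <stdio.h>

#define N 3

int main(void) {
    /* Hypothetical jobs, all arriving at time 0 (illustrative values). */
    int burst[N] = {24, 3, 3};          /* CPU burst times in ms */
    int quantum  = 4;                   /* RR time quantum in ms */

    /* FCFS: each job waits for the full bursts of every job ahead of it. */
    int fcfs_wait = 0, elapsed = 0;
    for (int i = 0; i < N; i++) {
        fcfs_wait += elapsed;
        elapsed   += burst[i];
    }

    /* RR: give each unfinished job at most one quantum per pass. */
    int remaining[N], finish[N], time = 0, done = 0;
    for (int i = 0; i < N; i++) remaining[i] = burst[i];
    while (done < N) {
        for (int i = 0; i < N; i++) {
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            time         += slice;
            remaining[i] -= slice;
            if (remaining[i] == 0) { finish[i] = time; done++; }
        }
    }

    /* With arrival time 0, waiting time = finish time - burst time. */
    int rr_wait = 0;
    for (int i = 0; i < N; i++) rr_wait += finish[i] - burst[i];

    printf("FCFS average waiting time: %.2f ms\n", (double)fcfs_wait / N);
    printf("RR   average waiting time: %.2f ms\n", (double)rr_wait / N);
    return 0;
}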

Assessment 5:

Instructions: Answer the following. Write your answers on a ½ sheet of paper and submit it during the face-to-face schedule.
1. Give a practical example of how deadlock occurs.
Answer: Imagine a scenario where two processes, Process A and Process B, both need access to
two resources, Resource 1 and Resource 2, to complete their tasks. Process A acquires Resource 1
and then attempts to acquire Resource 2. At the same time, Process B has acquired Resource 2 and
is trying to acquire Resource 1. Both processes are now waiting for a resource held by the other,
resulting in a deadlock situation where neither can proceed.
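A minimal C/pthreads sketch of this scenario (illustrative; compile with -pthread): two mutexes stand in for Resource 1 and Resource 2, the functions process_a and process_b stand in for the two processes, and the sleep calls make the interleaving that produces the circular wait very likely.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

pthread_mutex_t resource1 = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t resource2 = PTHREAD_MUTEX_INITIALIZER;

void *process_a(void *arg) {
    pthread_mutex_lock(&resource1);     /* A acquires Resource 1              */
    sleep(1);                           /* give B time to acquire Resource 2  */
    pthread_mutex_lock(&resource2);     /* A blocks here: B holds Resource 2  */
    pthread_mutex_unlock(&resource2);
    pthread_mutex_unlock(&resource1);
    return NULL;
}

void *process_b(void *arg) {
    pthread_mutex_lock(&resource2);     /* B acquires Resource 2              */
    sleep(1);
    pthread_mutex_lock(&resource1);     /* B blocks here: A holds Resource 1  */
    pthread_mutex_unlock(&resource1);
    pthread_mutex_unlock(&resource2);
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, process_a, NULL);
    pthread_create(&b, NULL, process_b, NULL);
    pthread_join(a, NULL);              /* never returns: circular wait */
    pthread_join(b, NULL);
    puts("unreachable once the deadlock occurs");
    return 0;
}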
2. Demonstrate the example that you provided in (1) using a resource allocation graph.
Answer:
• In a resource allocation graph, nodes represent processes and resources, while edges represent resource requests and allocations.
• In the given example, there are nodes for Process A, Process B, Resource 1, and Resource 2.
• Process A has a request edge to Resource 2 and Process B has a request edge to Resource 1.
• Resource 1 has an allocation edge to Process A and Resource 2 has an allocation edge to Process B.
• This configuration results in a cycle (Process A → Resource 2 → Process B → Resource 1 → Process A) in the resource allocation graph, indicating a deadlock.
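A small C sketch (illustrative) encodes the same graph as an adjacency matrix and uses a depth-first search to confirm that the request and allocation edges form a cycle; the node names PA, PB, R1, and R2 are shorthand for the processes and resources above.

#include <stdio.h>

/* Nodes of the resource allocation graph from question (1):
 * PA = Process A, PB = Process B, R1 = Resource 1, R2 = Resource 2. */
enum { PA, PB, R1, R2, NODES };

/* edge[u][v] = 1 means there is a directed edge u -> v.
 * Request edges go process -> resource; allocation edges go resource -> process. */
int edge[NODES][NODES];
int visited[NODES], on_stack[NODES];

/* Depth-first search; returns 1 if a cycle is reachable from node u. */
int has_cycle(int u) {
    visited[u] = on_stack[u] = 1;
    for (int v = 0; v < NODES; v++) {
        if (!edge[u][v]) continue;
        if (on_stack[v]) return 1;                  /* back edge: cycle found */
        if (!visited[v] && has_cycle(v)) return 1;
    }
    on_stack[u] = 0;
    return 0;
}

int main(void) {
    edge[PA][R2] = 1;   /* Process A requests Resource 2        */
    edge[R1][PA] = 1;   /* Resource 1 is allocated to Process A */
    edge[PB][R1] = 1;   /* Process B requests Resource 1        */
    edge[R2][PB] = 1;   /* Resource 2 is allocated to Process B */

    printf(has_cycle(PA) ? "Cycle found: deadlock\n" : "No cycle\n");
    return 0;
}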
3. What are the requirements for deadlock to occur?
Answer:
• Mutual exclusion: At least one resource must be held in a non-sharable mode, meaning only one process can use it at a time.
• Hold and wait: Processes currently holding resources may request additional resources while still holding the original ones.
• No preemption: Resources cannot be forcibly taken away from a process; they must be released voluntarily.
• Circular wait: A set of waiting processes must exist such that each process is waiting for a resource held by another process in the set, creating a circular chain of dependencies.

Self-Evaluation
Answer the following:
1. What is deadlock? How can it occur?
Answer: Deadlock is a situation in which two or more processes are unable to proceed because each is waiting for the other to release a resource. It occurs when four conditions hold simultaneously: mutual exclusion, hold and wait, no preemption, and circular wait.
• Example of how deadlock can occur: Imagine two processes, A and B, each holding one resource and waiting for the resource held by the other. If neither process releases its held resource, a deadlock arises.
2. Give an example of an interrupt request. Explain when it occurs.
Answer: Consider a scenario where a process is performing a long calculation, and a user presses a key on the keyboard to interrupt the calculation and provide input.
• When it occurs: Interrupt requests occur when external events or hardware signals need attention from the CPU. In the example given, the keyboard generates an interrupt request when a key is pressed, causing the CPU to temporarily suspend the current process, handle the interrupt, and execute an interrupt service routine to process the keyboard input.
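As a user-space analogy (not the hardware mechanism itself), a POSIX signal handler in C behaves much like an interrupt service routine: the running code is suspended, the handler runs, and execution then resumes. In this sketch, pressing Ctrl+C on the keyboard delivers SIGINT, which plays the role of the keyboard interrupt in the example; the names on_interrupt and interrupted are illustrative.

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

volatile sig_atomic_t interrupted = 0;

/* Plays the role of the interrupt service routine: record the event and return. */
void on_interrupt(int sig) {
    (void)sig;
    interrupted = 1;
}

int main(void) {
    signal(SIGINT, on_interrupt);       /* Ctrl+C from the keyboard raises SIGINT */
    while (!interrupted)
        pause();                        /* "long calculation" waiting to be interrupted */
    printf("calculation interrupted by keyboard input\n");
    return 0;
}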
