Assignment No. 1


Department of Computer Engineering

Assignment No. 1 (Solution)
1) Differentiate between a monolithic kernel and a microkernel.
| Basis for Comparison | Microkernel | Monolithic Kernel |
|---|---|---|
| Basic | User services and kernel services are kept in separate address spaces. | Both user services and kernel services are kept in the same address space. |
| Size | The microkernel is smaller in size. | The monolithic kernel is larger than the microkernel. |
| Execution | Slow execution. | Fast execution. |
| Extensibility | The microkernel is easily extensible. | The monolithic kernel is hard to extend. |
| Security | If a service crashes, it does not affect the working of the microkernel. | If a service crashes, the whole system crashes in a monolithic kernel. |
| Code | More code is required to write a microkernel. | Less code is required to write a monolithic kernel. |
| Examples | QNX, Symbian, L4Linux, Singularity, K42, Mac OS X, Integrity, PikeOS, HURD, Minix, and Coyotos. | Linux, BSDs (FreeBSD, OpenBSD, NetBSD), Microsoft Windows (95, 98, Me), Solaris, OS-9, AIX, HP-UX, DOS, OpenVMS, XTS-400, etc. |

2) What is an operating System? Explain its basic functions.


An operating system is an interface between the user of a computer and the computer hardware. It controls and
coordinates the use of the hardware among the various application programs for the various users.
Its basic functions are:
1. The purpose of an operating system is to provide an environment in which the user may execute
programs.
2. The primary goal of an OS is to make the computer system convenient to use.
3. The secondary goal is to use the computer hardware in an efficient manner.
3) What are system calls? Explain different types of system calls with examples.
System calls provide an interface between a process and the operating system. There are five major types
of system calls:
1. Process control:- A running process can control its own execution by invoking system calls (OS calls), for
example to create, terminate, or wait for processes.
2. File management:- These system calls are used to create a file, open and close it, read from and write to it,
and reposition the current location within the file.
3. Device management:- A process may need several resources to execute. If the resources are available, they
are granted and control is returned to the user process; otherwise the process has to wait until the resources
become available. These system calls are used for such operations.
4. Information maintenance:- These system calls exist simply to transfer information between the user program
and the OS. They return data such as the date and time, system data such as the number of users, the version
of the OS, etc.
5. Communication:- These system calls are used to communicate between two processes or jobs in the OS.
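As a brief illustration, the sketch below (assuming a POSIX/Linux system) touches several of these categories: fork() and wait() for process control, open()/write()/close() for file management, and getpid() for information maintenance.

```c
/* Minimal sketch of a few POSIX system calls (assumes a POSIX/Linux system). */
#include <stdio.h>
#include <unistd.h>    /* fork, getpid, write, close, _exit */
#include <fcntl.h>     /* open */
#include <sys/wait.h>  /* wait */

int main(void) {
    pid_t pid = fork();                        /* process control: create a child   */
    if (pid == 0) {
        printf("child pid = %d\n", getpid());  /* information maintenance           */
        _exit(0);                              /* process control: terminate child  */
    }
    wait(NULL);                                /* process control: wait for child   */

    /* file management: create a file, write to it, close it */
    int fd = open("demo.txt", O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }
    write(fd, "hello\n", 6);
    close(fd);
    return 0;
}
```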

4) Explain the following operating system structures with examples.


i) Layered:- The layered structure is a type of system structure in which the different services of the operating
system are split into various layers, where each layer has a specific, well-defined task to perform. It was created to
improve on pre-existing structures such as the monolithic structure (UNIX) and the simple structure (MS-DOS).
Example: The Windows NT operating system uses this layered approach as a part of its design.
ii) Monolithic:- The monolithic operating system is a very simple operating system in which the kernel directly
controls device management, memory management, file management, and process management. All of the
system's resources are accessible to the kernel, and every part of the operating system is contained within the
kernel. Monolithic architecture-based operating systems were first introduced in the 1970s; the monolithic kernel
is another name for this design. Early monolithic systems were used for tasks such as batch processing and
time-sharing, for example in banks. EXAMPLES:- Linux, FreeBSD, OpenBSD, NetBSD, Microsoft Windows
(95, 98, Me), Solaris, HP-UX, DOS, OpenVMS, XTS-400, etc.
iii) Microkernel:- A microkernel is a type of operating system kernel that is designed to provide
only the most basic services required for an operating system to function, such as memory management
and process scheduling. Other services, such as device drivers and file systems, are implemented as
user-level processes that communicate with the microkernel via message passing. This design allows
the operating system to be more modular and flexible than traditional monolithic kernels, which
implement all operating system services in kernel space. The main advantage of a microkernel
architecture is that it provides a more secure and stable operating system. EXAMPLES:- QNX, Symbian,
L4Linux, Singularity, K42, Mac OS X, Integrity, PikeOS, HURD, Minix, and Coyotos.
Module 2 (CO2)

5)

- Round Robin Algorithm:

| P1 | P2 | P3 | P1 | P2 | P1 | P2 | P1 | P1 | P1 | P1 |
|----|----|----|----|----|----|----|----|----|----|----|
| 0 | 2 | 4 | 6 | 8 | 10 | 12 | 14 | 16 | 18 | 20 |

- Average Waiting Time = (0 + 0 + 0) / 3 = 0


- Average Turnaround Time = (10 + 5 + 2) / 3 = 17 / 3 ≈ 5.67
- FCFS

| P1 | P2 | P3 | P1 |
|----|----|----|----|
| 0 | 10 | 15 | 17 | 19 |
- Average Waiting Time = (0 + 9 + 13) / 3 = 22 / 3 ≈ 7.33
- Average Turnaround Time = (10 + 14 + 15) / 3 = 39 / 3 = 13

- SJF
| P3 | P2 | P1 |
|----|----|----|
| 0 | 2 | 6 | 11 | 19 |
- Average Waiting Time = (0 + 3 + 9) / 3 = 12 / 3 = 4
- Average Turnaround Time = (2 + 8 + 19) / 3 = 29 / 3 ≈ 9.67

6) Draw and describe PCB.


Each process is represented in the OS by a Process Control Block (PCB), also called a Task Control Block. A PCB
contains many pieces of information associated with a specific process, including the following:
1.) Pointer – Points to the next PCB in the ready queue.
2.) Process state – Gives information about the state of the process. It may be new, ready, running, waiting, etc.
3.) Program counter – Indicates the address of the next instruction to be executed for the process.
4.) Process ID/No. – The identification number given by the system to the process.
5.) CPU registers – The registers vary depending upon the architecture. Along with the program counter, this
information must be saved when an interrupt occurs so that the process can be continued correctly afterwards.
6.) CPU scheduling information – This information includes priorities, pointers to scheduling queues, and any
other scheduling parameters.
7.) Memory management information – Consists of information such as the values of the base and limit registers,
page tables, or segment tables, depending on the memory management scheme used by the OS.
8.) Accounting information – Includes the amount of CPU time used.
9.) I/O status information – Includes the list of I/O devices allocated to the process, the list of open files, and so
on.
A PCB simply serves as the repository for any information that may vary from process to process.
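As a rough picture, a simplified PCB could be written as a C structure; the field names and sizes below are illustrative only and are not taken from any particular kernel.

```c
/* Illustrative, simplified Process Control Block (field names are hypothetical). */
#include <stdint.h>

typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

typedef struct pcb {
    struct pcb   *next;            /* pointer to the next PCB in the ready queue   */
    proc_state_t  state;           /* process state (new, ready, running, ...)     */
    uint64_t      program_counter; /* address of the next instruction to execute   */
    int           pid;             /* process identification number                */
    uint64_t      registers[16];   /* saved CPU registers (architecture-dependent) */
    int           priority;        /* CPU-scheduling information                   */
    uint64_t      base, limit;     /* memory-management information                */
    uint64_t      cpu_time_used;   /* accounting information                       */
    int           open_files[16];  /* I/O status: descriptors of open files        */
} pcb_t;
```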

7) Discuss various CPU scheduling criteria.


CPU scheduling is the method of deciding which process or task the CPU will run at any given moment. CPU
scheduling algorithms are evaluated against several criteria, some of which are mentioned below; a small worked
calculation of the timing formulas follows the list.
1.) CPU utilization
The main objective of any CPU scheduling algorithm is to keep the CPU as busy as possible.
Theoretically, CPU utilization can range from 0 to 100 percent, but in a real system it varies from 40 to 90
percent depending on the load upon the system.
2.) Throughput
A measure of the work done by the CPU is the number of processes being executed and completed per
unit of time. This is called throughput.
3.) Turnaround Time
For a particular process, an important criterion is how long it takes to execute that process. Turnaround
time is the sum of the times spent waiting to get into memory, waiting in the ready queue, executing on the CPU,
and waiting for I/O. It is given as
Turn Around Time = Completion Time - Arrival Time.
4.) Waiting Time
The time spent by a process waiting in the ready queue is called the waiting time, and it is calculated as
Waiting Time = Turnaround Time - Burst Time.
5.) Response Time
The time taken from the submission of a request until the first response is produced. This
measure is called response time.
Response Time = CPU Allocation Time - Arrival Time
6.) Completion Time
The completion time is the time when the process stops executing.
7.) Priority
If the operating system assigns priorities to processes, the scheduling mechanism should favor the
higher-priority processes.
8.) Predictability
A given process always should run in about the same amount of time under a similar system load.
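To make the formulas above concrete, the tiny calculation below applies them to a single made-up process; the arrival, burst, first-run, and completion times are assumed values for illustration.

```c
/* Tiny worked example of the timing formulas above (all values are assumed). */
#include <stdio.h>

int main(void) {
    int arrival = 0, burst = 10, first_run = 3, completion = 18;

    int turnaround = completion - arrival;  /* Turnaround Time = Completion - Arrival      */
    int waiting    = turnaround - burst;    /* Waiting Time    = Turnaround - Burst        */
    int response   = first_run - arrival;   /* Response Time   = CPU allocation - Arrival  */

    printf("turnaround=%d waiting=%d response=%d\n", turnaround, waiting, response);
    return 0;
}
```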
8) Explain Round Robin algorithm with suitable example.
This algorithm works on the principle of round-robin, where an equal share of something is given to each
participant in turn. Mostly used for multitasking, it is one of the oldest and simplest scheduling algorithms and
offers starvation-free execution. In round-robin (RR), each ready task runs turn by turn in a cyclic queue for a
limited time period (the time quantum).
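A minimal simulation sketch is shown below; it assumes three processes that all arrive at time 0, with illustrative burst times of 5, 3, and 1 and a time quantum of 2.

```c
/* Minimal Round Robin simulation: all processes arrive at time 0, quantum = 2.
   Burst times are illustrative. */
#include <stdio.h>

#define N 3
#define QUANTUM 2

int main(void) {
    int burst[N]      = {5, 3, 1};   /* remaining burst time of P1..P3 */
    int completion[N] = {0};
    int time = 0, remaining = N;

    while (remaining > 0) {
        for (int i = 0; i < N; i++) {
            if (burst[i] == 0) continue;
            int run = burst[i] < QUANTUM ? burst[i] : QUANTUM;
            printf("t=%2d  P%d runs for %d\n", time, i + 1, run);
            time += run;
            burst[i] -= run;
            if (burst[i] == 0) { completion[i] = time; remaining--; }
        }
    }
    for (int i = 0; i < N; i++)
        printf("P%d completes at %d\n", i + 1, completion[i]);
    return 0;
}
```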

9) Explain the five state process state transition diagram.


1.) Running: A process that is currently being executed. Assuming there is only a single processor, at most one
process can be in the running state at any given time.
2.) Ready: It means a process that is prepared to execute when given the opportunity by the OS.
3.) Blocked/Waiting: It means that a process cannot continue executing until some event occurs like for
example, the completion of an input-output operation.
4.) New: It means a new process that has been created but has not yet been admitted by the OS for its
execution. A new process is not loaded into the main memory, but its process control block (PCB) has
been created.
5.) Exit/Terminate: A process or job that has been released by the OS, either because it is completed or is
aborted for some issue.
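As a small illustration of the five states described above, the sketch below (an assumed, simplified model, not tied to any particular OS) lists the states and prints one legal sequence of transitions.

```c
/* Assumed, simplified sketch of the five process states and one legal path
   through them (admit -> dispatch -> I/O wait -> I/O done -> dispatch -> exit). */
#include <stdio.h>

typedef enum { NEW, READY, RUNNING, BLOCKED, TERMINATED } state_t;

static const char *state_name[] = { "New", "Ready", "Running", "Blocked/Waiting", "Exit/Terminate" };

int main(void) {
    state_t path[] = { NEW, READY, RUNNING, BLOCKED, READY, RUNNING, TERMINATED };
    int n = sizeof path / sizeof path[0];
    for (int i = 0; i < n; i++)
        printf("%s%s", state_name[path[i]], i + 1 < n ? " -> " : "\n");
    return 0;
}
```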

10) Define Thread. Mention benefits of Multithreading.


A thread, sometimes called a lightweight process, is a basic unit of CPU utilization. It comprises a thread
ID, a program counter, a register set, and a stack. It shares its code section, data section, and other OS
resources with the other threads belonging to the same process.
Benefits of multithreading:
1.) Responsiveness
2.) Resource sharing
3.) Economy
4.) Utilization of multiprocessor architectures.
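For example, with POSIX threads (a minimal sketch; compile with -pthread), two threads of the same process run concurrently while sharing its address space:

```c
/* Minimal POSIX threads sketch (compile with: gcc threads.c -pthread). */
#include <stdio.h>
#include <pthread.h>

void *worker(void *arg) {
    int id = *(int *)arg;
    /* Both threads share the process's code, data, and open files. */
    printf("thread %d running in the same address space\n", id);
    return NULL;
}

int main(void) {
    pthread_t t[2];
    int ids[2] = {1, 2};
    for (int i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, worker, &ids[i]);
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);
    return 0;
}
```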

11) Explain different types of threads in OS.


1.) User threads-

User threads are supported above the kernel and are implemented by a thread library at the user level.
The library provides support for thread creation, scheduling, and management with no support from the kernel,
because the kernel is unaware of user-level threads; all thread creation and scheduling are done in user space
without the need for kernel intervention. These threads are fast to create and manage.
2.) Kernel Threads – Kernel threads are supported directly by the OS: the kernel performs thread creation,
scheduling, and management in kernel space. Because thread management is done by the OS, kernel threads
are generally slower to create and manage than user threads.

12) Compare preemptive and non-preemptive scheduling algorithms.


| Parameter | Preemptive Scheduling | Non-Preemptive Scheduling |
|---|---|---|
| Basic | Resources (CPU cycles) are allocated to a process for a limited time. | Once resources (CPU cycles) are allocated to a process, the process holds them until it completes its burst time or switches to the waiting state. |
| Interrupt | A process can be interrupted in between. | A process cannot be interrupted until it terminates itself or its time is up. |
| Starvation | If a process having high priority frequently arrives in the ready queue, a low-priority process may starve. | If a process with a long burst time is running on the CPU, later processes with shorter burst times may starve. |
| Scheduling overhead | It has the overhead of scheduling the processes. | It does not have scheduling overhead. |
| Flexibility | Flexible. | Rigid. |
| Cost | Cost associated. | No cost associated. |
| CPU utilization | CPU utilization is high in preemptive scheduling. | CPU utilization is low in non-preemptive scheduling. |
| Waiting time | Waiting time is less. | Waiting time is high. |
| Response time | Response time is less. | Response time is high. |
| Decision making | Decisions are made by the scheduler and are based on priority and time-slice allocation. | Decisions are made by the process itself, and the OS just follows the process's instructions. |
| Process control | The OS has greater control over the scheduling of processes. | The OS has less control over the scheduling of processes. |
| Context-switch overhead | Higher overhead due to frequent context switching. | Lower overhead since context switching is less frequent. |
| Examples | Round Robin and Shortest Remaining Time First. | First Come First Serve and Shortest Job First. |
MODULE 3

11) What is the critical section problem? Mention three conditions that must be satisfied
by its solution?
- The critical section is a code segment where shared variables can be accessed. Atomic
action is required in a critical section, i.e. only one process can execute in its critical section at a time;
all the other processes have to wait to execute in their critical sections.
- Mutual Exclusion: The solution must provide mutual exclusion, meaning that if one process is executing
inside its critical section, no other process may enter the critical section (see the sketch after this list).
- Progress: If a process does not need to execute in the critical section, it should not stop other processes
from getting into the critical section.
- Bounded Waiting: There must be a bound on the waiting time for every process to get into the critical
section; no process should wait endlessly to enter its critical section.
- Architectural Neutrality: The mechanism must be architecture-neutral, i.e. if the solution works on one
architecture, it should also work on other architectures.
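As a sketch of mutual exclusion (using a POSIX mutex rather than one of the classical software solutions such as Peterson's algorithm), the example below protects a shared counter; progress and bounded waiting then depend on the mutex implementation.

```c
/* Mutual exclusion for a critical section using a POSIX mutex (a sketch). */
#include <stdio.h>
#include <pthread.h>

static long counter = 0;                        /* shared variable */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *increment(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);              /* entry section    */
        counter++;                              /* critical section */
        pthread_mutex_unlock(&lock);            /* exit section     */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}
```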

12) What is a Deadlock? Explain the necessary conditions for a deadlock to take place.
- A deadlock is a situation in which a set of processes are blocked because each process is holding a resource
and waiting for another resource held by some other process in the set. Deadlock can occur only if the
following four conditions hold simultaneously (a small illustration follows the list):
- Mutual Exclusion: At least one resource must be held in a non-sharable mode, meaning it cannot be
simultaneously used by multiple processes.
- Hold and Wait: A process must be holding at least one resource while waiting for another resource to be
released by another process.
- No Preemption: Resources cannot be forcibly taken away from a process; they must be released voluntarily.
- Circular Wait: A circular chain of two or more processes exists, where each process is waiting for a resource
held by the next process in the chain.
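A minimal sketch of how hold-and-wait plus circular wait can produce a deadlock with two POSIX mutexes (the thread and lock names are illustrative; running it may genuinely hang, which is the point):

```c
/* Two threads acquire two locks in opposite order: each holds one lock and
   waits for the other, so the program may deadlock when run. */
#include <stdio.h>
#include <unistd.h>
#include <pthread.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

void *t1(void *arg) {
    pthread_mutex_lock(&lock_a);   /* holds A ...              */
    sleep(1);
    pthread_mutex_lock(&lock_b);   /* ... and waits for B      */
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

void *t2(void *arg) {
    pthread_mutex_lock(&lock_b);   /* holds B ...              */
    sleep(1);
    pthread_mutex_lock(&lock_a);   /* ... waits for A: circular wait */
    pthread_mutex_unlock(&lock_a);
    pthread_mutex_unlock(&lock_b);
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, t1, NULL);
    pthread_create(&b, NULL, t2, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("finished (only reached if no deadlock occurred)\n");
    return 0;
}
```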
13) What is deadlock avoidance? Explain the algorithm with an example.
Ans - Deadlock avoidance is another technique used in operating systems to deal with deadlocks. Unlike
deadlock prevention, which aims to eliminate the possibility of deadlocks, deadlock avoidance focuses on
dynamically detecting and avoiding situations that could lead to deadlocks. The Banker's algorithm is used for
deadlock avoidance: before granting a resource request, the system checks that granting it leaves the system in a
safe state. Example:

Suppose we have 3 processes (A, B, C) and 3 resource types (R1, R2, R3), each having 5 instances.
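Since the worked numbers for this example are not given here, the sketch below runs the Banker's safety check on assumed Allocation and Max matrices that are consistent with the setup above (5 instances of each resource type); the specific values are illustrative only.

```c
/* Sketch of the Banker's safety check for processes A, B, C and resource types
   R1, R2, R3 with 5 instances each. The Allocation/Max/Available values below
   are illustrative assumptions, not the numbers from the original question. */
#include <stdio.h>
#include <stdbool.h>

#define P 3   /* processes A, B, C     */
#define R 3   /* resource types R1..R3 */

int main(void) {
    const char name[P] = {'A', 'B', 'C'};
    int alloc[P][R] = {{1, 2, 1}, {2, 0, 1}, {1, 1, 2}};   /* currently held          */
    int max[P][R]   = {{3, 3, 2}, {2, 2, 2}, {4, 2, 3}};   /* maximum demand          */
    int avail[R]    = {1, 2, 1};                           /* 5 of each minus allocated */

    int need[P][R];
    for (int i = 0; i < P; i++)
        for (int j = 0; j < R; j++)
            need[i][j] = max[i][j] - alloc[i][j];

    bool done[P] = {false};
    int order[P], count = 0;

    /* Repeatedly pick a process whose remaining Need fits in Available;
       when it "finishes", its allocation is returned to Available. */
    while (count < P) {
        bool progress = false;
        for (int i = 0; i < P; i++) {
            if (done[i]) continue;
            bool fits = true;
            for (int j = 0; j < R; j++)
                if (need[i][j] > avail[j]) { fits = false; break; }
            if (fits) {
                for (int j = 0; j < R; j++) avail[j] += alloc[i][j];
                done[i] = true;
                order[count++] = i;
                progress = true;
            }
        }
        if (!progress) { printf("System is NOT in a safe state\n"); return 0; }
    }

    printf("Safe sequence: ");
    for (int i = 0; i < P; i++)
        printf("%c%s", name[order[i]], i + 1 < P ? " -> " : "\n");
    return 0;
}
```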
