
1: Draw the structure of operating system and explain.

An operating system is a construct that allows the user application programs to interact with the
system hardware. Since the operating system is such a complex structure, it should be created with
utmost care so it can be used and modified easily. An easy way to do this is to create the operating
system in parts. Each of these parts should be well defined with clear inputs, outputs and functions.
Simple Structure
There are many operating systems that have a rather simple structure. These started as small systems
and rapidly expanded far beyond their original scope. A common example of this is MS-DOS. It was
originally designed for a small group of users, and there was no indication that it would become so
popular.
An image to illustrate the structure of MS-DOS is as follows −

It is better that operating systems have a modular structure, unlike MS-DOS. That would lead to
greater control over the computer system and its various applications. The modular structure would
also allow the programmers to hide information as required and implement internal routines as they
see fit without changing the outer specifications.
Layered Structure
One way to achieve modularity in the operating system is the layered approach. In this, the bottom
layer is the hardware and the topmost layer is the user interface.
An image demonstrating the layered approach is as follows −
As seen from the image, each upper layer is built on the layer below it. Each layer hides certain data
structures, operations, and hardware details from the layers above it.
One problem with the layered structure is that each layer needs to be carefully defined. This is
necessary because the upper layers can only use the functionalities of the layers below them.

2: Explain in detail about semaphores.


A semaphore is a compound data type with two fields: a non-negative integer S.V and a set (queue) of
processes S.L. Semaphores are used to solve critical section problems by means of two atomic
operations, wait and signal, which are used for process synchronization.
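To make wait and signal concrete, here is a minimal sketch using the POSIX semaphore API (sem_wait / sem_post); the two counting threads and the shared counter are illustrative assumptions, not part of the definition above.

    #include <stdio.h>
    #include <pthread.h>
    #include <semaphore.h>

    sem_t mutex;            /* binary semaphore: S.V starts at 1 */
    int shared_counter = 0; /* shared resource protected by the semaphore */

    void *worker(void *arg)
    {
        for (int i = 0; i < 100000; i++) {
            sem_wait(&mutex);   /* wait(S): decrement S.V, block if it would go below 0 */
            shared_counter++;   /* critical section */
            sem_post(&mutex);   /* signal(S): increment S.V, wake a waiting process */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        sem_init(&mutex, 0, 1);   /* 0 = shared between threads, initial value 1 */
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %d\n", shared_counter);   /* 200000 when protected by the semaphore */
        sem_destroy(&mutex);
        return 0;
    }

Compiled with gcc -pthread, this should always print 200000, because the semaphore is initialized to 1 and therefore behaves as the binary semaphore ("mutex") described under Type-2 below.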

States of the process :


Let’s go through the stages of the process that comes in its lifecycle. This will help in understanding
semaphore.

Running –
The process is currently being executed on the processor.
Ready –
The process wants to run and is waiting to be assigned the processor.
Idle –
The process runs only when no other process is ready to run.
Blocked –
The process is not ready and is not a candidate to run; it can be awakened only by some external
action.
Inactive –
The initial state of the process. The process is activated at some point and becomes ready.
Complete –
The process has executed its final statement.

Types of Semaphores :
Here we will discuss the types of Semaphores as follows.

Type-1 :
General Semaphore :
A semaphore whose integer component S.V can take arbitrary non-negative values is called a general
semaphore. These are a kind of weak semaphore.

Type-2 :
Binary Semaphore :
A semaphore whose integer component S.V takes only the values 0 and 1 is called a binary
semaphore. This is also known as a "mutex", which stands for mutual exclusion.

Type-3 :
Strong Semaphore :
In a strong semaphore, the integer component S.V behaves as in a weak semaphore, but the set S.L is
replaced by a queue. Because a weak semaphore removes an arbitrary process, it may lead to starvation,
whereas the strong semaphore's queue keeps it free from starvation.

Type-4 :
Busy- Wait Semaphore :
It does not have a component S.L; the semaphore S is identified only by S.V. Busy-wait semaphores
are appropriate in a multiprocessor system, where the waiting process has its own processor and is not
wasting CPU time that could be used for other computation.

3: Discuss in detail about process synchronization.

4: Discuss in detail about system calls and its types


Definition :

a. A system call is the programmatic way in which a computer program requests a service from
the kernel of the operating system it is executed on. A system call is a way for programs to
interact with the operating system.

b. A computer program makes a system call when it requests a service from the operating system's
kernel. System calls provide the services of the operating system to user programs via the
Application Program Interface (API).

c. It provides an interface between a process and operating system to allow user-level processes
to request services of the operating system. System calls are the only entry points into the
kernel system. All programs needing resources must use system calls.

Here are the five types of System Calls in OS:

 Process Control
 File Management
 Device Management
 Information Maintenance
 Communications

Types of System calls in OS


Process Control
These system calls perform tasks such as process creation and process termination (a short sketch using the POSIX process-control calls follows the function list below).

Functions:

 End and Abort


 Load and Execute
 Create Process and Terminate Process
 Wait and Signal Event
 Allocate and free memory
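As a rough illustration of the process-control calls listed above, the sketch below uses the POSIX calls fork(), execlp(), and waitpid() to create a process, load a new program into it, and wait for it to terminate; the choice of /bin/ls as the program to run is only an example.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();                 /* create process */
        if (pid < 0) {
            perror("fork");                 /* creation failed */
            exit(1);
        }
        if (pid == 0) {
            /* child: load and execute a new program */
            execlp("/bin/ls", "ls", "-l", (char *)NULL);
            perror("execlp");               /* reached only if exec fails */
            exit(1);
        }
        /* parent: wait for the child to terminate */
        int status;
        waitpid(pid, &status, 0);
        printf("child %d finished with status %d\n", (int)pid, WEXITSTATUS(status));
        return 0;                           /* end (normal termination) */
    }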

File Management
File management system calls handle file-manipulation jobs such as creating, reading, and writing files
(a sketch using the POSIX file calls follows the list below).

Functions:

 Create a file
 Delete file
 Open and close file
 Read, write, and reposition
 Get and set file attributes
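The file-management calls can be illustrated with a small POSIX sketch; the file name demo.txt and its contents are made-up examples.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[64];

        /* create/open a file for writing */
        int fd = open("demo.txt", O_CREAT | O_WRONLY | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); return 1; }
        write(fd, "hello, file system\n", 19);      /* write */
        close(fd);                                  /* close */

        /* reopen, reposition, and read */
        fd = open("demo.txt", O_RDONLY);
        lseek(fd, 7, SEEK_SET);                     /* reposition to byte 7 */
        ssize_t n = read(fd, buf, sizeof(buf) - 1); /* read */
        buf[n] = '\0';
        printf("read back: %s", buf);
        close(fd);

        unlink("demo.txt");                         /* delete the file */
        return 0;
    }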

Device Management
Device management does the job of device manipulation like reading from device buffers, writing
into device buffers, etc.

Functions:

 Request and release device


 Logically attach/ detach devices
 Get and Set device attributes

Information Maintenance
It handles information and its transfer between the OS and the user program.

Functions:

 Get or set time and date


 Get process and device attributes

Communication:
These types of system calls are specially used for interprocess communications.

Functions:

 Create, delete communications connections


 Send, receive message
 Help OS to transfer status information
 Attach or detach remote devices
5: Explain in detail about different types of operating system

Types of Operating System (OS)


Following are the popular types of OS (Operating System):

 Batch Operating System


 Multitasking/Time Sharing OS
 Multiprocessing OS
 Real Time OS
 Distributed OS
 Network OS
 Mobile OS

Batch Operating System


Some computer processes are very lengthy and time-consuming. To speed up processing, jobs with
similar needs are batched together and run as a group.

The user of a batch operating system never directly interacts with the computer. In this type of OS,
every user prepares his or her job on an offline device such as a punch card and submits it to the
computer operator.

Multi-Tasking/Time-sharing Operating systems


A time-sharing operating system enables people located at different terminals to use a single
computer system at the same time. Sharing processor (CPU) time among multiple users is termed
time-sharing.
Real time OS
In a real-time operating system, the time interval required to process and respond to inputs is very
small. Military software systems and space software systems are examples of real-time OS.

Distributed Operating System


Distributed systems use many processors located in different machines to provide very fast
computation to its users.
Network Operating System
A network operating system runs on a server. It provides the capability to manage data, users,
groups, security, applications, and other networking functions.

Mobile OS
Mobile operating systems are those OS that are specifically designed to power smartphones, tablets,
and wearable devices.

The most famous mobile operating systems are Android and iOS; others include BlackBerry, Web,
and watchOS.

6: Explain in detail about process scheduling.


Process scheduling is an important part of multiprogramming operating systems. It is the process
of removing the running task from the processor and selecting another task for processing. It
schedules a process into different states like ready, waiting, and running.

Process Scheduling Queues

There are multiple states a process has to go through during execution. The OS maintains a separate
queue for each state along with the process control blocks (PCB) of all processes. The PCB moves to
a new state queue, after being unlinked from its current queue, when the state of a process changes.

These process scheduling queues are:

1. Job queue: This queue stores all the processes in the system.


2. Ready queue: This stores a set of all processes in main memory, ready and waiting for execution.
The ready queue stores any new process.
3. Device queue: This queue consists of the processes blocked due to the unavailability of an I/O
device.
The OS uses different policies to manage each queue, and the OS scheduler decides how to move
processes between the ready queue and the run queue, which can have only one entry per processor
core on the system. A simplified sketch of such a ready queue follows.
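As a rough sketch of how PCBs might be linked into such a queue, consider the simplified structures below; the field names and the three hard-coded processes are illustrative assumptions, not the layout of any real OS.

    #include <stdio.h>
    #include <stdlib.h>

    enum state { READY, RUNNING, WAITING };

    /* very simplified process control block */
    struct pcb {
        int pid;
        enum state state;
        struct pcb *next;      /* link to the next PCB in the same queue */
    };

    /* FIFO queue of PCBs (e.g. the ready queue or a device queue) */
    struct queue { struct pcb *head, *tail; };

    void enqueue(struct queue *q, struct pcb *p)
    {
        p->next = NULL;
        if (q->tail) q->tail->next = p; else q->head = p;
        q->tail = p;
    }

    struct pcb *dequeue(struct queue *q)
    {
        struct pcb *p = q->head;
        if (p) { q->head = p->next; if (!q->head) q->tail = NULL; }
        return p;
    }

    int main(void)
    {
        struct queue ready = {0};
        for (int i = 1; i <= 3; i++) {             /* three new processes enter the ready queue */
            struct pcb *p = malloc(sizeof *p);
            p->pid = i; p->state = READY;
            enqueue(&ready, p);
        }
        /* the dispatcher unlinks a PCB from the ready queue and marks it running */
        struct pcb *running = dequeue(&ready);
        running->state = RUNNING;
        printf("dispatched P%d; next in ready queue: P%d\n", running->pid, ready.head->pid);
        return 0;
    }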

Objectives of Process Scheduling in OS

Following are the objectives of process scheduling:

1. It maximizes the number of interactive users within acceptable response times.


2. It achieves a balance between response and utilization.
3. It makes sure that no process is postponed for an indefinite time and enforces priorities.
4. It gives preference to the processes holding the key resources.

7: Explain the various states of process with diagram.


New
This is the state when the process has just been created. It is the initial state in the process life cycle.
Ready
In the ready state, the process is waiting to be assigned the processor by the short term scheduler, so it
can run. This state is immediately after the new state for the process.
Ready Suspended
The processes in the ready suspended state are in secondary memory. They were initially in the ready
state in main memory, but a lack of memory forced them to be suspended and placed in secondary
memory.
Running
The process is said to be in running state when the process instructions are being executed by the
processor. This is done once the process is assigned to the processor using the short-term scheduler.
Blocked
The process is in blocked state if it is waiting for some event to occur. This event may be I/O as the
I/O events are executed in the main memory and don't require the processor. After the event is
complete, the process again goes to ready state.
Blocked Suspended
This is similar to ready suspended. The processes in the blocked suspended state are in secondary
memory. They were initially in the blocked state in main memory, waiting for some event, but a lack of
memory forced them to be suspended and placed in secondary memory. A process moves from
blocked suspended to ready suspended when the event it was waiting for occurs.
Terminated
The process is terminated once it finishes its execution. In the terminated state, the process is removed
from main memory and its process control block is also deleted.
8: Discuss the various operations of operating system in detail.
An operating system is a program on which application programs are executed and acts as a
communication bridge (interface) between the user and the computer hardware.

The main task an operating system carries out is the allocation of resources and services, such as the
allocation of memory, devices, processors, and information. The operating system also includes
programs to manage these resources, such as a traffic controller, a scheduler, memory management
module, I/O programs, and a file system.

Important functions of an operating System:

Security –
The operating system uses password protection and other similar techniques to protect user data. It
also prevents unauthorized access to programs and user data.

Control over system performance –


The OS monitors overall system health to help improve performance. It records the response time
between service requests and system responses to have a complete view of system health. This can
help improve performance by providing the information needed to troubleshoot problems.

Job accounting –
The operating system keeps track of the time and resources used by various tasks and users; this
information can be used to track resource usage for a particular user or group of users.

Error detecting aids –


The operating system constantly monitors the system to detect errors and avoid the malfunctioning of
a computer system.

Coordination between other software and users –


Operating systems also coordinate and assign interpreters, compilers, assemblers, and other software
to the various users of the computer systems.

Memory Management –
The operating system manages the Primary Memory or Main Memory. Main memory is made up of a
large array of bytes or words where each byte or word is assigned a certain address. Main memory is
fast storage and it can be accessed directly by the CPU. For a program to be executed, it should be
first loaded in the main memory. An Operating System performs the following activities for memory
management:
It keeps track of primary memory, i.e., which bytes of memory are used by which user program, which
memory addresses have already been allocated, and which have not yet been used. In
multiprogramming, the OS decides the order in which processes are granted access to memory, and for
how long. It allocates memory to a process when the process requests it and deallocates the memory
when the process has terminated or is performing an I/O operation.

Processor Management –
In a multi-programming environment, the OS decides the order in which processes have access to the
processor, and how much processing time each process has. This function of OS is called process
scheduling. An Operating System performs the following activities for processor management.
Keeps track of the status of processes. The program that performs this task is known as a traffic
controller. Allocates the CPU (processor) to a process, and de-allocates the processor when a process
is no longer required.

Device Management –
An OS manages device communication via their respective drivers. It performs the following
activities for device management: it keeps track of all devices connected to the system; designates a
program, known as the Input/Output controller, responsible for every device; decides which process
gets access to a certain device and for how long; allocates devices in an effective and efficient way;
and deallocates devices when they are no longer required.

File Management –
A file system is organized into directories for efficient or easy navigation and usage. These directories
may contain other directories and other files. An Operating System carries out the following file
management activities. It keeps track of where information is stored, user access settings and status of
every file, and more… These facilities are collectively known as the file system.

9: Explain the concept of Contiguous Memory Allocation?


Contiguous memory allocation is a method in which a single contiguous section of memory is
allocated to the process or file that needs it. Because of this, all of the allocated memory space resides
together in one place; the free (unused) memory partitions are not scattered randomly here and there
across the whole memory space.

This allocation can be done in two ways:


1. Fixed-size Partition Scheme

This technique is also known as Static partitioning. In this scheme, the system divides the memory
into fixed-size partitions. The partitions may or may not be the same size. Whenever any process
terminates then the partition becomes available for another process.

Example

Let's take an example of fixed size partitioning scheme, we will divide a memory size of 15 KB into
fixed-size partitions:

Advantages of Fixed-size Partition Scheme

• This scheme is simple and is easy to implement


• It supports multiprogramming as multiple processes can be stored
inside the main memory.
• Management is easy using this scheme

Disadvantages of Fixed-size Partition Scheme

1. Internal Fragmentation
2. Limitation on the size of the process
3. External Fragmentation
4. Degree of multiprogramming is less

2. Variable-size Partition Scheme


This scheme is also known as Dynamic partitioning. The size of the partition is not declared initially.
Whenever any process arrives, a partition of size equal to the size of the process is created and then
allocated to the process. Thus the size of each partition is equal to the size of the process.

As partition size varies according to the need of the process so in this partition scheme there is no
internal fragmentation.
Advantages of Variable-size Partition Scheme

1. No Internal Fragmentation
2. Degree of Multiprogramming is Dynamic
3. No Limitation on the Size of Process

Disadvantages of Variable-size Partition Scheme

1. External Fragmentation
2. Difficult Implementation

10: Difference between Demand Paging and Segmentation?


Differences between demand paging and segmentation:

1. In demand paging, the pages are of equal size, while in segmentation, segments can be of
different sizes.
2. Page size is fixed in demand paging, whereas segment size may vary in segmentation, as it
allows dynamic growth of segments.
3. Demand paging does not allow sharing of pages, while segments can be shared in
segmentation.
4. In demand paging, pages are loaded into memory on demand; in segmentation, segments are
allocated to the program during compilation.
5. In demand paging, the page map table manages the record of pages in memory; in
segmentation, the segment map table records every segment's address in memory.
6. Demand paging provides a large virtual memory and more efficient use of memory;
segmentation provides virtual memory, and the maximum size of a segment is defined by the
size of memory.

11: Explain Paging in Detail with example?

In Operating Systems, Paging is a storage mechanism used to retrieve processes from the secondary
storage into the main memory in the form of pages.

The main idea behind the paging is to divide each process in the form of pages. The main memory
will also be divided in the form of frames.

One page of the process is to be stored in one of the frames of the memory. The pages can be stored at
the different locations of the memory but the priority is always to find the contiguous frames or holes.

Pages of the process are brought into the main memory only when they are required otherwise they
reside in the secondary storage.

Different operating systems define different frame sizes, but all frames must be of equal size. Since
the pages are mapped to the frames in paging, the page size needs to be the same as the frame size.

Example

Let us consider a main memory of size 16 KB and a frame size of 1 KB; the main memory will
therefore be divided into a collection of 16 frames of 1 KB each.

There are 4 processes in the system that is P1, P2, P3 and P4 of 4 KB each. Each process is divided
into pages of 1 KB each so that one page can be stored in one frame.

Initially, all the frames are empty therefore pages of the processes will get stored in the contiguous
way.

Frames, pages and the mapping between the two is shown in the image below.
Let us consider that, P2 and P4 are moved to waiting state after some time. Now, 8 frames become
empty and therefore other pages can be loaded in that empty place. The process P5 of size 8 KB (8
pages) is waiting inside the ready queue.

Given that we have 8 non-contiguous frames available in memory, and that paging provides the
flexibility of storing a process at different places, we can load the pages of process P5 in place of P2
and P4.
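To make the page-to-frame mapping concrete, here is a small sketch of logical-to-physical address translation using the 1 KB page size from this example; the page-table contents (pages 0-3 mapped to frames 5-8) are assumed values for illustration only.

    #include <stdio.h>

    #define PAGE_SIZE 1024   /* 1 KB pages and frames, as in the example */

    int main(void)
    {
        /* hypothetical page table for a 4 KB process (4 pages): page -> frame */
        int page_table[4] = { 5, 6, 7, 8 };

        unsigned logical  = 2 * PAGE_SIZE + 100;      /* byte 100 of page 2 */
        unsigned page     = logical / PAGE_SIZE;      /* page number  = 2   */
        unsigned offset   = logical % PAGE_SIZE;      /* offset       = 100 */
        unsigned frame    = page_table[page];         /* frame number = 7   */
        unsigned physical = frame * PAGE_SIZE + offset;

        printf("logical %u -> page %u, offset %u -> frame %u -> physical %u\n",
               logical, page, offset, frame, physical);
        return 0;
    }
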
12: Consider the set of 5 processes whose arrival time and burst time are given below-

Process Id   Arrival time   Burst time
P1           3              4
P2           5              3
P3           0              2
P4           5              1
P5           4              3

If the CPU scheduling policy is FCFS, calculate the average waiting time and average turn
around time?


ANS .
Gantt Chart-

| P3 | idle | P1 | P5 | P2 | P4 |
0    2      3    7    10    13    14

Here, the idle slot (shown as a black box in the original chart) represents the idle time of the CPU.
Now, we know-
 Turn Around time = Exit time – Arrival time
 Waiting time = Turn Around time – Burst time
 
 

Process Id   Exit time   Turn Around time   Waiting time

P1           7           7 – 3 = 4          4 – 4 = 0

P2           13          13 – 5 = 8         8 – 3 = 5

P3           2           2 – 0 = 2          2 – 2 = 0

P4           14          14 – 5 = 9         9 – 1 = 8

P5           10          10 – 4 = 6         6 – 3 = 3

 
Now,
 Average Turn Around time = (4 + 8 + 2 + 9 + 6) / 5 = 29 / 5 = 5.8 unit
 Average waiting time = (0 + 5 + 0 + 8 + 3) / 5 = 16 / 5 = 3.2 unit
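The same FCFS calculation can be checked programmatically; the small sketch below replays the arrival and burst times from this question and should reproduce the averages above (5.8 and 3.2).

    #include <stdio.h>

    struct proc { const char *id; int arrival, burst; };

    int main(void)
    {
        /* data from the question, listed here in FCFS (arrival-time) order */
        struct proc p[] = {
            {"P3", 0, 2}, {"P1", 3, 4}, {"P5", 4, 3}, {"P2", 5, 3}, {"P4", 5, 1}
        };
        int n = 5, time = 0;
        double total_tat = 0, total_wt = 0;

        for (int i = 0; i < n; i++) {
            if (time < p[i].arrival)          /* CPU idles until the next arrival */
                time = p[i].arrival;
            time += p[i].burst;               /* exit time of this process */
            int tat = time - p[i].arrival;    /* turnaround = exit - arrival */
            int wt  = tat - p[i].burst;       /* waiting    = turnaround - burst */
            printf("%s: exit=%d TAT=%d WT=%d\n", p[i].id, time, tat, wt);
            total_tat += tat;
            total_wt  += wt;
        }
        printf("avg TAT = %.1f, avg WT = %.1f\n", total_tat / n, total_wt / n);
        return 0;
    }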

13: Consider the set of 5 processes whose arrival time and burst time are given below-

Process Id   Arrival time   Burst time
P1           3              1
P2           1              4
P3           4              2
P4           0              6
P5           2              3
If the CPU scheduling policy is SJF non-preemptive, calculate the average waiting time and
average turn around time


Ans.
Gantt Chart-

| P4 | P1 | P3 | P5 | P2 |
0    6    7    9    12    16
Now, we know-
 Turn Around time = Exit time – Arrival time
 Waiting time = Turn Around time – Burst time
 
 

Process Id   Exit time   Turn Around time   Waiting time

P1           7           7 – 3 = 4           4 – 1 = 3

P2           16          16 – 1 = 15         15 – 4 = 11

P3           9           9 – 4 = 5           5 – 2 = 3

P4           6           6 – 0 = 6           6 – 6 = 0

P5           12          12 – 2 = 10         10 – 3 = 7

 
Now,
 Average Turn Around time = (4 + 15 + 5 + 6 + 10) / 5 = 40 / 5 = 8 unit
 Average waiting time = (3 + 11 + 3 + 0 + 7) / 5 = 24 / 5 = 4.8 unit
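Non-preemptive SJF can be simulated in the same way: at every scheduling point the sketch below picks, among the processes that have already arrived, the one with the smallest burst time. With the data from this question it should reproduce the averages above (8 and 4.8).

    #include <stdio.h>

    struct proc { const char *id; int arrival, burst, done; };

    int main(void)
    {
        struct proc p[] = {                       /* data from the question */
            {"P1", 3, 1, 0}, {"P2", 1, 4, 0}, {"P3", 4, 2, 0},
            {"P4", 0, 6, 0}, {"P5", 2, 3, 0}
        };
        int n = 5, time = 0, finished = 0;
        double total_tat = 0, total_wt = 0;

        while (finished < n) {
            int pick = -1;
            for (int i = 0; i < n; i++)           /* shortest job among arrived, unfinished ones */
                if (!p[i].done && p[i].arrival <= time &&
                    (pick < 0 || p[i].burst < p[pick].burst))
                    pick = i;
            if (pick < 0) { time++; continue; }   /* nothing has arrived yet: CPU idles */

            time += p[pick].burst;                /* run the chosen job to completion */
            int tat = time - p[pick].arrival;
            int wt  = tat - p[pick].burst;
            printf("%s: exit=%d TAT=%d WT=%d\n", p[pick].id, time, tat, wt);
            total_tat += tat;
            total_wt  += wt;
            p[pick].done = 1;
            finished++;
        }
        printf("avg TAT = %.1f, avg WT = %.1f\n", total_tat / n, total_wt / n);
        return 0;
    }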

14: Consider the following requests:


25 K , 50 K , 100 K , 75 K. 
Determine the algorithm which can optimally satisfy this requirement.

1. First Fit algorithm


2. Best Fit Algorithm
3. Worst Fit.


Using First Fit algorithm

Let's see, how first fit algorithm works on this problem.

1. 25 K requirement

The algorithm scans the list until it finds the first hole big enough to satisfy the 25 K request. It finds
that the second partition, of 75 K, is free, so it allocates 25 K of that partition to the process, and the
remaining 50 K is left as a hole.

2. 50 K requirement

The 50 K requirement can be fulfilled by allocating the third partition, which is 50 K in size, to the
process. No leftover hole is produced.

3. 100 K requirement

The 100 K requirement can be fulfilled by using the fifth partition, of 175 K. Out of 175 K, 100 K will
be allocated and the remaining 75 K will remain as a hole.
4. 75 K requirement

Since a 75 K partition is free, we can allocate it to the process, which demands exactly 75 K of space.

Using the first fit algorithm, we have fulfilled the entire set of requests and no unusable space is
remaining.
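A rough sketch of the first-fit scan is given below. The hole sizes in the array are assumptions chosen for illustration (the actual partition diagram for this exercise is not reproduced in these notes), so the exact hole chosen for each request may differ slightly from the walkthrough above.

    #include <stdio.h>

    int main(void)
    {
        /* assumed free-hole sizes (in KB); the real exercise uses a partition
           diagram that is not reproduced in these notes */
        int hole[]    = { 75, 50, 175, 75 };
        int nholes    = 4;
        int request[] = { 25, 50, 100, 75 };
        int nreq      = 4;

        for (int r = 0; r < nreq; r++) {
            int placed = 0;
            for (int h = 0; h < nholes; h++) {
                if (hole[h] >= request[r]) {          /* first hole big enough wins */
                    printf("%3d K -> hole %d (size %3d K), leftover %3d K\n",
                           request[r], h, hole[h], hole[h] - request[r]);
                    hole[h] -= request[r];            /* shrink the hole */
                    placed = 1;
                    break;
                }
            }
            if (!placed)
                printf("%3d K -> cannot be satisfied\n", request[r]);
        }
        return 0;
    }
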

Let's see how the Best Fit algorithm performs for this problem.

Using Best Fit Algorithm

1. 25 K requirement

To allocate 25 K of space using the best fit approach, we need to scan the whole list; we then find that
a 75 K partition is free and is the smallest partition that can accommodate the need of the process.

Therefore 25 K out of that 75 K free partition is allocated to the process, and the remaining 50 K is
produced as a hole.

2. 50 K requirement

To satisfy this need, we again scan the whole list and find that a 50 K partition is free, which is an
exact match for the request. Therefore, it is allocated to the process.
3. 100 K requirement

The 175 K partition is the smallest free partition that can hold the 100 K request. The algorithm scans
the whole list and then allocates 100 K out of the 175 K fifth free partition.

4. 75 K requirement

The 75 K requirement gets 75 K of space from the 6th free partition, but the algorithm scans the
whole list while making this decision.

Therefore, if you ask which algorithm performs in the more optimal way here, it is the First Fit
algorithm.

Therefore, the answer in this case is A.

15: Considering a system with five processes P0 through P4 and three resources of type A, B, C.
Resource type A has 10 instances, B has 5 instances and type C has 7 instances. Suppose at time
t0 following snapshot of the system has been taken:

1. What will be the content of the Need matrix?


2. Is the system in a safe state? If Yes, then what is the safe sequence?
3. What will happen if process P1 requests one additional instance of resource type
16:
Process   Max (A B C D)   Allocation (A B C D)   Available (A B C D)
P0        6 0 1 2         4 0 0 1                3 2 1 1
P1        2 7 5 0         1 1 0 0
P2        2 3 5 6         1 2 5 4
P3        1 6 5 3         0 6 3 3
P4        1 6 5 6         0 2 1 2

Using Banker’s algorithm, answer the following questions:


1)  What are the contents of need matrix?
2)  Find if the system is in safe state? If it is, find the safe sequence?
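No worked answer is given in these notes, but a minimal sketch of the Banker's safety algorithm, filled in with the Question 16 data, is shown below. It computes Need = Max - Allocation (which works out to P0 (2,0,1,1), P1 (1,6,5,0), P2 (1,1,0,2), P3 (1,0,2,0), P4 (1,4,4,4)) and then searches for a safe sequence; with this data it should report one such as P0, P2, P3, P4, P1.

    #include <stdio.h>

    #define N 5   /* processes */
    #define M 4   /* resource types A, B, C, D */

    int main(void)
    {
        int max[N][M]   = {{6,0,1,2},{2,7,5,0},{2,3,5,6},{1,6,5,3},{1,6,5,6}};
        int alloc[N][M] = {{4,0,0,1},{1,1,0,0},{1,2,5,4},{0,6,3,3},{0,2,1,2}};
        int avail[M]    = {3,2,1,1};
        int need[N][M], finished[N] = {0}, sequence[N], count = 0;

        /* Need = Max - Allocation */
        for (int i = 0; i < N; i++)
            for (int j = 0; j < M; j++)
                need[i][j] = max[i][j] - alloc[i][j];

        /* repeatedly pick any unfinished process whose Need <= Available */
        int progress = 1;
        while (count < N && progress) {
            progress = 0;
            for (int i = 0; i < N; i++) {
                if (finished[i]) continue;
                int ok = 1;
                for (int j = 0; j < M; j++)
                    if (need[i][j] > avail[j]) { ok = 0; break; }
                if (ok) {
                    for (int j = 0; j < M; j++)
                        avail[j] += alloc[i][j];   /* process runs and releases its resources */
                    finished[i] = 1;
                    sequence[count++] = i;
                    progress = 1;
                }
            }
        }

        if (count == N) {
            printf("System is in a safe state. Safe sequence:");
            for (int i = 0; i < N; i++) printf(" P%d", sequence[i]);
            printf("\n");
        } else {
            printf("System is NOT in a safe state.\n");
        }
        return 0;
    }
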
17: State critical section problem? Discuss the solutions to solve the critical section problem.
18: List the various services provided by operating systems and explain.
Here is a list of common services offered by almost all operating systems:

 User Interface
 Program Execution
 File system manipulation
 Input / Output Operations
 Communication
 Resource Allocation
 Error Detection
 Accounting
 Security and protection

This chapter gives a brief description of the services an operating system usually provides to users and
to the programs running within it.

User Interface of Operating System


The user interface of an operating system usually comes in one of three forms or types. Depending on
the interface, the types are further subdivided as follows:

 Command line interface


 Batch based interface
 Graphical User Interface

Let's get to know in brief about each of them.

The command line interface (CLI) deals with text commands and a technique for entering those
commands. In the batch interface (BI), commands and directives that manage those commands are
entered into files, and those files are executed. Another type is the graphical user interface (GUI),
which is a window system with a pointing device (such as a mouse or trackball) to point to the I/O,
menu-driven choices to pick from a number of lists, and a keyboard to enter text.

Program Execution in Operating System


The operating system must have the capability to load a program into memory and execute that
program. Furthermore, the program must be able to end its execution, either normally or abnormally /
forcefully.

File System Manipulation in Operating System


Programs need to read and write files and directories. The file-handling portion of the operating
system also allows users to create and delete files by a specific name along with an extension, search
for a given file, and/or list file information. Some programs also comprise permissions management
for allowing or denying access to files or directories based on file ownership.

I/O operations in Operating System


A program which is currently executing may require I/O, which may involve a file or some other I/O
device. For efficiency and protection, users cannot govern the I/O devices directly. So the OS provides
a means to perform I/O operations, which means read or write operations on any file or device.

Communication System of Operating System


A process may need to exchange information with another process. Processes executing on the same
computer system or on different computer systems can communicate using operating system support.
Communication between two processes can be done using shared memory or via message passing.

Resource Allocation of Operating System


When multiple jobs are running concurrently, resources must be allocated to each of them. Resources
can be CPU cycles, main memory storage, file storage, and I/O devices. CPU scheduling routines are
used here to establish how best the CPU can be used.
Error Detection
Errors may occur within CPU, memory hardware, I/O devices and in the user program. For each type
of error, the OS takes adequate action for ensuring correct and consistent computing.

Accounting
This service of the operating system keeps track of which users are using how much and what kinds
of computer resources have been used for accounting or simply to accumulate usage statistics.

Protection and Security


Protection involves ensuring that all access to system resources occurs in a controlled manner. To
make a system secure, the user needs to authenticate himself or herself to the system before using it
(usually via a login ID and password).

19: Consider the following page reference string: 1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7, 6, 3, 2, 1, 2, 3, 6.


Identify the number of page faults that would occur for the following replacement algorithms,
assuming one, two, three, four, five, six, or seven frames. Remember that all frames are initially
empty, so your first unique pages will all cost one fault each.
 a. LRU replacement
 b. FIFO replacement 
 c. Optimal replacement
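No worked answer is given in these notes, but the fault counting can be sketched as follows. The code counts FIFO faults for the reference string above for one to seven frames; LRU and Optimal differ only in how the victim frame is chosen.

    #include <stdio.h>

    /* count page faults for FIFO replacement with a given number of frames */
    int fifo_faults(const int *ref, int n, int nframes)
    {
        int frames[8], next = 0, faults = 0;      /* supports up to 8 frames here */
        for (int i = 0; i < nframes; i++) frames[i] = -1;

        for (int i = 0; i < n; i++) {
            int hit = 0;
            for (int j = 0; j < nframes; j++)
                if (frames[j] == ref[i]) { hit = 1; break; }
            if (!hit) {
                frames[next] = ref[i];             /* evict the oldest resident page */
                next = (next + 1) % nframes;
                faults++;
            }
        }
        return faults;
    }

    int main(void)
    {
        int ref[] = {1,2,3,4,2,1,5,6,2,1,2,3,7,6,3,2,1,2,3,6};
        int n = sizeof(ref) / sizeof(ref[0]);
        for (int f = 1; f <= 7; f++)
            printf("%d frame(s): %d FIFO faults\n", f, fifo_faults(ref, n, f));
        return 0;
    }
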
20: Write about the various CPU scheduling algorithms in detail.
CPU Scheduling is a process of determining which process will own CPU for execution while
another process is on hold. The main task of CPU scheduling is to make sure that whenever the CPU
remains idle, the OS at least select one of the processes available in the ready queue for execution.
The selection process will be carried out by the CPU scheduler. It selects one of the processes in
memory that are ready for execution.

Types of CPU Scheduling


Here are two kinds of Scheduling methods:
Preemptive Scheduling
In Preemptive Scheduling, the tasks are mostly assigned with their priorities. Sometimes it is
important to run a task with a higher priority before another lower priority task, even if the lower
priority task is still running. The lower priority task holds for some time and resumes when the higher
priority task finishes its execution.

Non-Preemptive Scheduling
In this type of scheduling method, the CPU is allocated to a specific process. The process that
keeps the CPU busy will release the CPU either by switching context or terminating. It is the only
method that can be used for various hardware platforms. That’s because it doesn’t need special
hardware (for example, a timer) like preemptive scheduling.

First Come First Serve


First Come First Serve is the full form of FCFS. It is the easiest and most simple CPU scheduling
algorithm. In this type of algorithm, the process which requests the CPU gets the CPU allocation first.
This scheduling method can be managed with a FIFO queue.

As the process enters the ready queue, its PCB (Process Control Block) is linked with the tail of the
queue. So, when CPU becomes free, it should be assigned to the process at the beginning of the
queue.

Characteristics of FCFS method:

 It is a non-preemptive scheduling algorithm.


 Jobs are always executed on a first-come, first-serve basis
 It is easy to implement and use.
 However, this method is poor in performance, and the general wait time is quite high.

Shortest Remaining Time


The full form of SRT is Shortest Remaining Time. It is also known as SJF preemptive scheduling. In
this method, the CPU is allocated to the process that is closest to completion. This method prevents a
newer ready-state process from holding up the completion of an older process.

Characteristics of SRT scheduling method:


 This method is mostly applied in batch environments where short jobs are required to be
given preference.
 This is not an ideal method to implement it in a shared system where the required CPU time is
unknown.
 Each process is associated with the length of its next CPU burst; the operating system uses
these lengths to schedule the process with the shortest possible time.

Priority Based Scheduling


Priority scheduling is a method of scheduling processes based on priority. In this method, the
scheduler selects the tasks to work as per the priority.

Priority scheduling also helps OS to involve priority assignments. The processes with higher priority
should be carried out first, whereas jobs with equal priorities are carried out on a round-robin or FCFS
basis. Priority can be decided based on memory requirements, time requirements, etc.

Round-Robin Scheduling
Round robin is the oldest and simplest scheduling algorithm. The name of this algorithm comes from
the round-robin principle, where each person gets an equal share of something in turn. It is mostly
used for scheduling in multitasking systems. This method provides starvation-free execution of
processes (a short sketch follows the characteristics list below).

Characteristics of Round-Robin Scheduling

 Round robin is a hybrid model which is clock-driven


 A time slice (quantum) is assigned to each task for processing; it should be kept small,
though it may vary for different processes.
 It is a real-time approach that responds to an event within a specific time limit.
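As a rough illustration, the sketch below drives a round-robin dispatcher with the burst times from Question 32 (quantum = 10 ms), assuming all processes arrive at time 0; it simply prints the resulting Gantt chart.

    #include <stdio.h>

    int main(void)
    {
        /* burst times from Question 32, all processes assumed to arrive at time 0 */
        const char *id[] = { "P1", "P2", "P3", "P4", "P5" };
        int remaining[]  = { 10, 29, 3, 7, 12 };
        int n = 5, quantum = 10, time = 0, left = n;

        printf("Gantt chart: ");
        while (left > 0) {
            for (int i = 0; i < n; i++) {
                if (remaining[i] == 0) continue;
                int slice = remaining[i] < quantum ? remaining[i] : quantum;
                printf("| %s %d-%d ", id[i], time, time + slice);
                time += slice;                 /* run for one quantum (or less) */
                remaining[i] -= slice;
                if (remaining[i] == 0) left--; /* process finishes */
            }
        }
        printf("|\n");
        return 0;
    }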

Shortest Job First


SJF (Shortest Job First) is a scheduling algorithm in which the process with the shortest execution
time is selected for execution next. This scheduling method can be preemptive or non-preemptive. It
significantly reduces the average waiting time for other processes awaiting execution.

Characteristics of SJF Scheduling

 Each job is associated with the unit of time it requires to complete.


 In this method, when the CPU is available, the next process or job with the shortest
completion time will be executed first.
 It is commonly implemented with a non-preemptive policy.
 This algorithm method is useful for batch-type processing, where waiting for jobs to complete
is not critical.
 It improves job output by running shorter jobs first, which mostly have a shorter turnaround
time.
21: Discuss the latest advances in the operating system for various devices.
What do you mean? The whole semester is over and we never even found out about the advancements in OS.
22: Find the need Matrix using Banker’s algorithms find the safe state sequence.
The question is incomplete; I don't know the rest.
23: Examine the following with suitable example and diagram.
1. How to find the advanced OS research article  
2. Write difference between Windows and Linux.

24: Summarize a process? Classify different process states in synchronization.


A process is a program in execution, which forms the basis of all computation. A process is not the
same as the program code, but much more than that. A process is an 'active' entity, as opposed to the
program, which is considered a 'passive' entity. Attributes held by the process include hardware state,
memory, CPU, etc.

The different Process States

Processes in the operating system can be in any of the following states:

 NEW- The process is being created.

 READY- The process is waiting to be assigned to a processor.

 RUNNING- Instructions are being executed.

 WAITING- The process is waiting for some event to occur(such as an I/O completion or
reception of a signal).

 TERMINATED- The process has finished execution.


25: Show the structure of OS and also describe the layered structure of OS.
An operating system can be implemented with the help of various structures. The structure of the OS
depends mainly on how the various common components of the operating system are interconnected
and melded into the kernel. Depending on this, we have the following structures of the operating system:
Simple structure:
Such operating systems do not have a well-defined structure and are small, simple, and limited
systems. The interfaces and levels of functionality are not well separated. MS-DOS is an example of
such an operating system. In MS-DOS, application programs are able to access the basic I/O routines.
In these types of operating systems, if one of the user programs fails, the entire system crashes.
Micro-kernel:
This structure designs the operating system by removing all non-essential components from the kernel
and implementing them as system and user programs. The result is a smaller kernel called the micro-
kernel.
The advantage of this structure is that new services are added to user space and do not require the
kernel to be modified. Thus it is more secure and reliable: if a service fails, the rest of the operating
system remains untouched. Mac OS is an example of this type of OS.

Layered structure:
The OS is broken into a number of pieces, which retains much more control over the system. In this
structure the OS is broken into a number of layers (levels). The bottom layer (layer 0) is the hardware
and the topmost layer (layer N) is the user interface. These layers are designed so that each layer uses
the functions of the lower-level layers only. This simplifies debugging: if the lower-level layers have
already been debugged and an error occurs, the error must be in the layer currently being debugged,
since the lower-level layers are known to be correct.
The main disadvantage of this structure is that at each layer the data needs to be modified and passed
on, which adds overhead to the system. Moreover, careful planning of the layers is necessary, as a
layer can use only lower-level layers. UNIX is an example of this structure.
Advantages of Layered structure:

 Layering makes it easier to enhance the operating system as implementation of a layer can be
changed easily without affecting the other layers.
 It is very easy to perform debugging and system verification.
Disadvantages of Layered structure:

 In this structure the application performance is degraded as compared to simple structure.


 It requires careful planning for designing the layers as higher layers use the functionalities of
only the lower layers.
26: List the various objectives and functions of Operating Systems?
An operating system is a program on which application programs are executed and acts as a
communication bridge (interface) between the user and the computer hardware.

The main task an operating system carries out is the allocation of resources and services, such as the
allocation of memory, devices, processors, and information. The operating system also includes
programs to manage these resources, such as a traffic controller, a scheduler, memory management
module, I/O programs, and a file system.

Important functions of an operating System:

Security –
The operating system uses password protection and other similar techniques to protect user data. It
also prevents unauthorized access to programs and user data.

Control over system performance –


The OS monitors overall system health to help improve performance. It records the response time
between service requests and system responses to have a complete view of system health. This can
help improve performance by providing the information needed to troubleshoot problems.

Job accounting –
The operating system keeps track of the time and resources used by various tasks and users; this
information can be used to track resource usage for a particular user or group of users.
Error detecting aids –
The operating system constantly monitors the system to detect errors and avoid the malfunctioning of
a computer system.

Coordination between other software and users –


Operating systems also coordinate and assign interpreters, compilers, assemblers, and other software
to the various users of the computer systems.

Memory Management –
The operating system manages the Primary Memory or Main Memory. Main memory is made up of a
large array of bytes or words where each byte or word is assigned a certain address. Main memory is
fast storage and it can be accessed directly by the CPU. For a program to be executed, it should be
first loaded in the main memory. An Operating System performs the following activities for memory
management:
It keeps track of primary memory, i.e., which bytes of memory are used by which user program, which
memory addresses have already been allocated, and which have not yet been used. In
multiprogramming, the OS decides the order in which processes are granted access to memory, and for
how long. It allocates memory to a process when the process requests it and deallocates the memory
when the process has terminated or is performing an I/O operation.

Processor Management –
In a multi-programming environment, the OS decides the order in which processes have access to the
processor, and how much processing time each process has. This function of OS is called process
scheduling. An Operating System performs the following activities for processor management.
Keeps track of the status of processes. The program that performs this task is known as a traffic
controller. Allocates the CPU (processor) to a process, and de-allocates the processor when a process
is no longer required.

Device Management –
An OS manages device communication via their respective drivers. It performs the following
activities for device management: it keeps track of all devices connected to the system; designates a
program, known as the Input/Output controller, responsible for every device; decides which process
gets access to a certain device and for how long; allocates devices in an effective and efficient way;
and deallocates devices when they are no longer required.

File Management –
A file system is organized into directories for efficient or easy navigation and usage. These directories
may contain other directories and other files. An Operating System carries out the following file
management activities. It keeps track of where information is stored, user access settings and status of
every file, and more… These facilities are collectively known as the file system.

27: Explain Different types of system calls in detail


Here are the five types of System Calls in OS:

 Process Control
 File Management
 Device Management
 Information Maintenance
 Communications

Types of System calls in OS


Process Control
These system calls perform tasks such as process creation and process termination.

Functions:

 End and Abort


 Load and Execute
 Create Process and Terminate Process
 Wait and Signal Event
 Allocate and free memory

File Management
File management system calls handle file-manipulation jobs such as creating, reading, and writing
files.

Functions:

 Create a file
 Delete file
 Open and close file
 Read, write, and reposition
 Get and set file attributes

Device Management
Device management does the job of device manipulation like reading from device buffers, writing
into device buffers, etc.

Functions:

 Request and release device


 Logically attach/ detach devices
 Get and Set device attributes

Information Maintenance
It handles information and its transfer between the OS and the user program.

Functions:

 Get or set time and date


 Get process and device attributes

Communication:
These types of system calls are specially used for interprocess communications.

Functions:

 Create, delete communications connections


 Send, receive message
 Help OS to transfer status information
 Attach or detach remote devices

28: Define Batch processing operating system with example.


Batch processing systems were introduced to avoid the problems of early systems, whose main
drawback was long setup time. The setup-time problem was reduced by processing jobs in batches,
known as batch processing. In this approach, similar jobs were submitted to the CPU for processing
and were run together.

The main function of a batch processing system is to automatically keep executing the jobs in a batch.
This important task of a batch processing system is performed by the 'batch monitor', which resides
in the low end of main memory.
Types of Batch Operating System

There are mainly two types of batch operating system. These are as follows:
 Simple batched system
 Multi-programmed batched system
Simple Batched System

 Use of high-level languages, magnetic tapes.


 Jobs are batched together by type of languages.
 An operator was hired to perform the repetitive tasks of loading jobs, starting the computer,
and collecting the output (Operator-driven Shop).
 It was not feasible for users to inspect memory or patch programs directly.

Operation of Simple Batch Systems :

 The user submits a job (written on cards or tape) to a computer operator.


 The computer operator places a batch of several jobs on an input device.
 A special program, the monitor, manages the execution of each program in the batch.
 Monitor utilities are loaded when needed.
 Resident monitor is always in main memory and available for execution.

Multi-programmed batched system

 In this the operating system picks up and begins to execute one of the jobs from memory.
 Once this job needs an I/O operation operating system switches to another job (CPU and OS
always busy).
 Jobs in the memory are always less than the number of jobs on disk(Job Pool).
 If several jobs are ready to run at the same time, then the system chooses which one to run
through the process of CPU Scheduling.
 In a non-multiprogrammed system, there are moments when the CPU sits idle and does no
work.
 In a multiprogramming system, the CPU never sits idle and keeps on processing.
Time Sharing Systems are very similar to Multiprogramming batch systems. In fact time sharing
systems are an extension of multiprogramming systems.
In Time sharing systems the prime focus is on minimizing the response time, while in
multiprogramming the prime focus is to maximize the CPU usage.
Example:
Transactions
A bank that processes transactions such as international money transfers after-hours. After-hours
batch processing was once extremely common in the banking industry. Many firms are slowly
shifting towards more modern techniques such as workload automation that allows asynchronous
processing to be efficiently run and managed on cloud infrastructure.
Reporting
A manufacturer produces a daily operational report for a production line that is run in a batch window
and delivered to managers in the early morning.
Billing
A telecom company runs a monthly batch job to process call data records that include the details of
millions of phone calls to calculate charges.
29: Explain file management concept, also define the concept of caching and buffering.

A file is a collection of specific information stored in the memory of a computer system. File
management is defined as the process of manipulating files in a computer system; it includes the
processes of creating, modifying, and deleting files.

The following are some of the tasks performed by file management of operating system of any
computer system:

1. It helps to create new files in computer system and placing them at the specific locations.
2. It helps in easily and quickly locating these files in computer system.
3. It makes the process of sharing of the files among different users very easy and user friendly.
4. It helps to store the files in separate folders known as directories. These directories help
users to search for files quickly or to manage the files according to their types or uses.
5. It helps the user to modify the data of files or to modify the name of the file in the directories.

Caching:

Cache is a type of memory that is used to increase the speed of data access. Normally, the data
required for any process resides in the main memory. However, it is transferred to the cache memory
temporarily if it is used frequently enough. The process of storing and accessing data from a cache is
known as caching.

Advantages of Cache Memory


Some of the advantages of cache memory are as follows −

 Cache memory is faster than main memory as it is located on the processor chip itself. Its
speed is comparable to the processor registers and so frequently required data is stored in the
cache memory.
 The memory access time is considerably less for cache memory as it is quite fast. This leads
to faster execution of any process.
 The cache memory can store data temporarily as long as it is frequently required. After the
use of any data has ended, it can be removed from the cache and replaced by new data from
the main memory.

Buffering:
Buffering is the act of temporarily storing data in a buffer.
A buffer is a memory area that stores data being transferred between two devices or between a device
and an application.
Uses of I/O Buffering :

 Buffering is done to deal effectively with a speed mismatch between the producer and
consumer of the data stream.
 A buffer is created in main memory to accumulate the bytes received from, for example, a modem.
 After receiving the data in the buffer, the data get transferred to disk from buffer in a single
operation.
 This process of data transfer is not instantaneous, therefore the modem needs another buffer
in order to store additional incoming data.
30: Consider a disk queue with requests for I/O to blocks on cylinders 98, 183, 41, 122, 14, 124,
65, 67.
The FCFS, SSTF, SCAN, C-SCAN, LOOK, C-LOOK scheduling algorithm is used. The head is
initially at cylinder number 53. The cylinders are numbered from 0 to 199. What is the total
head movement (in number of cylinders) incurred while servicing these requests?
No no no, this is a very lovely question, but it is not to be attempted here.
31: Estimate about advantages and disadvantages of paging. Why paging required with suitable
example? How do you relate the Paging and Segmentation?
Advantages of Paging

 The paging technique is easy to implement.


 The paging technique makes efficient utilization of memory.
 The paging technique supports time-sharing system.
 The paging technique supports non-contiguous memory allocation
Disadvantages of Paging

 Paging may encounter a problem called page break.


 When the number of pages in virtual memory is quite large, maintaining page table become
hectic.
Differences Between Paging and Segmentation

1. The basic difference between paging and segmentation is that a page is always of a fixed
block size, whereas a segment is of variable size.
2. Paging may lead to internal fragmentation: since a page has a fixed block size, a process
may not use the entire block, which leaves an internal fragment in memory. Segmentation
may lead to external fragmentation, as memory becomes filled with variable-sized blocks.
3. In paging, the user provides only a single integer as the address, which is divided by the
hardware into a page number and an offset. In segmentation, the user specifies the address
as two quantities, i.e. a segment number and an offset.
4. The size of a page is decided or specified by the hardware, whereas the size of a segment is
specified by the user.
5. In paging, the page table maps the logical address to the physical address, and it contains
the base address of each page stored in the frames of physical memory. In segmentation,
the segment table maps the logical address to the physical address, and it contains the
segment number and offset (segment limit).

Need for Paging


Let us consider a process P1 of size 2 MB and a main memory divided into three partitions. Of the
three partitions, two are holes of size 1 MB each.

P1 needs 2 MB space in the main memory to be loaded. We have two holes of 1 MB each but they are
not contiguous.

Although there is 2 MB of space available in main memory in the form of those holes, it remains
unusable unless it becomes contiguous. This is a serious problem to address.

We need to have some kind of mechanism that can store one process at different locations in memory.

The idea behind paging is to divide the process into pages so that we can store them in memory at
different holes.

31: Consider a main memory with five page frames and the following sequence of page
references: 3, 8, 2, 3, 9, 1, 6, 3, 8, 9, 3, 6, 2, 1, 3. solve this reference string with respect to page
replacement policies First-In-First-out (FIFO) and Least Recently Used (LRU)?
32: Consider the following five processes, with the length of the CPU burst time given in
milliseconds.

Process   Burst time
P1        10
P2        29
P3        3
P4        7
P5        12

Consider the First come First serve (FCFS), Non Preemptive Shortest Job First(SJF), Round
Robin(RR) (quantum=10ms) scheduling algorithms. Illustrate the scheduling using Gantt chart
