OS 2 Marks Questions


Operating System

Unit-I

Operating System and Process Management

2 Marks:

1. Define Operating System.

An operating system is a software component of a computer system
that is responsible for the management of the computer's resources
and their various activities.
It hosts the applications that run on the computer and handles
the operation of the computer hardware.

2. Write the goals of Operating System.

The primary goal of an operating system is convenience for the user:
it should make the computer system easy to use.
A secondary goal is the efficient operation of the computer system.

3. List out the types of Operating System.

Real-time operating system


Multi-user and Single-user operating system
Multi-tasking and Single-tasking operating system
Distributed operating system
Embedded system

4. What is referred to as 'Layered approach'?

In the layered approach, the operating system is divided into a number
of layers. The system is easier to debug and modify, because changes
affect only limited portions of the code, and the programmer does not
have to know the details of the other layers.
Two special layers (levels):
1. Bottom layer (layer 0) - hardware
2. Highest layer (layer N) - user interface.
5. Define Error Detection.

An error in one part of the system may cause malfunctioning of the
complete system. To avoid such situations, the operating system
constantly monitors the system in order to detect errors.

6. Define Process.

A process or task is a portion of a program in some stage of


execution.
The process priority is used to determine how often the process
receives processor time.

7. Name the Process States.

New
Ready
Running
Waiting (Blocked)
Terminated

8. What is meant by long-term and short-term schedulers?

The long-term scheduler (or job scheduler) selects processes


from the pool and loads them into memory for execution.
The short-term scheduler (or CPU scheduler) selects from
among the processes that are ready to execute, and allocates the
CPU to one of them.

9. Define Context Switching

Switching the CPU to another process requires saving the state of


the old process and loading the saved state for the new process.
This task is known as a context-switch.

10. Define Threads.

A thread of execution is the smallest sequence of programmed


instructions that can be managed independently by a scheduler,
which is typically a part of the operating system.
11. Define pre-emptive and non-pre-emptive scheduling.

Scheduling is pre-emptive if, once a process has been given the
CPU, the CPU can be taken away from it.
Scheduling is non-pre-emptive if, once a process has been given
the CPU, the CPU cannot be taken away from that process.

12. List out the scheduling algorithms.

First In First Out (FIFO)


Round Robin
Priority Based Scheduling
Shortest Job First
Multi-level feedback queue

13. Define Dispatcher.

The dispatcher is the part of the scheduler that performs context


switching and changes the flow of execution.
Depending on how the kernel is first entered, dispatching can
happen differently.

14. List out the 5 basic system calls provided by UNIX for file I/O.

int open()
int close()
ssize_t read()
ssize_t write()
off_t lseek()
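These calls can be exercised from Python, whose os module wraps the same UNIX system calls. A minimal sketch (the file name and its contents are made up for illustration):

```python
import os
import tempfile

# Scratch file in a temporary directory (hypothetical path/contents).
path = os.path.join(tempfile.mkdtemp(), "demo.txt")

fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o644)  # wraps open(2)
os.write(fd, b"hello, syscalls")                    # wraps write(2)
os.lseek(fd, 7, os.SEEK_SET)                        # wraps lseek(2)
data = os.read(fd, 8)                               # wraps read(2)
os.close(fd)                                        # wraps close(2)

print(data)  # b'syscalls'
```

Note that os.read returns the bytes read rather than filling a caller-supplied buffer, but the underlying system calls are the five listed above.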

15. List out the scheduling criteria.

Fairness
Policy Enforcement
CPU Utilization (Response time, Turnaround time, Throughput)
5 Marks
1. Explain Operating System Services.
Program Execution
I/O operations
File System Manipulation
Communications
Error Detection
File Management
I/O System Management
Secondary Storage Management
Networking
Protection System.

Program execution

A process includes the complete execution context (code to execute, data


to manipulate, registers, OS resources in use). Following are the major activities
of an operating system with respect to program management −

• Loads a program into memory.


• Executes the program.
• Handles program's execution.
• Provides a mechanism for process synchronization.

I/O Operation

An I/O subsystem comprises of I/O devices and their corresponding


driver software. Drivers hide the peculiarities of specific hardware devices from
the users.

• I/O operation means read or write operation with any file or any
specific I/O device.
• Operating system provides the access to the required I/O device
when required.

File system manipulation

A file represents a collection of related information. Computers can store files


on the disk (secondary storage), for long-term storage purpose.
• Program needs to read a file or write a file.
• The operating system gives the permission to the program for
operation on file.
• Permission varies from read-only, read-write, denied and so on.

Communication

The OS handles routing and connection strategies, and the problems of


contention and security. Following are the major activities of an operating
system with respect to communication −

• Two processes often require data to be transferred between them


• Both the processes can be on one computer or on different
computers, but are connected through a computer network.
• Communication may be implemented by two methods, either by
Shared Memory or by Message Passing.

Error handling

Errors can occur anytime and anywhere. An error may occur in CPU, in
I/O devices or in the memory hardware. Following are the major activities of an
operating system with respect to error handling −

• The OS constantly checks for possible errors.


• The OS takes an appropriate action to ensure correct and consistent
computing.

Resource Management

In case of multi-user or multi-tasking environment, resources such as main


memory, CPU cycles and files storage are to be allocated to each user or job.

Protection

Protection refers to a mechanism or a way to control the access of programs,


processes, or users to the resources defined by a computer system. Following
are the major activities of an operating system with respect to protection −

• The OS ensures that all access to system resources is controlled.


• The OS ensures that external I/O devices are protected from invalid
access attempts.
• The OS provides authentication features for each user by means of
passwords.
2. Explain Process States.

New - The process is in the stage of being created.


Ready - The process has all the resources available that it needs to
run, but the CPU is not currently working on this process's
instructions.
Running - The CPU is working on this process's instructions.
Waiting - The process cannot run at the moment, because it is waiting
for some resource to become available or for some event to occur.
Terminated - The process has completed.

3. Describe the Process Control Block.

There is a Process Control Block for each process, enclosing all the
information about the process. It is a data structure, which contains the
following:

Process State - It can be running, waiting etc.


Process ID and parent process ID.
CPU registers and Program Counter. Program Counter holds the
address of the next instruction to be executed for that process.
CPU Scheduling information - Such as priority information and pointers
to scheduling queues.
Memory Management information - Eg. page tables or segment tables.
Accounting information - user and kernel CPU time consumed, account
numbers, limits, etc.
I/O Status information - Devices allocated, open file tables, etc.
4. Explain the Criteria that are used for comparing CPU-Scheduling
algorithms.

CPU utilization

To make the best use of the CPU and not waste any CPU cycle, the CPU
should be kept working most of the time (ideally 100% of the time).

Throughput

It is the total number of processes completed per unit time, that is,
the total amount of work done in a unit of time.

Turnaround time

It is the amount of time taken to execute a particular process, i.e. The


interval from time of submission of the process to the time of completion
of the process (Wall clock time).

Waiting time

The sum of the periods a process spends waiting in the ready queue to
get control of the CPU.

Load average

It is the average number of processes residing in the ready queue,
waiting for their turn to get the CPU.

Response time

Amount of time it takes from when a request was submitted until the first
response is produced. Remember, it is the time till the first response and
not the completion of process execution (final response).
5. Explain Round Robin Algorithm.

A fixed time, called a quantum, is allotted to each process for execution.
Once a process has executed for the given time period, it is pre-empted
and another process executes for its time period.
Context switching is used to save the states of pre-empted processes.
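The behaviour can be sketched with a small simulation; the burst times and quantum below are hypothetical values, not from the text:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate RR scheduling; return each process's completion time."""
    queue = deque(enumerate(bursts))     # (pid, remaining burst) in order
    time, completion = 0, {}
    while queue:
        pid, remaining = queue.popleft()
        run = min(quantum, remaining)    # run for at most one quantum
        time += run
        if remaining > run:
            # Pre-empted: its saved state goes to the back of the queue.
            queue.append((pid, remaining - run))
        else:
            completion[pid] = time
    return completion

# Hypothetical bursts of 5, 3 and 1 time units with a quantum of 2.
print(round_robin([5, 3, 1], quantum=2))
```

The shortest job (P2) finishes early even though it arrived last in the queue, which is why RR gives good response time for short, interactive processes.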

6. Describe the concept Virtual Machines.

A virtual machine is a program that acts as a virtual computer. It runs on
your current operating system – the “host” operating system – and
provides virtual hardware to “guest” operating systems.
The guest operating systems run in windows on your host operating
system, just like any other program on your computer.
Virtual machines provide their own virtual hardware, including a virtual
CPU, memory, hard drive, network interface, and other devices.
The virtual hardware devices provided by the virtual machine are mapped
to real hardware on your physical machine.
For example, a virtual machine’s virtual hard disk is stored in a file
located on your hard drive.
The main advantages of virtual machines:

• Multiple OS environments can exist simultaneously on the same


machine, isolated from each other;
• Virtual machine can offer an instruction set architecture that differs
from real computer's;
• Easy maintenance, application provisioning, availability and
convenient recovery.

The main disadvantages:

• When multiple virtual machines are simultaneously running on a host
computer, each virtual machine may introduce unstable performance,
which depends on the workload placed on the system by the other
running virtual machines;
• A virtual machine is not as efficient as a real one when accessing the
hardware.

10 Marks

1. Discuss the Various issues related with multithread programs.

Thread is an execution unit which consists of its own program counter,


a stack, and a set of registers. Threads are also known as Lightweight
processes.

Types of Thread

There are two types of threads:

• User Threads
• Kernel Threads

User threads are above the kernel and without kernel support. These are the
threads that application programmers use in their programs.
Kernel threads are supported within the kernel of the OS itself.

Multithreading Models

The user threads must be mapped to kernel threads, by one of the following
strategies.

• Many-To-One Model
• One-To-One Model
• Many-To-Many Model

Many-To-One Model

• In the many-to-one model, many user-level threads are all mapped onto a
single kernel thread.
• Thread management is handled by the thread library in user space, which
is efficient.
• However, if one thread makes a blocking system call, the entire process
blocks.

One-To-One Model

• The one-to-one model creates a separate kernel thread to handle each and
every user thread.
• Most implementations of this model place a limit on how many threads
can be created.
• Linux and Windows from 95 to XP implement the one-to-one model for
threads.
Many-To-Many Model

• The many-to-many model multiplexes any number of user threads onto


an equal or smaller number of kernel threads, combining the best features
of the one-to-one and many-to-one models.
• Users can create any number of threads.
• A blocking system call by one thread does not block the entire process.
• Processes can be split across multiple processors.

Benefits of Multithreading

1. Responsiveness
2. Resource sharing, hence allowing better utilization of resources.
3. Economy. Creating and managing threads becomes easier.
4. Scalability. One thread runs on one CPU. In Multithreaded processes,
threads can be distributed over a series of processors to scale.
5. Context Switching is smooth
Multithreading Issues

Thread Cancellation
Signal Handling
fork() System Call
Security issues because of extensive sharing of resources between
multiple threads.
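The resource-sharing benefit (all threads of a process share one address space) can be illustrated with Python's threading module; the worker function and thread count here are illustrative, not from the original text:

```python
import threading

results = {}                     # shared by all threads: same address space

def square(n):
    # Each thread writes to its own key, so no lock is needed here.
    results[n] = n * n

threads = [threading.Thread(target=square, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()                     # wait for all lightweight processes

print(results)                   # every thread's result is visible to all
```

The same shared address space is also the source of the security and synchronization issues listed above: when two threads write to the same location, access must be coordinated.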

2. Describe Scheduling Algorithm.


First In First Out (FIFO)
Round Robin
Priority Based Scheduling
Shortest Job First
Multi-level feedback queue

First Come First Serve (FCFS) Scheduling

• Jobs are executed on first come, first serve basis.


• Easy to understand and implement.
• Poor in performance as average wait time is high.

Shortest-Job-First (SJF) Scheduling

• Best approach to minimize waiting time.
• The actual time to be taken by each process must be known to the
processor in advance.
• Impossible to implement exactly in practice, since burst times are not
known in advance.
In Pre-emptive Shortest Job First Scheduling, jobs are put into the ready
queue as they arrive; when a process with a shorter burst time arrives, the
existing process is pre-empted.

Priority Scheduling

• Priority is assigned for each process.


• Process with highest priority is executed first and so on.
• Processes with same priority are executed in FCFS manner.
• Priority can be decided based on memory requirements, time
requirements or any other resource requirement.
Round Robin (RR) Scheduling

• A fixed time, called a quantum, is allotted to each process for execution.
• Once a process has executed for the given time period, it is pre-empted
and another process executes for its time period.
• Context switching is used to save the states of pre-empted processes.

Multilevel Queue Scheduling

• Multiple queues are maintained for processes.


• Each queue can have its own scheduling algorithms.
• Priorities are assigned to each queue.
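For the non-pre-emptive algorithms above, average waiting time can be computed with a short sketch; the burst times are hypothetical, and the comparison shows why SJF beats FCFS on waiting time when all jobs arrive together:

```python
def avg_waiting(bursts):
    """Average waiting time when jobs run in the given order
    (assumes all jobs arrive at time 0)."""
    total_wait, elapsed = 0, 0
    for burst in bursts:
        total_wait += elapsed    # this job waited for everything before it
        elapsed += burst
    return total_wait / len(bursts)

bursts = [6, 8, 3]                        # hypothetical burst times
print(avg_waiting(bursts))                # FCFS: arrival order, waits 0+6+14
print(avg_waiting(sorted(bursts)))        # SJF: shortest first, waits 0+3+9
```

Running the short job first reduces the average wait because fewer jobs sit behind the long ones, which is exactly the poor-performance point made about FCFS.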
Unit-II
Process Synchronization and Deadlock
2 Marks
1. Define Mutual Exclusion.
If a process is executing in its critical section, then no other process can
be executing in its critical section. "One at a time".

2. Define Progress.
If no process is executing in its critical section and there exist some
processes that wish to enter their critical sections, then the selection of
the process that will enter its critical section next cannot be
postponed indefinitely.

3. Define bounded waiting.


A bound must exist on the number of times that other processes are
allowed to enter their critical sections after a process has made a
request to enter its critical section and before that request is granted.

4. What is meant by semaphore?


The effective synchronization tools often used to realise mutual
exclusion in more complex systems are semaphores. A semaphore S
is an integer variable which can be accessed only through two
standard atomic operations:
Wait
Signal
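Python's threading.Semaphore offers the same two atomic operations under the names acquire() (Wait) and release() (Signal). A minimal sketch with an initial value of 1, which makes the semaphore act as a mutual-exclusion lock (thread count and data are illustrative):

```python
import threading

S = threading.Semaphore(1)       # integer value 1: a binary semaphore
shared = []

def critical_work(item):
    S.acquire()                  # Wait: decrement S, block if it would go negative
    shared.append(item)          # critical section: one thread at a time
    S.release()                  # Signal: increment S, waking a blocked waiter

threads = [threading.Thread(target=critical_work, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(shared))  # [0, 1, 2]
```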

5. What is meant by critical region?


A critical region is a section of code that is always executed under
mutual exclusion.

6. Define Deadlock.
Deadlock is a situation in which a set of blocked processes each hold a
resource and wait to acquire a resource held by another process in the set.
7. Define Deadlock Prevention.
Deadlock prevention algorithms ensure that at least one of the following
necessary conditions for deadlock cannot hold:
Mutual Exclusion
Hold and Wait
No Pre-emption
Circular Wait

8. Define Deadlock Avoidance.


This approach to the deadlock problem anticipates a deadlock before it
actually occurs; it guarantees that deadlock cannot occur by denying one
of the necessary conditions of deadlock.

9. List out the four necessary conditions for deadlock.

Mutual Exclusion Condition


Hold and Wait Condition
No Pre-emption Condition
Circular wait Condition

10. List out the Methods for handling deadlock.


Deadlock Prevention
Deadlock Avoidance
Deadlock Detection
Recovery from Deadlock

11. Define Resource Allocation Graph.

Deadlock can be modelled with a directed graph. In a deadlock graph,
vertices represent either processes (circles) or resources (squares). A
process which has acquired a resource is shown with an arrow (edge) from
the resource to the process.
5 Marks

1. Describe Readers - Writers problem.

Readers writer problem is another example of a classic synchronization


problem. There are many variants of this problem, one of which is examined
below.

Problem Statement:

There is a shared resource which is accessed by multiple
processes.
There are two types of processes in this context. They are reader and
writer.
Any number of readers can read from the shared resource
simultaneously, but only one writer can write to the shared resource.
When a writer is writing data to the resource, no other process can
access the resource.
A writer cannot write to the resource if there is a non-zero number of
readers accessing the resource.
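One common sketch of this variant uses a reader count: the first reader locks writers out and the last reader lets them back in. The names and thread counts below are illustrative, not from the original text:

```python
import threading

resource_lock = threading.Lock()   # writers need exclusive access
count_lock = threading.Lock()      # protects the reader count
readers = 0
log = []                           # records accesses, for demonstration

def reader(rid):
    global readers
    with count_lock:
        readers += 1
        if readers == 1:           # first reader locks out writers
            resource_lock.acquire()
    log.append(("read", rid))      # many readers may be here at once
    with count_lock:
        readers -= 1
        if readers == 0:           # last reader lets writers in again
            resource_lock.release()

def writer(wid):
    with resource_lock:            # exclusive: no readers, no other writer
        log.append(("write", wid))

threads = [threading.Thread(target=reader, args=(i,)) for i in range(3)]
threads.append(threading.Thread(target=writer, args=(0,)))
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(log))  # 4
```

This version favours readers: a steady stream of readers can starve the writer, which is why other variants of the problem exist.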

2. Describe Critical Section problem.

A Critical Section is a code segment that accesses shared variables and


has to be executed as an atomic action.
It means that in a group of cooperating processes, at a given point of
time, only one process must be executing its critical section.
If any other process also wants to execute its critical section, it must
wait until the first one finishes.
Solution to Critical Section Problem

A solution to the critical section problem must satisfy the following three
conditions:

Mutual Exclusion

Out of a group of cooperating processes, only one process can be in its


critical section at a given point of time.

Progress

If no process is in its critical section, and one or more processes want to
execute their critical section, then any one of these processes must be
allowed to get into its critical section.

Bounded Waiting

After a process makes a request for getting into its critical section, there is
a limit for how many other processes can get into their critical section,
before this process's request is granted. So after the limit is reached,
system must grant the process permission to get into its critical section.
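The need for mutual exclusion can be demonstrated with a shared counter in Python: the increment is a read-modify-write, so it is guarded by a lock. This is a sketch; the iteration and thread counts are arbitrary:

```python
import threading

counter = 0
lock = threading.Lock()            # guards the critical section

def increment(times):
    global counter
    for _ in range(times):
        with lock:                 # mutual exclusion: one thread at a time
            counter += 1           # critical section (read-modify-write)

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000
```

Without the lock, two threads could read the same old value and both write back the same new value, losing an update.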
3. Explain the Concept of Monitors.

Monitor is one of the ways to achieve Process synchronization.


Monitor is supported by programming languages to achieve mutual
exclusion between processes.
For example, Java synchronized methods; Java provides wait() and
notify() constructs.
It is the collection of condition variables and procedures combined
together in a special kind of module or a package.
The processes running outside the monitor can’t access the internal
variables of the monitor but can call procedures of the monitor.
Only one process at a time can execute code inside monitors.

Syntax of Monitor
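The monitor syntax figure is not reproduced here, but a monitor-style construct can be sketched in Python: one lock guards every method, and Condition objects play the role of condition variables. The bounded buffer and its size are illustrative:

```python
import threading

class BoundedBuffer:
    """Monitor-style class: one lock guards all methods; condition
    variables let callers wait inside the monitor."""
    def __init__(self, size):
        self.items, self.size = [], size
        self.lock = threading.Lock()
        self.not_full = threading.Condition(self.lock)
        self.not_empty = threading.Condition(self.lock)

    def put(self, item):
        with self.lock:                       # enter the monitor
            while len(self.items) >= self.size:
                self.not_full.wait()          # wait() releases the lock
            self.items.append(item)
            self.not_empty.notify()           # notify() wakes one waiter

    def get(self):
        with self.lock:
            while not self.items:
                self.not_empty.wait()
            item = self.items.pop(0)
            self.not_full.notify()
            return item

buf = BoundedBuffer(2)
buf.put("a")
buf.put("b")
print(buf.get(), buf.get())  # a b
```

Only one thread at a time can be inside put() or get(), matching the "only one process at a time can execute code inside monitors" rule above.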

4. Explain deadlock prevention.

Deadlock can be prevented by eliminating any of the four necessary
conditions.

Eliminate Mutual Exclusion
It is not possible to deny mutual exclusion in general, because some
resources, such as the tape drive and printer, are inherently non-shareable.
Eliminate Hold and Wait
1. Allocate all required resources to the process before the start of its
execution; this eliminates the hold-and-wait condition, but it leads
to low device utilization.
2. The process must make a new request for resources only after releasing
its current set of resources. This solution may lead to starvation.
Eliminate No Pre-emption
Pre-empt resources from a process when the resources are required by
another, higher-priority process.
Eliminate Circular Wait
Each resource is assigned a numerical order. A process can request
resources only in increasing order of that numbering.

10 Marks

1. Elaborate the Deadlock avoidance approach.


Most deadlock avoidance algorithms need every process to tell in
advance the maximum number of resources of each type that it may
need.
If a system is already in a safe state, we can try to stay away from an
unsafe state and avoid deadlock.
Deadlocks cannot be avoided in an unsafe state.
A system can be considered to be in a safe state if it is not in a state of
deadlock and can allocate resources up to the maximum each process may
need.
A safe sequence of processes and allocation of resources ensures a safe
state.
Deadlock avoidance algorithms try not to allocate resources to a process
if doing so would put the system in an unsafe state.
Since resource allocation is not done right away in some cases, deadlock
avoidance algorithms also suffer from a low resource utilization problem.

Consider a resource-allocation graph with claim edges (image not reproduced):

If R2 is allocated to P2 and P1 then requests R2, there will be a deadlock.
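The safety check at the heart of this approach (the Banker's algorithm) can be sketched as follows; the allocation and maximum matrices are hypothetical textbook-style values, not from the text above:

```python
def is_safe(available, allocation, maximum):
    """Banker's algorithm safety check: return a safe sequence or None."""
    n, m = len(allocation), len(available)
    need = [[maximum[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]
    work = list(available)
    finished = [False] * n
    sequence = []
    while len(sequence) < n:
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(m)):
                # Pretend process i runs to completion and releases resources.
                work = [work[j] + allocation[i][j] for j in range(m)]
                finished[i] = True
                sequence.append(i)
                break
        else:
            return None          # no runnable process found: state is unsafe
    return sequence

# Hypothetical 5-process, 3-resource snapshot.
alloc = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
maxim = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
print(is_safe([3, 3, 2], alloc, maxim))  # a safe sequence exists
```

A request is granted only if, after pretending to grant it, this check still finds a safe sequence; otherwise the process waits, which is how avoidance trades resource utilization for safety.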

2. Elaborate Methods of handling deadlock.


Deadlock Prevention
Deadlock Avoidance
Deadlock Detection
Recovery from Deadlock

Following three strategies can be used to remove deadlock after its occurrence.

Pre-emption

We can take a resource from one process and give it to another. This will
resolve the deadlock situation, but sometimes it causes problems.

Rollback

In situations where deadlock is a real possibility, the system can


periodically make a record of the state of each process and when
deadlock occurs, roll everything back to the last checkpoint, and restart,
but allocating resources differently so that deadlock does not occur.

Kill one or more processes

This is the simplest way, but it works.


Unit-III
Memory Management
2 Marks
1. Define Memory Management.

The memory management system is one of the most important parts of


the operating system.
Memory management is the act of managing computer memory.

2. Define Address Binding.

Address Binding is the process of mapping of logical addresses to


physical addresses.

3. What is meant by Physical Address and Logical Address?

A logical address is generated by the CPU; the user can view it.
A physical address is the address actually seen by the memory unit; the
user can never view it directly.

4. Define dynamic loading and linking.

Dynamic loading is the process of loading the program from


secondary storage device to main memory at run time.
Instead of loading the entire program into the main memory, a routine
is loaded only when it is called.
Dynamic Linking is the process of linking the library routines at the
time of execution when an executing program needs it.
5. What is meant by Swapping?

Swapping means replacing pages or segments of data in memory.
Swapping is a useful technique that enables a computer to execute
programs and manipulate data files larger than main memory.

6. List out the Memory Allocation Techniques.

Contiguous memory Allocation


Single-Partition Allocation
Multiple-Partition Allocation
Non-Contiguous memory Allocation
Paging
Segmentation

7. Define Fragmentation.

Fragmentation occurs in a dynamic memory allocation system when
many of the free blocks are too small to satisfy a process's
request.

8. Define Paging.

Paging is a memory allocation technique that allows the
physical address space of a process to be non-contiguous.
It thus allows a program to be allocated memory wherever space is
available.

9. List out any two advantages and disadvantages of paging.

Advantages:
Multiprogramming is achieved
Exempted from external fragmentation
Disadvantages:
Additional hardware is required.
Based on the size of the page table, it can be placed either in main
memory or in special register.

10. Define Overlays.

Overlaying is a technique that allows a process to be larger than the
memory space allocated to it.
Only needed instructions and data are kept in memory; when other
instructions and data are needed, they are loaded into the space that was
previously occupied by instructions no longer needed.

5 Marks

1. Elaborate the Contiguous memory allocation.


Memory Protection
Memory Allocation
Single-Partition Allocation
Multiple-Partition Allocation
Fragmentation

The operating system and the user’s processes both must be


accommodated in the main memory.
Hence the main memory is divided into two partitions: at one
partition the operating system resides and at other the user processes
reside. In usual conditions, the several user processes must reside in
the memory at the same time, and therefore, it is important to consider
the allocation of memory to the processes.
The Contiguous memory allocation is one of the methods of memory
allocation. In contiguous memory allocation, when a process requests
for the memory, a single contiguous section of memory blocks is
assigned to the process according to its requirement.
2. Discuss the important aspects associated with address binding.
Definition
Compile Time
Load Time
Execution Time

Address binding relates to how the code of a program is stored in


memory. Programs are written in human-readable text, following a
series of rules set up by the structural requirements of the
programming language, and using keywords that are interpreted
into actions by the computer's Central Processing Unit.

Compile Time

The first type of address binding is compile-time address binding.
This allocates a space in memory to the machine code of a computer
program when the program is compiled to an executable binary
file.

Load Time

If memory allocation is fixed at compile time, then no program can ever
transfer from one computer to another in its compiled state. With
load-time binding, final addresses are instead assigned when the program
is loaded into memory.

Execution Time

Execution time address binding usually applies only to variables in


programs and is the most common form of binding for scripts,
which don't get compiled.

3. Elaborate Physical and Logical Address.


Physical Address
Logical Address
Difference Between Physical and Logical Address

Logical Addresses: Logical addresses are generated by the CPU.
All the compiler does is set up a general sketch of the program layout
and how the image should be laid out, but doesn't assign any real
addresses to it.
When the program is executed the CPU takes this layout image that
the compiler made and hands out some addresses (logical ones) to the
ones generated from the code.
Physical Addresses: The physical addresses are not generated until
after the CPU generates some set of logical addresses (consisting of a
base address and an offset).
The logical addresses go through the MMU or another device and
somewhere along the line the logical addresses are mapped to physical
RAM addresses.

4. Explain Dynamic Loading and Linking.


Dynamic Loading
Dynamic Linking
Advantages
Disadvantages

Dynamic loading means loading the library (or any other binary for that
matter) into the memory during load or run-time.
Dynamic loading can be imagined to be similar to plugins; that is, an exe
can begin executing before the dynamic loading happens (for example,
dynamic loading can be performed using the LoadLibrary call in C or
C++ on Windows).
Dynamic linking refers to the linking that is done during load or run-time
and not when the exe is created.
In case of dynamic linking the linker while creating the exe does minimal
work.
For the dynamic linker to work it actually has to load the libraries too.
Hence it's also called linking loader.
5. Explain Fragmentation.

1. Internal Fragmentation
2. External Fragmentation

Fragmentation occurs in a dynamic memory allocation system when


many of the free blocks are too small to satisfy any request.
External Fragmentation: External Fragmentation happens when a
dynamic memory allocation algorithm allocates some memory and a
small piece is left over that cannot be effectively used.
If too much external fragmentation occurs, the amount of usable memory
is drastically reduced. Total memory space exists to satisfy a request, but
it is not contiguous.
Internal Fragmentation: Internal fragmentation is the space wasted
inside of allocated memory blocks because of restriction on the allowed
sizes of allocated blocks.
Allocated memory may be slightly larger than requested memory; this
size difference is memory internal to a partition that is not being used.
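A quick calculation illustrates internal fragmentation under fixed-size allocation; the block size and request size below are hypothetical:

```python
BLOCK_SIZE = 4096                        # hypothetical fixed allocation unit

def internal_fragmentation(request):
    """Bytes wasted when 'request' is rounded up to whole blocks."""
    blocks = -(-request // BLOCK_SIZE)   # ceiling division
    return blocks * BLOCK_SIZE - request

# A 5000-byte request needs 2 blocks (8192 bytes): 3192 bytes are wasted
# inside the allocation, invisible to other requests.
print(internal_fragmentation(5000))  # 3192
```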

10 Marks

1. Explain the Paging memory management scheme.

A computer can address more memory than the amount physically


installed on the system.
This extra memory is actually called virtual memory, and it is a
section of a hard disk that is set up to emulate the computer's RAM.
Paging technique plays an important role in implementing virtual
memory.
Paging is a memory management technique in which process
address space is broken into blocks of the same size called pages
(size is power of 2, between 512 bytes and 8192 bytes).
The size of the process is measured in the number of pages.
Main memory is divided into small fixed-sized blocks of (physical)
memory called frames. The size of a frame is kept the same as
that of a page, to make optimum use of the main memory and
to avoid external fragmentation.
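Translation of a logical address to a physical address under paging can be sketched as follows; the page size and page-table contents are made-up values:

```python
PAGE_SIZE = 1024                     # hypothetical 1 KB pages (power of 2)
page_table = {0: 5, 1: 2, 2: 7}      # page number -> frame number (made up)

def translate(logical_address):
    page = logical_address // PAGE_SIZE    # high bits: page number
    offset = logical_address % PAGE_SIZE   # low bits: offset within page
    frame = page_table[page]               # page-table lookup
    return frame * PAGE_SIZE + offset      # physical address

# Logical address 2100 = page 2, offset 52 -> frame 7 -> 7*1024 + 52.
print(translate(2100))  # 7220
```

Because the page size is a power of two, the hardware can split the address into page number and offset by simply slicing bits, with no division.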
2. Explain Overlays and Swapping.

Overlaying means "the process of transferring a block of program code


or other data into internal memory, replacing what is already stored".
Overlaying is a technique that allows programs to be larger than the
computer's main memory.
An embedded system would normally use overlays because of the limitation of
physical memory, which is internal memory for a system-on-chip, and the
lack of virtual memory facilities.
Overlaying requires the programmer to split the object code into
multiple completely-independent sections; an overlay manager linked to
the code loads the required overlay dynamically and swaps overlays
when necessary.

Swapping:

A process must be in physical memory for execution.


A process can be swapped temporarily out of physical memory to a
backing store and then brought back into memory for continued
execution.
This applies specifically to multitasking environments where multiple
processes are to be executed at the same time; a medium-term scheduler
(swapper) decides which process to swap to the backing store.
Unit - IV
Virtual Memory Management and File System
2 Marks
1. Define virtual memory.

Virtual memory is a method of executing a program that requires more
memory than is physically available.
That is, it allows the logical memory space of a process to be much larger
than the total physical memory space; the program is broken up into
separate sections that are swapped in and out of memory by the OS.

2. Define demand paging.

Demand paging is the type of swapping in which pages of data are not
copied from disk to memory until they are needed.

3. Define Thrashing.

A process is said to thrash if it spends more time for paging than


executing the process. The high paging activity is called thrashing.

4. What is meant by page-fault?

Page-fault is an interrupt to the software raised by the hardware, when


a program accesses a page that is mapped in address space, but not
loaded into the main memory.

5. What are file attributes?

Information about a file is kept in the directory structure, which is
maintained on the disk.
This information is called the file attributes, from the user's point of view.
6. List out the operations performed on Files.

Create
Open
Write/Read
Reposition (Seek)
Delete (unlink)
Truncate
Close

7. List out the File allocation methods.

Contiguous allocation
Linked allocation
Indexed allocation

8. List out the techniques used to maintain the free space list.

Bit Vector
Linked List
Grouping
Counting

9. Draw the structure of file control block.


10. List out the methods for designing the logical structures of directories.

Single-level directory
Two level directory
Tree Structured directory
Graph directory

11. List out the operations performed on Directories

Search for a File


Create a file
Delete a file
List a directory
Rename a file
Traverse the file system

5 Marks

1. Describe page replacement algorithms.

Page Replacement Algorithm


FIFO Page Replacement Algorithm
Optimal Page Replacement Algorithm
Least Recently Used Algorithm

Page replacement algorithms are the techniques by which an
Operating System decides which memory pages to swap out (write to
disk) when a page of memory needs to be allocated. Page replacement
happens whenever a page fault occurs and no free page can be used for
the allocation, either because no pages are available or because the
number of free pages is lower than required.
First In First Out (FIFO) algorithm: The oldest page in main memory is
the one selected for replacement.
Easy to implement: keep a list, replace pages from the tail, and add new
pages at the head.
Optimal Page algorithm: An optimal page-replacement algorithm has the
lowest page-fault rate of all algorithms. Such an algorithm exists and
has been called OPT or MIN.
Replace the page that will not be used for the longest period of time;
this requires knowing the time at which each page will next be used.

Least Recently Used (LRU) algorithm: The page which has not been used
for the longest time in main memory is the one selected for
replacement.
Easy to implement: keep a list and replace pages by looking back in time.
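The FIFO and LRU policies above can be sketched as a small simulator. The function name and the reference string below are illustrative, not from any particular operating system.

```python
def count_faults(refs, frames, policy="FIFO"):
    """Count page faults for a reference string under FIFO or LRU.

    refs   : list of page numbers in reference order
    frames : number of physical frames available
    """
    memory = []   # resident pages; head is oldest (FIFO) or least recent (LRU)
    faults = 0
    for page in refs:
        if page in memory:
            if policy == "LRU":        # refresh recency on a hit
                memory.remove(page)
                memory.append(page)
            continue
        faults += 1                    # page fault: page not resident
        if len(memory) == frames:
            memory.pop(0)              # evict the head of the list
        memory.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print("FIFO faults:", count_faults(refs, 3, "FIFO"))
print("LRU faults:", count_faults(refs, 3, "LRU"))
```

Note that LRU never performs worse than FIFO on this string; in general the two can differ in either direction, while OPT gives the lower bound.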
2. Describe File allocation methods.

Contiguous allocation

Advantage: Contiguous allocation is easy to implement.
Disadvantage: It behaves like dynamic memory allocation, so external
fragmentation may occur and compaction may be needed.
It is also difficult to estimate a file's final size in advance.

Linked allocation
With the linked allocation approach, the disk blocks of a file are
chained together with a linked list.
The directory entry of a file contains a pointer to the first block and a
pointer to the last block.

Indexed allocation
Each file has an index block that is an array of disk block addresses.
The i-th entry in the index block points to the i-th block of the file.
A file's directory entry contains a pointer to its index block.
Hence, the index block of an indexed allocation plays the same role as
a page table. Indexed allocation supports both sequential and direct
access without external fragmentation.
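Indexed allocation can be modeled in a few lines. The class, disk size, and free-block numbers below are invented for illustration and do not describe any real file-system layout.

```python
class IndexedFile:
    """Toy model of indexed allocation: one index block per file."""

    def __init__(self, disk, free_blocks):
        self.disk = disk            # a plain list standing in for disk blocks
        self.free = free_blocks     # free-block numbers, in any order
        self.index = []             # i-th entry -> disk block of the i-th file block

    def append_block(self, data):
        # Any free block will do, so there is no external fragmentation.
        block_no = self.free.pop(0)
        self.disk[block_no] = data
        self.index.append(block_no)

    def read_block(self, i):
        # Direct access: one lookup in the index, like a page table.
        return self.disk[self.index[i]]

disk = [None] * 16
f = IndexedFile(disk, free_blocks=[3, 9, 1, 12])
f.append_block(b"hello")
f.append_block(b"world")
print(f.read_block(1))   # second logical block, wherever it landed on disk
```

The index list is exactly the "array of disk block addresses" described above: logical block numbers are contiguous even though the disk blocks (3 and 9 here) are not.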
3. Explain file system structure.
Definition
File System Structure Diagram
File control Block

File structure
Logical storage unit
Collection of related information
File system resides on secondary storage (disks)
File system organized into layers
File control block
storage structure consisting of information about a file

[Figures: Layered File System; File Control Block]

File Structure

A File Structure should be according to a required format that the operating


system can understand.

A file has a certain defined structure according to its type.


A text file is a sequence of characters organized into lines.
A source file is a sequence of procedures and functions.
An object file is a sequence of bytes organized into blocks that are
understandable by the machine.
When the operating system defines different file structures, it also
contains the code to support them. UNIX and MS-DOS support a
minimum number of file structures.
4. Describe File Attributes.

File Attribute: Description

Protection: Who can access the file, and in what way
Password: Needed just to access the file
Creator: ID of the person who created the file
Owner: Current owner
Read-only flag: 0 for read/write; 1 for read-only
Archive flag: 0 for has been backed up; 1 for needs to be backed up
ASCII/binary flag: 0 for ASCII file; 1 for binary file
Hidden flag: 0 for normal; 1 for do not display in listings
System flag: 0 for normal files; 1 for system file
Random access flag: 0 for sequential access only; 1 for random access
Temporary flag: 0 for normal; 1 for delete file on process exit
Lock flags: 0 for unlocked; non-zero for locked
Key length: Number of bytes in the key field
Creation time: Date and time the file was created
Time of last access: Date and time the file was last accessed
Record length: Number of bytes in a record
Key position: Offset of the key within each record
Time of last change: Date and time the file was last changed
Current size: Number of bytes in the file
Maximum size: Number of bytes the file may grow to

5. Describe File Operations.

Creating a file: First, space in the file system must be found for the
file. Second, an entry for the new file must be made in the directory.
Writing a file: To write a file, a system call is made specifying both
the name of the file and the information to be written to it.
Reading a file: To read a file, a system call is made that specifies
the name of the file and where (in memory) the next block of the file
should be put.
Repositioning within a file (resetting): The directory is searched for
the appropriate entry, and the current file position is reset, for
example to the beginning of the file.
Deleting a file: To delete a file, the directory is searched for the
named file. Having found the associated directory entry, the space
allocated to the file is released (so it can be reused by other files)
and the directory entry is invalidated.
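These operations map naturally onto the POSIX-style calls exposed by Python's `os` module. A minimal sketch follows; the file name and contents are arbitrary, and a scratch directory keeps the demo self-contained.

```python
import os
import tempfile

# Work in a throwaway directory so nothing real is touched.
scratch = tempfile.mkdtemp()
path = os.path.join(scratch, "demo.txt")

fd = os.open(path, os.O_CREAT | os.O_RDWR)   # create + open
os.write(fd, b"operating systems")           # write
os.lseek(fd, 0, os.SEEK_SET)                 # reposition (seek) to the start
data = os.read(fd, 9)                        # read the first 9 bytes
os.ftruncate(fd, 9)                          # truncate the file to 9 bytes
os.close(fd)                                 # close

print(data)
os.unlink(path)                              # delete (unlink)
os.rmdir(scratch)
```

Each line corresponds to one of the operations listed above; the directory bookkeeping (finding space, making and invalidating the entry) is done by the OS behind these calls.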
10 Marks

1. Compare various file access methods.


Sequential Access Method
Direct Access Method
Indexed Access Method

• Sequential Access:

This is the most common method.


Here the information present in the file is accessed in a sequential
fashion, one record after the other.
It is the approach usually used by editors and compilers.
The Read and Write operations form the major part of the operations
done on a file.
A read operation reads the next portion of the file and automatically
advances the file pointer, which tracks the I/O location.
A write operation appends to the end of the file and advances to the end
of the newly written material.

• Direct Access

This type of access method provides speedy access to the file. It


provides immediate access to a large amount of information.
Here a file is made up of logical records that allow programs to read and
write.
It allows the programs to read and write the records in a rapid manner in
no particular (or pre-defined) order.
It is based on the disk-model of a file, as a disk allows random access to
any block.
For direct access, we can view the file as a numbered sequence of blocks
or records.
This method is usually used in databases.

• Indexed access:

This method is built on top of the direct access method.


Here an index contains the pointers to various blocks of the file.
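Direct access as described above can be sketched with fixed-size records and a seek. The record size and contents are made up for illustration, with an in-memory stream standing in for a disk file.

```python
import io

RECORD = 8   # fixed record size in bytes (illustrative)

# A file of ten numbered 8-byte records: record00, record01, ...
f = io.BytesIO(b"".join(b"record%02d" % i for i in range(10)))

def read_record(f, n):
    """Jump straight to record n, as a disk allows random access to any block."""
    f.seek(n * RECORD)
    return f.read(RECORD)

print(read_record(f, 7))   # no need to read records 0..6 first
```

Sequential access would read the records in order, advancing a file pointer; direct access computes the offset (`n * RECORD`) and seeks there in one step.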
2. Describe demand paging.

A demand paging system is quite similar to a paging system with


swapping where processes reside in secondary memory and pages are
loaded only on demand, not in advance.
When a context switch occurs, the operating system does not copy any of
the old program's pages out to the disk or any of the new program's
pages into main memory. Instead, it just begins executing the new
program after loading the first page and fetches that program's pages as
they are referenced.

Advantages

Following are the advantages of Demand Paging −

• Large virtual memory.


• More efficient use of memory.
• There is no limit on degree of multiprogramming.

Disadvantages

• The number of tables and the amount of processor overhead for


handling page interrupts are greater than with simple paged
management techniques.
Unit - V

I/O Systems, Secondary Storage Structures, Protection and Security

2 Marks

1. List out the registers of I/O port.

Status
Control
Data-in
Data-out

2. List out the services provided by kernel I/O subsystem.

I/O scheduling
Buffering
Caching
Spooling
Device Reservation
Error Handling

3. What is a controller?

A controller is a collection of electronics that can operate a port, a bus,


or a device.
A serial-port controller is a single chip that is used to control the
signals of a serial port.

4. List out the disk scheduling algorithms.

First In First Out (FIFO) Disk Scheduling Algorithm


Shortest Seek Time First (SSTF) Disk Scheduling Algorithm
SCAN
C-SCAN
LOOK
C-LOOK
5. Define Access matrix.

An access matrix is the view of protection as a matrix in which rows
represent domains and columns represent objects.
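A sparse access matrix is naturally modeled as a mapping from (domain, object) pairs to sets of rights. The domain and object names below are hypothetical.

```python
# Rows are domains, columns are objects; missing entries mean "no access".
access_matrix = {
    ("D1", "file1"): {"read"},
    ("D1", "file2"): {"read", "write"},
    ("D2", "printer"): {"print"},
}

def allowed(domain, obj, right):
    """Check whether `domain` holds `right` on `obj` (empty entry = blank cell)."""
    return right in access_matrix.get((domain, obj), set())

print(allowed("D1", "file2", "write"))   # D1 may write file2
print(allowed("D2", "file1", "read"))    # blank cell: access denied
```

Granting or revoking a right is just adding to or removing from the set in a cell, which is how dynamic protection policies are expressed in this model.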

6. What is meant by threat monitoring?

Threat monitoring is the continuous analysis, assessment, and


review of security-related data collected from all sources, for
detecting any attempted or successful breach of the security of an
operation or system.

7. What is Authentication?

The protection system depends on an ability to identify the programs


and processes that are executing. Object access is performed by
processes.
To apply access controls, it is necessary to associate processes
with users.

8. List out the use of Access Matrix.

The set of domains is dynamic


The set of objects is dynamic
The user dictates the policy:
who can access what object, and in what mode

9. List out the Authentication techniques.

Passwords
Cryptographic or encrypted passwords
One-time passwords
Biometrics

10. Define Buffering.

A buffer is a temporary storage area, usually in RAM, used to store


data while it is being transferred between devices or between applications.
Buffering is used to cope with a device speed mismatch between the sender
and the receiver, and with a device transfer-size mismatch.
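The role of a buffer in smoothing a transfer-size mismatch can be sketched with in-memory streams standing in for devices; the buffer size here is arbitrary, whereas real drivers pick device-friendly sizes.

```python
import io

def copy_with_buffer(src, dst, bufsize=4):
    """Copy src to dst through a small buffer.

    The sender and receiver never see each other's transfer sizes:
    the buffer absorbs the mismatch chunk by chunk.
    """
    while True:
        chunk = src.read(bufsize)   # fill the buffer from the sender
        if not chunk:
            break
        dst.write(chunk)            # drain the buffer to the receiver

src = io.BytesIO(b"device speed mismatch")
dst = io.BytesIO()
copy_with_buffer(src, dst)
print(dst.getvalue())
```

The same loop structure underlies single buffering in a kernel I/O subsystem; double buffering simply lets one buffer fill while the other drains.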
5 Marks

1. Explain I/O Hardware.

Controller: A controller is a collection of electronics that can operate a


port, a bus, or a device.
A serial-port controller is an example of a simple device controller. This
is a single chip in the computer that controls the signals on the wires of a
serial port.
I/O port: An I/O port typically consists of four registers, called the
status, control, data-in, and data-out registers.
• Status register: The status register contains bits that can be read
by the host. These bits indicate states such as whether the current
command has completed, whether a byte is available to be read
from the data-in register, and whether there has been a device error.
• Control register: The control register can be written by the host to
start a command or to change the mode of a device.
• Data-in register: The data-in register is read by the host to get
input.
• Data-out register: The data-out register is written by the host to
send output.
Polling is the process by which a host waits for a controller response.
It is a looping process: the host reads the status register over and over
until the busy bit of the status register becomes clear.
Interrupts: Interrupts allow devices to notify the CPU when they have
data to transfer or when an operation is complete, allowing the CPU to
perform other duties when no I/O transfers need its immediate attention.
The CPU has an interrupt-request line that is sensed after every
instruction.
2. Describe the Life cycle of an I/O Request.

Transforming I/O Requests to Hardware Operations: Users request


data using file names, which must ultimately be mapped to specific
blocks of data from a specific device managed by a specific device driver.
DOS uses the colon separator to specify a particular device ( e.g. C:,
LPT:, etc. )
UNIX uses a mount table to map filename prefixes ( e.g. /usr ) to specific
mounted devices.
UNIX uses special device files, usually located in /dev, to represent and
access physical devices directly.

[Figure: Life cycle of an I/O request]


3. Elaborate Security problems.

Some of the most common types of violations include:

Breach of Confidentiality - Theft of private or confidential information,


such as credit-card numbers, trade secrets, patents, secret formulas,
manufacturing procedures, medical information, financial information,
etc.
Breach of Integrity - Unauthorized modification of data, which may
have serious indirect consequences.
Breach of Availability - Unauthorized destruction of data, often just for
the "fun" of causing havoc and for bragging rights.
Theft of Service - Unauthorized use of resources, such as theft of CPU
cycles, installation of daemons running an unauthorized file server, or
tapping into the target's telephone or networking services.
Denial of Service, DOS - Preventing legitimate users from using the
system, often by overloading and overwhelming the system with an
excess of requests for service.

There are four levels at which a system must be protected:


Physical - The easiest way to steal data is to pocket the backup tapes.
Human - There is some concern that the humans who are allowed access
to a system be trustworthy, and that they cannot be coerced into
breaching security.
o Phishing involves sending an innocent-looking e-mail or web site
designed to fool people into revealing confidential information.
o Dumpster Diving involves searching the trash or other locations
for passwords that are written down.
o Password Cracking involves divining users' passwords, either by
watching them type in their passwords, knowing something about them
(like their pets' names), or simply trying all words in common dictionaries.
Operating System - The OS must protect itself from security breaches,
such as runaway processes ( denial of service ), memory-access
violations, stack overflow violations, the launching of programs with
excessive privileges, and many others.
Network - As network communications become ever more important and
pervasive in modern computing environments, it becomes ever more
important to protect this area of the system.

4. Explain Threat and Threat Monitoring.

Threat Monitoring: Threat monitoring refers to a type of solution or


process dedicated to continuously monitoring across networks and/or
endpoints for signs of security threats such as attempts at intrusions or
data exfiltration.
Threat monitoring gives technology professionals visibility into the
network and the actions of the users who access it, enabling stronger data
protection as well as preventing or lessening of the damages caused by
breaches.
Benefits of Threat Monitoring: Using threat monitoring enables
organizations to identify previously undetected threats, such as outsiders
connecting to or exploring networks, and compromised or unauthorized
internal accounts.

10 Marks
1. Describe the Disk Scheduling Algorithms.
First In First Out (FIFO) Disk Scheduling Algorithm
Shortest Seek Time First (SSTF) Disk Scheduling Algorithm
SCAN
C-SCAN
LOOK
C-LOOK

First Come First Served (FCFS): All incoming requests are placed at the
end of the queue.
Whichever request is next in the queue will be the next one served.
This algorithm does not provide the best results.
To determine the total head movement, simply add up the number of
tracks it took to move from each request to the next.

Shortest Seek Time First (SSTF): In this case each request is serviced
according to the next shortest distance from the current head position.
Starting at 50, the next request serviced would be 62 instead of 34, since
the head is only 12 tracks away from 62 but 16 tracks away from 34. The
process continues until all the requests are taken care of.

Elevator (SCAN): This approach works like an elevator does.


It scans down towards the nearest end and then when it hits the bottom it
scans up servicing the requests that it didn't get going down.
If a request comes in after it has been scanned it will not be serviced until
the process comes back down or moves back up.
Circular Scan (C-SCAN): Circular scanning works just like the elevator
to some extent. It begins its scan toward the nearest end and works its
way all the way to the end of the system.
Once it hits the bottom or top it jumps to the other end and moves in the
same direction.
Keep in mind that the huge jump doesn't count as a head movement. The
total head movement for this algorithm is only 187 tracks, but this still
isn't the most efficient.

C-LOOK: This is just an enhanced version of C-SCAN. Here the


scanning doesn't go past the last request in the direction it is moving.
It too jumps to the other end, but not all the way: only to the
furthest request. C-SCAN had a total movement of 187 tracks, but
C-LOOK reduced it to 157 tracks.
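The head-movement arithmetic for FCFS and SSTF can be checked with a short script. The request queue below is an assumed example, chosen so that starting at track 50 the head is 12 tracks from 62 and 16 from 34, matching the SSTF description above.

```python
def fcfs_movement(start, queue):
    """Total head movement when requests are served in arrival order."""
    total, pos = 0, start
    for track in queue:
        total += abs(track - pos)
        pos = track
    return total

def sstf_movement(start, queue):
    """Always serve the pending request with the shortest seek."""
    total, pos, pending = 0, start, list(queue)
    while pending:
        track = min(pending, key=lambda t: abs(t - pos))
        total += abs(track - pos)
        pos = track
        pending.remove(track)
    return total

queue = [95, 180, 34, 119, 11, 123, 62, 64]   # illustrative request queue
print("FCFS:", fcfs_movement(50, queue))
print("SSTF:", sstf_movement(50, queue))
```

On this queue FCFS moves the head 644 tracks while SSTF needs only 236, showing why serving requests in arrival order doesn't provide the best results. SCAN, C-SCAN, and the LOOK variants would be similar loops with a sweep direction added.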
2. Elaborate the concept Authentication.

Authentication: Authentication refers to identifying each user of the


system and associating the executing programs with those users.
It is the responsibility of the Operating System to create a protection
system which ensures that a user who is running a particular program is
authentic. Operating Systems generally identifies/authenticates users
using following three ways −
o Username / Password − The user needs to enter a registered username
and password with the operating system to log in to the system.
o User card/key − The user needs to punch a card in a card slot, or
enter a key generated by a key generator in an option provided by the
operating system, to log in to the system.
o User attribute (fingerprint / eye retina pattern / signature) −
The user needs to pass his/her attribute via a designated input device
used by the operating system to log in to the system.
One Time passwords: One-time passwords provide additional security
along with normal authentication. In One-Time Password system, a
unique password is required every time user tries to login into the system.
Once a one-time password is used, then it cannot be used again. One-time
password is implemented in various ways.
o Random numbers − Users are provided cards having numbers
printed along with corresponding alphabets. The system asks for the
numbers corresponding to a few randomly chosen alphabets.
o Secret key − Users are provided a hardware device which can
create a secret id mapped to the user id. The system asks for this
secret id, which must be generated fresh every time prior to login.
o Network password − Some commercial applications send one-
time passwords to the user on a registered mobile/email, which must
be entered prior to login.
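A counter-based one-time password can be sketched with the standard library's `hmac` module, in the spirit of RFC 4226 (HOTP). This is a simplified illustration, not a hardened implementation; the shared secret below is the RFC's well-known test key.

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Derive a one-time password from a shared secret and a counter."""
    msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"12345678901234567890"
# Each counter value yields a different password, so a captured
# password cannot be replayed once the counter has advanced.
print(hotp(secret, 0))
print(hotp(secret, 1))
```

Time-based schemes (the codes sent to a registered mobile, or authenticator apps) replace the counter with the current time interval but are otherwise the same construction.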

3. Explain the concept Cryptography.

Two big questions of security:

Trust - How can the system be sure that the messages received are really
from the source that they say they are, and can that source be trusted?
Confidentiality - How can one ensure that the messages one is sending
are received only by the intended recipient?
Cryptography can help with both of these problems, through a system of
secrets and keys. In the former case, the key is held by the sender, so that
the recipient knows that only the authentic author could have sent the
message; in the latter, the key is held by the recipient, so that only the
intended recipient can receive the message accurately.
Keys are designed so that they cannot be divined from any public
information, and must be guarded carefully. (Asymmetric encryption
involves both a public and a private key.)
Encryption: The basic idea of encryption is to encode a message so that
only the desired recipient can decode and read it.
Encryption has been around since before the days of Caesar, and is an
entire field of study in itself. Only some of the more significant computer
encryption schemes will be covered here.
The steps in the procedure and some of the key terminology are as
follows:

• The sender first creates a message, m in plaintext.


• The message is then entered into an encryption algorithm, E,
along with the encryption key, Ke.
• The encryption algorithm generates the ciphertext, c, = E(Ke)(m).
For any key k, E(k) is an algorithm for generating ciphertext from a
message, and both E and E(k) should be efficiently computable
functions.
• The ciphertext can then be sent over an insecure network, where it
may be received by attackers.
• The recipient enters the ciphertext into a decryption algorithm,
D, along with the decryption key, Kd.
• The decryption algorithm re-generates the plaintext message, m, =
D(Kd)(c). For any key k, D(k) is an algorithm for generating a
clear text message from a ciphertext, and both D and D(k) should
be efficiently computable functions.
• The algorithms described here must have this important property:
Given a ciphertext c, a computer can only compute a message m
such that c = E(k)(m) if it possesses D(k).
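The E/D round trip above can be illustrated with a toy XOR stream cipher. The key-derivation scheme here is invented purely for demonstration and is not a vetted cipher; do not use it for real secrecy.

```python
import hashlib

def keystream(key: bytes):
    """Yield an endless keystream derived from the key by chained hashing."""
    block = hashlib.sha256(key).digest()
    while True:
        yield from block
        block = hashlib.sha256(block).digest()

def E(key: bytes, m: bytes) -> bytes:
    """Encrypt: XOR the plaintext m with the keystream to get ciphertext c."""
    return bytes(a ^ b for a, b in zip(m, keystream(key)))

# Symmetric scheme: decryption D is the same XOR with the same key,
# so here Ke = Kd and D is literally the same function as E.
D = E

Ke = Kd = b"shared-secret"
c = E(Ke, b"attack at dawn")
print(c.hex())           # unreadable without the key
print(D(Kd, c))          # the recipient recovers the plaintext
```

This mirrors the notation in the steps above: c = E(Ke)(m) and m = D(Kd)(c), with both E and D efficiently computable while recovery without the key is intended to be infeasible (a property this toy does not actually guarantee).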
