
Question (1) What is an operating system? State its functions.

Answer:
An operating system is a program that controls the execution of application programs and acts as an interface
between the user and the computer hardware. An operating system (OS) performs basic tasks such as controlling and
allocating memory, prioritizing the processing of instructions, controlling input and output devices, facilitating
networking, and managing files. The purpose of an operating system is thus to provide an environment in which a user
can execute programs in a convenient and efficient manner.
Functions of an operating system are as follows:
Hiding the details of the hardware
Resource management
Providing an effective user interface
Program creation
Program execution
Access to input/output devices
Controlled access to files
Interpreting commands
Managing peripherals
Networking
The points listed above are the main functions of any operating system, and most operating systems perform them as
follows:
Booting the computer.
Performing basic computer tasks, e.g. managing the various peripheral devices such as the mouse and keyboard.
Providing a user interface, e.g. a command line or a graphical user interface (GUI).
Handling system resources such as the computer's memory and the sharing of central processing unit (CPU) time by
various applications or peripheral devices.
Providing file management, which refers to the way that the operating system manipulates, stores, retrieves and
saves data.

Question (2) State Advantages and Disadvantages of Threads over multiple processes.
Answer:
A process can be simply defined as a program in execution, whereas a thread is a single sequential stream of
execution within a process.
Advantages of threads over multiple processes are as follows:
Threads are memory efficient.
Threads are inexpensive to create and destroy, and they are inexpensive to represent.
Thread task switching is faster, since a thread has less context to save than a process.
Threads allow the sharing of many resources that cannot be shared between separate processes.
Threads share a common program space, as the sketch below illustrates.
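A minimal C sketch of the last point, assuming a POSIX system with pthreads (the function and variable names are
illustrative): both threads write directly into the same global array, something separate processes could only do
through explicit inter-process communication such as pipes or shared-memory segments.

#include <pthread.h>
#include <stdio.h>

static int results[2];                        /* one array, shared by both threads */

static void *square(void *arg)
{
    int i = *(int *)arg;
    results[i] = (i + 1) * (i + 1);           /* write straight into shared memory */
    return NULL;
}

int main(void)
{
    pthread_t t[2];
    int ids[2] = { 0, 1 };
    for (int i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, square, &ids[i]);
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);
    /* No IPC was needed to collect the results: both threads saw the same memory. */
    printf("%d %d\n", results[0], results[1]);
    return 0;
}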

Disadvantages of threads over multiple processes are as follows:
Threads are typically not loadable. That is, to add a new thread, you must add it to the source code, then compile
and link to create a new executable. Processes are loadable, which allows a multitasking system to be modified
dynamically.
There is a lack of coordination between threads and the operating system kernel.
User-level threads require non-blocking system calls; otherwise, if one thread causes a page fault, the entire
process blocks.
Since there is extensive sharing among threads, there is a potential security problem.

Question (3) Write a note on:


1. Deadlock detection and recovery
2. Deadlock avoidance
3. Deadlock prevention
4. Livelock
Answer:
A set of processes or threads is deadlocked when each process or thread is waiting for a resource to be freed that
is held by another process in the set. Deadlocks occur most commonly in multitasking environments.
Deadlock Detection and Recovery
Under deadlock detection, deadlocks are allowed to occur. The system is then examined to detect that a deadlock has
occurred, and the deadlock is subsequently corrected. An algorithm is employed that tracks resource allocation and
process states; it rolls back and restarts one or more of the processes in order to remove the detected deadlock.
If there is only one instance of each resource, it is possible to detect deadlock by constructing a resource
allocation/request graph and checking for cycles, as the sketch below illustrates.
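A minimal C sketch of that cycle check, assuming the graph has already been reduced to a wait-for graph stored in an
adjacency matrix (the names waits_for, n_procs and MAX_PROCS are illustrative): an edge i -> j means process i waits
for a resource held by process j, and a cycle found by depth-first search indicates deadlock when every resource has
a single instance.

#include <stdbool.h>

#define MAX_PROCS 16

bool waits_for[MAX_PROCS][MAX_PROCS];   /* waits_for[i][j]: i waits on a resource held by j */
int  n_procs;

static bool dfs(int p, bool visited[], bool on_stack[])
{
    visited[p] = on_stack[p] = true;
    for (int q = 0; q < n_procs; q++) {
        if (!waits_for[p][q])
            continue;
        if (on_stack[q])                       /* back edge: a cycle, hence a deadlock */
            return true;
        if (!visited[q] && dfs(q, visited, on_stack))
            return true;
    }
    on_stack[p] = false;
    return false;
}

/* Returns true if the wait-for graph contains a cycle. */
bool deadlock_detected(void)
{
    bool visited[MAX_PROCS] = { false }, on_stack[MAX_PROCS] = { false };
    for (int p = 0; p < n_procs; p++)
        if (!visited[p] && dfs(p, visited, on_stack))
            return true;
    return false;
}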
Once Deadlock has been detected, it can be corrected using any of these methods:
Preemption - We can take an already allocated resource away from a process and give it to another process.
Rollback - In a situation where deadlock is a real possibility, the system can periodically record the state of each
process and, when deadlock occurs, roll everything back to the last checkpoint and restart.
Abort - We can choose to abort one process at a time until the deadlock is resolved, or kill one or more processes
outright; this is the simplest and most effective approach.
Deadlock Avoidance
Deadlock can be avoided if certain information about processes is available to the operating system before resources
are allocated, such as which resources a process will consume. For every resource request, the system checks whether
granting the request would move the system into an unsafe state, meaning a state that could result in deadlock. The
system then grants only those requests that lead to safe states. In order for the system to be able to determine
whether the next state will be safe or unsafe, it must know in advance at any time:
The resources currently available
The resources currently allocated to each process
The resources that will be required and released by these processes in the future
One well-known algorithm used for deadlock avoidance is the Banker's algorithm, sketched below. However, for many
systems it is impossible to know in advance what every process will request. This means that deadlock avoidance is
often impossible.
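A minimal sketch of the safety check at the heart of the Banker's algorithm, assuming the three pieces of information
listed above are kept in the arrays available, allocation and need (need being each process's maximum claim minus its
current allocation); the names and the sizes P and R are illustrative. A request is granted only if tentatively
applying it still leaves this check returning true.

#include <stdbool.h>
#include <string.h>

#define P 5   /* number of processes (illustrative) */
#define R 3   /* number of resource types (illustrative) */

/* Returns true if the state is safe, i.e. some order exists in which every process can finish. */
bool is_safe(int available[R], int allocation[P][R], int need[P][R])
{
    int  work[R];
    bool finished[P] = { false };
    memcpy(work, available, sizeof(work));

    for (int done = 0; done < P; ) {
        bool progressed = false;
        for (int i = 0; i < P; i++) {
            if (finished[i])
                continue;
            bool can_run = true;
            for (int j = 0; j < R; j++)
                if (need[i][j] > work[j]) { can_run = false; break; }
            if (can_run) {                       /* pretend process i runs to completion */
                for (int j = 0; j < R; j++)
                    work[j] += allocation[i][j]; /* it then releases everything it holds */
                finished[i] = true;
                progressed = true;
                done++;
            }
        }
        if (!progressed)
            return false;                        /* some processes can never finish: unsafe */
    }
    return true;
}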
Deadlock Prevention
Deadlock prevention strategies involve changing the rules so that processes will not make requests that could result
in deadlock. One such strategy is to require all processes to request all of their resources at once, and either all
are granted or none are granted.
Livelock
Livelock is a variant of deadlock. It is a situation in which two or more processes continuously change their state
in response to changes in the other processes without doing any useful work, as in the sketch below.
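A schematic C sketch of a classic livelock pattern, assuming POSIX threads (the names polite, left and right are
illustrative, and since livelock depends on timing this is not guaranteed to loop forever on every run): each thread
grabs its first lock, politely backs off if the second lock is busy, and retries. If both keep backing off in step,
they keep changing state without either one ever completing its work.

#include <pthread.h>
#include <unistd.h>

static pthread_mutex_t left  = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t right = PTHREAD_MUTEX_INITIALIZER;

static void *polite(void *arg)
{
    pthread_mutex_t *first  = arg ? &left  : &right;
    pthread_mutex_t *second = arg ? &right : &left;
    for (;;) {
        pthread_mutex_lock(first);
        if (pthread_mutex_trylock(second) == 0) {
            /* ... useful work would happen here ... */
            pthread_mutex_unlock(second);
            pthread_mutex_unlock(first);
            return NULL;
        }
        pthread_mutex_unlock(first);          /* back off and try again */
        usleep(1000);
    }
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, polite, (void *)0);
    pthread_create(&b, NULL, polite, (void *)1);
    pthread_join(a, NULL);                    /* may never return if the threads livelock */
    pthread_join(b, NULL);
    return 0;
}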

Question (4) What are Semaphores? How can we achieve mutual exclusion using Semaphores?
Answer:
A semaphore is a mechanism that prevents two or more processes from accessing a shared resource simultaneously.
Dijkstra proposed semaphores in 1965 as a solution to the problems of concurrent processes. The fundamental principle
is that two or more processes can cooperate by means of simple signals, such that a process can be forced to stop at
a specified place until it has received a specific signal.
Mutual exclusion using Semaphores:
Mutual exclusion is a way of making sure that if one process is using shared modifiable data, the other processes
are excluded from doing the same thing. Mutual exclusion can be achieved in various ways, and semaphores are one of
them.
The following example illustrates mutual exclusion using semaphores:
A process, before entering its critical section, performs a wait(mutex) operation and, after coming out of the
critical section, a signal(mutex) operation, thus achieving mutual exclusion.
Shared data:
semaphore mutex;        /* initially mutex = 1 */

Process Pi:
do {
    wait(mutex);        /* entry section */
    /* critical section */
    signal(mutex);      /* exit section */
    /* remainder section */
} while (1);
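The same pattern written as a compilable C sketch using POSIX semaphores (sem_init, sem_wait, sem_post); the shared
counter here is an illustrative stand-in for any shared modifiable data.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t mutex;            /* binary semaphore, initialised to 1 */
static int   shared = 0;       /* shared modifiable data */

static void *pi(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);      /* wait(mutex): enter the critical section */
        shared++;              /* critical section */
        sem_post(&mutex);      /* signal(mutex): leave the critical section */
        /* remainder section */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    sem_init(&mutex, 0, 1);    /* initially mutex = 1 */
    pthread_create(&t1, NULL, pi, NULL);
    pthread_create(&t2, NULL, pi, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d\n", shared);   /* 200000: no updates are lost */
    sem_destroy(&mutex);
    return 0;
}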

Question (5) Describe the File Structure? Explain the various access modes?
Answer:
File structure:
The file system and space management are an integral part of an operating system. This section covers the file
management and space management systems, which include the file structure.
Files are used for storing the user's information, and files are organized in the system in a specific manner;
generally, directories or folders are used to arrange the files. UNIX hides the chunkiness of tracks, sectors, etc.
and presents each file as a smooth array of bytes with no internal structure. Application programs can, if they wish,
use the bytes in the file to represent structures. For example, a widespread convention in UNIX is to use the newline
character (the character with bit pattern 00001010) to break text files into lines. Some other systems provide a
variety of other types of files. The most common are files that consist of an array of fixed- or variable-size
records and files that form an index mapping keys to values. Indexed files are usually implemented as B-trees.
Various access modes:
The system supports various access modes for operations on a file.
Sequential. Read or write the next record or next n bytes of the file. Usually, sequential access also allows
a rewind operation.
Random. Read or write the nth record, or bytes i through j. UNIX provides an equivalent facility by adding
a seek operation to the sequential operations listed above. This packaging of operations allows random access
but encourages sequential access (see the sketch after this list).
Indexed. Read or write the record with a given key. In some cases the key need not be unique: there can be
more than one record with the same key. In this case, programs use a combination of indexed and sequential
operations: get the first record with a given key, and then get other records with the same key by doing
sequential reads.
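A minimal C sketch of the first two modes against a byte-stream file of the UNIX kind described above: sequential
reads with fread, a rewind, and random access via fseek. The file name data.bin is illustrative.

#include <stdio.h>

int main(void)
{
    unsigned char buf[16];
    FILE *f = fopen("data.bin", "rb");        /* illustrative file name */
    if (!f)
        return 1;

    fread(buf, 1, sizeof(buf), f);            /* sequential: the next 16 bytes */
    fread(buf, 1, sizeof(buf), f);            /* ...and the 16 bytes after them */

    rewind(f);                                /* back to the start of the file */

    fseek(f, 100, SEEK_SET);                  /* random: jump to byte 100 ... */
    fread(buf, 1, sizeof(buf), f);            /* ...then read sequentially from there */

    fclose(f);
    return 0;
}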

Question (6) Explain the importance of direct memory access.


Answer:
Direct memory access (DMA) is a mechanism that allows data to be transferred to or from memory without going through
the CPU. While most data that is input to or output from the computer is processed by the CPU, some data does not
require processing, or can be processed by another device. In these situations, DMA saves processing time and is a
more efficient way to move data between the computer's memory and other devices. In order for devices to use direct
memory access, they must be assigned to a DMA channel. Each type of port on a computer has a set of DMA channels that
can be assigned to each connected device; for example, a PCI controller and a hard drive controller each have their
own set of DMA channels. DMA is a sensible approach for devices that can transfer blocks of data at a very high data
rate, in short bursts. It is not worthwhile for slow devices, or for devices that do not provide the processor with
large quantities of data.
DMA also helps with the problem of interrupt latency, the amount of time between a hardware device raising an
interrupt and the device driver's interrupt-handling routine being called. Because the DMA controller allows devices
to transfer data to or from system memory without the intervention of the processor, the processor needs to be
interrupted far less often, typically only once per completed block transfer.
