
Assignment-1

Max Marks: 10 Due Date: 10 July, 2023

Q.1. What is an Operating System? What are the functions of the operating system? Explain the
different types of operating systems, and discuss the characteristics of operating systems.

Ans1.

An operating system (OS) is a software program that manages computer hardware and software
resources and provides services for computer programs. It acts as an intermediary between the
user and the computer hardware, allowing users to interact with the computer system in a
convenient and efficient manner. An operating system is similar to a government. Like a
government, it performs no useful function by itself. It simply provides an environment within
which other programs can do useful work.

Functions of the operating system :

1. Process Management: The OS manages processes, which are instances of executing
programs. It schedules and prioritizes processes, allocates system resources (such as
CPU time and memory) to them, and facilitates communication and synchronization
between processes.

2. Memory Management: The OS handles memory allocation and management. It keeps
track of which parts of memory are in use and by whom, allocates memory to processes
as needed, and ensures efficient and secure memory access.

3. File Management: The OS provides a file system that organizes and manages files on
storage devices. It allows users to create, read, write, and delete files, and provides
mechanisms for organizing files into directories and protecting them with permissions.

4. Device Management: The OS manages input and output devices such as keyboards, mice,
printers, and disk drives. It provides device drivers that communicate with the hardware,
handles device requests from processes, and ensures proper functioning and resource
allocation for devices.

5. User Interface: The OS provides a user interface through which users can interact with the
computer system. This can be a command-line interface (CLI) where users enter text
commands, or a graphical user interface (GUI) with icons, windows, and menus for
intuitive interaction.

Types of operating systems :

1. Serial Operating System: A serial operating system executes tasks or jobs sequentially,
one after another. Each task must complete before the next one can start. It is a simple
and straightforward approach but does not provide concurrency or parallelism.

2. Batch Operating System: A batch operating system executes a batch of jobs without
requiring continuous user interaction. Users submit their jobs in advance, and the
operating system automatically executes them one after another. It is commonly used in
scenarios where similar jobs can be executed without human input, such as large-scale
data processing.
3. Multiprogramming Operating System: Multiprogramming operating systems allow
multiple programs to be loaded into memory simultaneously. The CPU switches between
these programs, executing a part of each program in a time-sharing manner. This
approach improves CPU utilization and overall system efficiency.

4. Multiprocessing Operating System: A multiprocessing operating system supports the
execution of multiple programs simultaneously on multiple CPUs or processor cores.
Each CPU can execute its own set of instructions independently, enabling true parallel
processing.

5. Multitasking Operating System: Multitasking operating systems provide the ability to run
multiple tasks or processes concurrently on a single CPU. The operating system rapidly
switches between tasks, giving the illusion of simultaneous execution. This allows users
to run multiple applications and switch between them seamlessly.

6. Network Operating System: A network operating system includes the software and
associated protocols needed to communicate with other computers over a network
conveniently and cost-effectively. Examples include Windows Server, Linux-based
server distributions, and Novell NetWare.

7. Real-Time Operating System: Real-time operating systems are designed to meet strict
timing constraints and provide deterministic response times. They are commonly used
in systems where tasks must be completed within specific deadlines, such as in industrial
control systems, robotics, or aerospace applications.

8. Time-Sharing Operating System: Time-sharing operating systems allow multiple users to
interact with a computer system concurrently. The CPU time is shared among users or
processes, and each user gets a small time slice to execute their tasks. This enables
interactive computing and is the basis for modern multi-user operating systems.

9. Distributed Operating System: A distributed operating system manages resources and
provides services across multiple interconnected computers or nodes in a network. It
allows users to access remote resources as if they were local and provides transparency
in terms of accessing and managing distributed resources.

Characteristics of operating system :

1. Concurrency: Operating systems manage multiple processes concurrently, allowing them
to run simultaneously and efficiently utilize system resources.

2. Resource Allocation: Operating systems allocate system resources such as CPU time,
memory, and devices among competing processes or users, ensuring fair and efficient
utilization.

3. Security: Operating systems implement security measures to protect the system and user
data from unauthorized access, viruses, and other threats. They enforce user permissions
and utilize authentication mechanisms.

4. Fault Tolerance: Some operating systems incorporate fault tolerance mechanisms to
handle hardware or software failures, ensuring system stability and availability.

5. Extensibility: Operating systems are designed to be extensible, allowing for the addition
of new functionality or customization without significant modifications to the core
system.
Q2. Explain in detail the process state diagram and the process control block. Explain the
various attributes of the PCB.
Ans 2.
Process State Diagram :
A process state diagram represents the various states a process can transition through during its
execution in an operating system. It provides a visual representation of the life cycle of a process
and how it interacts with the operating system.
The typical states in a process state diagram are:
1. New: This is the initial state when a process is created but has not yet been admitted to
the system. In this state, the process is waiting to be allocated system resources, such as
memory, and to be assigned a process identifier (PID).
2. Ready: When the process is in the ready state, it is waiting to be assigned to a CPU for
execution. It is in main memory and is eligible for execution. Multiple processes may be
in the ready state, and the operating system's scheduler decides which process to execute
next.
3. Running: In the running state, the process is currently being executed on a CPU. It is
actively running instructions and utilizing system resources. Only one process can be in
the running state on a single CPU at any given time. The process remains in this state until
it either voluntarily relinquishes the CPU or is preempted by the operating system
scheduler.
4. Blocked (or Waiting): When a process is unable to proceed further because it is waiting
for an event to occur, such as completion of an I/O operation or the availability of a
resource, it enters the blocked or waiting state. In this state, the process is not using the
CPU and is temporarily suspended until the event it is waiting for happens.
5. Terminated: When a process completes its execution, it enters the terminated state. In this
state, the process is no longer scheduled for execution, and its resources are released back
to the system. The process may still retain some information, such as exit status or
accounting data, until it is cleaned up by the operating system.
Processes can transition between these states based on various events and system calls. For
example, a process may transition from the New state to the Ready state when it is admitted to
the system, from the Ready state to the Running state when it is assigned a CPU, or from the
Running state to the Blocked state when it requests I/O and has to wait.
Process Control Block :
The Process Control Block (PCB) is a data structure used by the operating system to manage and
store information about a process. It is created when a process is created and remains associated
with that process throughout its lifetime. The PCB plays a vital role in process management and
facilitates the operating system's control and coordination of processes.
The main attributes of a PCB include the following (a simplified C sketch of such a structure
appears after the list):
1. Process ID: It is a unique identifier assigned to each process by the operating system. The
process ID allows the operating system to distinguish between different processes.
2. Program Counter: It keeps track of the address of the next instruction to be executed in the
process. The program counter is updated during context switches when the CPU switches
between different processes.
3. Process State: It represents the current state of the process, such as running, waiting,
ready, or terminated. The operating system uses the process state to manage process
execution and scheduling.
4. Priority: It indicates the relative importance or priority of a process. The priority value
determines the order in which processes are scheduled for execution. Higher priority
processes are given preferential treatment by the CPU scheduler.
5. Registers: Registers store the current execution context of a process, including the values
of CPU registers such as the program counter, stack pointer, and general-purpose
registers. These values are saved and restored during context switches.
6. List of Open Files: This attribute maintains a list of files that the process has opened for
reading or writing. It allows the process to keep track of its file-related operations and
ensures proper resource management.
7. List of Open Devices: It keeps track of the devices that the process has accessed or opened.
This information is used for device management, allowing the process to interact with
devices like printers, disk drives, or network interfaces.
8. CPU Scheduling Information: It includes details related to the process's scheduling, such
as the scheduling algorithm used, the time quantum assigned, and any other relevant
scheduling parameters. This information helps the operating system in making scheduling
decisions.
9. Memory Management Information: It includes data about the memory allocated to the
process, such as the base address and limit of the process's memory space. This
information is used by the memory management unit to control and manage the process's
memory access.
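
As a rough illustration of how these attributes map onto a kernel data structure, the sketch below
shows a simplified PCB in C. The field names, sizes, and types are assumptions made for this
example only; real kernels (for instance, the Linux task_struct) are far more elaborate.

/* Simplified, illustrative PCB layout. Field names and sizes are assumptions
   for this example only, not taken from any real kernel. */

enum proc_state { NEW, READY, RUNNING, BLOCKED, TERMINATED };

#define MAX_OPEN_FILES 16

struct pcb {
    int             pid;                        /* unique process identifier       */
    enum proc_state state;                      /* current process state           */
    int             priority;                   /* scheduling priority             */
    unsigned long   program_counter;            /* address of next instruction     */
    unsigned long   registers[16];              /* saved general-purpose registers */
    unsigned long   stack_pointer;              /* saved stack pointer             */
    int             open_files[MAX_OPEN_FILES]; /* descriptors of open files       */
    unsigned long   mem_base, mem_limit;        /* memory-management information   */
    unsigned int    time_quantum;               /* CPU scheduling information      */
    struct pcb     *next;                       /* link for ready/waiting queues   */
};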

Q3. How does the CPU switch from one process to another?

Ans 3.
The process of switching the CPU from one process to another is known as a context
switch. Here's an explanation of how the CPU switches from one process to another :
1. Interrupt Handling: When an interrupt occurs, such as a timer interrupt or an I/O request,
the CPU interrupts the currently running process to handle the interrupt. The interrupt
causes a trap or an exception, transferring control to the operating system.
2. Saving the Current Process's Context: When an interrupt occurs, the operating system
saves the context of the currently running process, including the values of CPU registers,
the program counter, and other relevant information. This information is stored in the
process's Process Control Block (PCB) so that the process can be resumed later.
3. Selecting a New Process: The operating system selects a new process to run from the pool
of ready processes. The selection can be based on various scheduling algorithms, such as
round-robin, priority-based, or shortest job first. The decision is made considering factors
like process priority, waiting time, or other scheduling criteria.
4. Loading the New Process's Context: Once a new process is selected, the operating system
loads its saved context from its PCB. This includes restoring the values of CPU registers,
the program counter, and other relevant information.
5. Resuming Execution: With the context of the new process loaded, the CPU resumes
execution from the point where the previous process was interrupted. The new process
continues executing its instructions, utilizing the CPU's resources.
This process of context switching allows the CPU to efficiently handle multiple processes
and provide the illusion of simultaneous execution. The operating system manages the
scheduling and switching of processes, ensuring fairness, efficiency, and proper resource
allocation.
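
A minimal sketch of this flow in C is shown below. It assumes the simplified struct pcb from the
previous answer, and the helpers save_cpu_state, load_cpu_state, and pick_next_ready are
hypothetical stand-ins for architecture-specific kernel code, not real APIs.

/* Assumes the struct pcb and enum proc_state sketched in the answer to Q2.
   The extern helpers are hypothetical, architecture-specific routines. */
extern void save_cpu_state(struct pcb *p);   /* copy registers and PC into the PCB  */
extern void load_cpu_state(struct pcb *p);   /* restore registers and PC from a PCB */
extern struct pcb *pick_next_ready(void);    /* short-term scheduler decision       */

struct pcb *current;                         /* process currently owning the CPU    */

void context_switch(void)
{
    save_cpu_state(current);                 /* step 2: save the old context        */
    current->state = READY;

    struct pcb *next = pick_next_ready();    /* step 3: select a new process        */
    next->state = RUNNING;

    current = next;
    load_cpu_state(current);                 /* steps 4-5: load context and resume  */
}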

Q4. What are different types of schedulers? Explain.

Ans 4.

There are three types of schedulers:


1. Long-Term Scheduler : The long-term scheduler is responsible for selecting which
processes from the "New" state should be admitted to the system and moved to the
"Ready" state. It determines the degree of multiprogramming or the number of processes
allowed to run concurrently. The goal of the long-term scheduler is to maintain a balance
between system throughput and resource utilization. It typically runs less frequently as its
decision-making process involves analyzing system loads and available resources.
2. Short-Term Scheduler : The short-term scheduler, also known as the CPU scheduler,
selects a process from the "Ready" state to be executed on the CPU. It decides which
process should be given the CPU time for a specific period. The primary objective of the
short-term scheduler is to provide efficient CPU utilization and fair process execution. It
makes frequent and rapid scheduling decisions, often in a matter of milliseconds or
microseconds. The short-term scheduler uses various algorithms, such as Round Robin,
Shortest Job Next, or Priority Scheduling, to determine the order of process execution.
3. Medium-Term Scheduler: The medium-term scheduler is an optional scheduler that exists
in some operating systems. It performs process swapping or process suspension by
moving processes from main memory to secondary storage (such as the hard disk) and
vice versa. This swapping mechanism helps in managing memory resources efficiently.
The medium-term scheduler is responsible for selecting processes to be swapped out of
main memory to free up space and ensure a sufficient number of ready processes in
memory.

Q5. What is an interrupt in an OS? What are the different types of interrupts? How is interrupt
handling done in the OS?

Ans 5.
An interrupt in an operating system is a signal emitted by hardware or software when an event
or process requires immediate attention. It notifies the processor about a high-priority task that
needs to interrupt the current process in execution. In the case of I/O devices, a dedicated line
on the bus, known as an interrupt request (IRQ) line, is used for this purpose; the routine that
services the interrupt is called the Interrupt Service Routine (ISR).
There are two main types of interrupts:

1. Hardware Interrupts: These interrupts are triggered by external hardware devices. They indicate
a particular condition or request attention from the operating system. Examples include
keyboard inputs, mouse movements, or disk I/O completions.

2. Software Interrupts: Software interrupts, also called traps or exceptions, are generated by
software programs themselves. They are used to indicate specific conditions or request services
from the operating system. These interrupts can be triggered by executing specific instructions
or encountering certain conditions during program execution.

The handling of interrupts in an operating system typically follows these steps (a small
user-space illustration appears after the list):

1. Interrupt Detection: The processor continuously monitors for interrupt signals from hardware
devices or software-generated interrupts.

2. Interrupt Request (IRQ): When an interrupt is detected, the interrupting device or software
generates an interrupt request (IRQ) to the processor. This IRQ is then sent to the interrupt
controller, which manages and prioritizes the interrupts.

3. Interrupt Vectoring: The interrupt controller assigns a unique interrupt vector to the interrupt
based on its priority. This vector serves as an index to locate the corresponding interrupt handler
routine in the interrupt vector table.

4. Interrupt Handling: Upon receiving the interrupt, the processor suspends the current execution
of instructions, saves the necessary context (such as the program counter and register values),
and loads the address of the appropriate interrupt handler routine from the interrupt vector table.
This transfers control to the interrupt handler routine.

5. Interrupt Service Routine (ISR): The interrupt handler routine, also known as the Interrupt
Service Routine (ISR), contains the specific code to handle the interrupt. It may involve
processing data from the interrupting device, updating system state, or servicing a software
request.

6. Interrupt Completion: Once the interrupt handling is complete, the processor restores the saved
context, including the program counter and register values, and resumes the execution of the
interrupted program or process.
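
Hardware interrupt handlers run inside the kernel, but the same pattern of suspending normal
execution, running a handler, and resuming can be illustrated from user space with POSIX
signals, which behave like software interrupts delivered to a process. A minimal sketch
(Linux/POSIX assumed; SIGINT, i.e. Ctrl+C, plays the role of the interrupt):

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t got_signal = 0;

static void handler(int signo)        /* plays the role of the ISR            */
{
    (void)signo;
    got_signal = 1;                   /* record the event and return quickly  */
}

int main(void)
{
    struct sigaction sa = {0};
    sa.sa_handler = handler;          /* install the handler ("vector" entry) */
    sigaction(SIGINT, &sa, NULL);

    while (!got_signal)               /* the "interrupted program"            */
        pause();                      /* sleep until a signal arrives         */

    printf("signal handled, normal execution resumed\n");
    return 0;
}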

Q6. Explain the UNIX operating system. What is the difference between LINUX and UNIX?
Discuss the architecture of the Linux Operating System. What are the features of the Linux
operating system?
Ans 6.
UNIX is a multi-user, multi-tasking operating system that was developed in the 1970s at Bell
Laboratories. It provides a stable and robust environment for a wide range of computing tasks.
The UNIX operating system is known for its flexibility, portability, and powerful command-
line interface.

Difference between LINUX and UNIX:


Linux is often referred to as a UNIX-like operating system because it was developed to be
compatible with the POSIX standard, which defines a set of standards for operating systems
based on the original UNIX system. However, there are some key differences between Linux
and traditional UNIX systems:
1. Development: UNIX systems have multiple commercial variants developed by different
vendors, such as Solaris, AIX, and HP-UX. Linux, on the other hand, is an open-source
operating system that has been developed collaboratively by a large community of developers.
2. Licensing: UNIX systems generally have proprietary licenses and may require licensing fees.
Linux is distributed under open-source licenses, such as the GNU General Public License
(GPL), which allows users to freely use, modify, and distribute the operating system.
3. Kernel: UNIX systems typically use their own proprietary kernels, while Linux uses the Linux
kernel. The Linux kernel was developed from scratch, inspired by the design principles of
UNIX.
4. Standards: UNIX systems aim to comply with the Single UNIX Specification (SUS) standard,
which ensures compatibility across different UNIX variants. Linux may not be certified as
UNIX, but it adheres to POSIX standards and provides similar functionality.

Architecture of the Linux Operating System:


The Linux operating system follows a layered architecture, consisting of several key components:
1. Hardware Layer: This layer includes the physical hardware components, such as the CPU,
memory, disks, and network interfaces.
2. Kernel Layer: The kernel is the core component of the operating system. It directly interacts
with the hardware and provides essential services, including process management, memory
management, device drivers, and file system access.
3. System Call Interface: The system call interface allows user programs to interact with the kernel
and request services. It provides a set of functions that user programs can invoke to perform
various tasks.
4. Library Layer: The library layer includes a collection of pre-compiled functions and utilities that
extend the functionality of the kernel. It includes standard C libraries, such as the GNU C
Library (glibc), which provide common functions for application development.
5. Shell and Utilities: The shell is a command-line interpreter that allows users to interact with the
system by entering commands. Linux provides various shells, such as Bash (Bourne Again
Shell), along with a wide range of utilities and tools for managing files, processes, and system
configuration.

Linux Operating System Features:


Linux offers several key features that contribute to its popularity and success:
1. Multi-User and Multi-Tasking: Linux supports multiple users concurrently, allowing multiple
users to log in and run processes simultaneously. It also provides multi-tasking capabilities,
enabling multiple processes to run concurrently.
2. Portability: Linux is highly portable and can run on a wide range of hardware platforms,
including PCs, servers, embedded systems, and mobile devices.
3. Security: Linux has a robust security model, offering features such as user and group
permissions, file encryption, access control lists (ACLs), and secure remote access protocols.
4. Networking: Linux provides comprehensive networking capabilities, supporting various
networking protocols, including TCP/IP. It can function as a network server, router, firewall,
or workstation in a networked environment.
5. Open-Source and Customizability: Linux is open-source, allowing users and developers to
access the source code and modify it to suit their needs. This flexibility and customizability
have led to a vast ecosystem of Linux distributions tailored for different purposes.
6. Stability and Reliability: Linux is known for its stability and reliability, often running for
extended periods without needing to be rebooted. It has a reputation for handling heavy
workloads and critical systems effectively.

Q7. What is a System Call? Explain System Call execution. Describe the types of System
Calls.
Ans 7.
A system call is a programming interface provided by the operating system that allows
applications to request services from the kernel, which is the core component of the operating
system. System calls provide an abstraction layer between user-level applications and the low-
level hardware and resources of a computer system.
When a program needs to perform a privileged operation or access a resource that requires
kernel-level permissions (such as reading from or writing to a file, creating a new process, or
allocating memory), it makes a system call to request the kernel's assistance. The system call
provides a way for the application to transition from user mode to kernel mode, where it can
execute privileged instructions.
Here's a general overview of the steps involved in executing a system call (an illustrative
C example appears after the list):
1. User Mode: The application executes in user mode, where it has limited access to system
resources.
2. System Call Request: The application makes a system call by invoking a specific function or
issuing a trap instruction, which transfers control to a predefined entry point in the kernel.
3. Kernel Mode Transition: The processor switches from user mode to kernel mode, granting the
kernel full access to system resources.
4. Kernel Execution: The kernel receives the system call request, validates it, and performs the
requested operation on behalf of the application. This may involve interacting with hardware
devices, managing memory, or performing other privileged operations.
5. Return to User Mode: Once the kernel completes the requested operation, it returns control to
the application by transferring back to user mode.

6. Application Resumes: The application continues execution from the point where the system call
was made, typically with the results of the system call available for further processing.
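
As a small, hedged illustration (Linux and glibc assumed), the program below requests the same
kernel service twice: once through the glibc wrapper write(), and once by invoking the system
call directly with syscall(). In both cases the kernel performs the actual I/O in kernel mode and
then returns control to the program in user mode.

#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
    const char *msg = "hello from a system call\n";

    /* Library wrapper: glibc performs the user-to-kernel transition for us. */
    write(STDOUT_FILENO, msg, strlen(msg));

    /* Direct invocation of the same service by system call number. */
    syscall(SYS_write, STDOUT_FILENO, msg, strlen(msg));

    return 0;
}
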
System calls can be broadly categorized into several types based on the functionality they provide.
Here are some common types of system calls:
1. Process Control: System calls for process management, such as creating and terminating
processes, executing programs, and getting process attributes.
2. File Management: System calls for file-related operations, including creating, opening, closing,
reading, and writing files.
3. Device Management: System calls to interact with devices, such as reading from or writing to
devices, controlling device behavior, and managing device drivers.
4. Information Maintenance: System calls to retrieve or modify system and process information,
such as obtaining the system time, retrieving system configuration, and manipulating system
variables.
5. Communication: System calls for inter-process communication, allowing processes to send
messages, establish communication channels, and synchronize their activities.
System calls form a critical interface between applications and the underlying operating
system, enabling them to leverage the services and resources provided by the kernel.
Q8. Explain the concept of a Shell. Explain the different types of Shells (sh, csh, bash, ksh).
Ans 8.
A shell is a command-line interface (CLI) or a graphical user interface (GUI) that provides a
way for users to interact with an operating system. It acts as an
intermediary between the user and the operating system, allowing users to execute commands,
run programs, manage files, and perform various system-related tasks.
The shell interprets the commands entered by the user and translates them into instructions that
the operating system can understand and execute. It also provides features like input/output
redirection, piping, scripting, and the ability to customize the shell environment.
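
At its core, a shell repeatedly reads a command, creates a child process to run it, and waits for
the child to finish. The fragment below is a minimal sketch of that loop in C (single-word
commands only; argument parsing, pipes, and error handling are omitted):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    char line[256];

    while (printf("mysh> "), fflush(stdout), fgets(line, sizeof line, stdin)) {
        line[strcspn(line, "\n")] = '\0';       /* strip the trailing newline         */
        if (strcmp(line, "exit") == 0)
            break;
        if (line[0] == '\0')
            continue;

        pid_t pid = fork();                     /* create a child process             */
        if (pid == 0) {
            execlp(line, line, (char *)NULL);   /* replace the child with the command */
            perror("exec failed");              /* reached only if exec fails         */
            exit(1);
        }
        waitpid(pid, NULL, 0);                  /* parent waits for the child         */
    }
    return 0;
}
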
There are several different types of shells available, each with its own syntax, features, and
history. Here are some commonly used types of shells:
1. Bourne Shell (sh): The Bourne Shell was one of the first Unix shells and serves as the
foundation for many other shells. It has a simple and straightforward syntax and provides
basic command-line functionality.
2. C Shell (csh): The C Shell was developed as an enhanced version of the Bourne Shell. It
introduced features like command history, command-line editing, and a C-like syntax.
The C Shell is popular among developers and programmers due to its programming-
friendly features.
3. Bourne-Again Shell (bash): The Bash shell is one of the most widely used shells and is
the default shell for many Linux distributions. It is compatible with the Bourne Shell and
incorporates features from the C Shell and the Korn Shell. Bash provides an extensive set
of features, including command completion, command history, shell scripting
capabilities, and support for variables and functions.
4. Korn Shell (ksh): The Korn Shell was developed by David Korn as an enhancement to
the Bourne Shell. It combines features from the Bourne Shell and the C Shell while adding
its own improvements. Korn Shell provides advanced scripting capabilities, command-
line editing, and powerful programming constructs.
Q9. Explain Multilevel Queue Scheduling and Multilevel Feedback Queue Scheduling with
examples.

Ans 9.

Multilevel Queue Scheduling: Multilevel queue scheduling is a scheduling algorithm that
categorizes processes into multiple queues based on different criteria, such as process priority,
process type, or resource requirements. Each queue has its own scheduling algorithm and
priority level, allowing processes to be scheduled according to their specific characteristics.
The queues are usually arranged in a hierarchical manner, with each queue having a different
priority.

Here's an example to illustrate multilevel queue scheduling:

Suppose we have three queues: High Priority Queue, Medium Priority Queue, and Low Priority
Queue. Each queue is assigned a different priority level, with High Priority Queue having the
highest priority.

• High Priority Queue: Contains processes that require immediate execution, such as real-
time tasks or critical system processes.

• Medium Priority Queue: Contains processes that are important but not as time-sensitive
as those in the High Priority Queue.

• Low Priority Queue: Contains processes with low priority, such as background tasks or
batch processing jobs.

The scheduler selects a process from the highest priority queue first. If there are multiple
processes in the same queue, a suitable scheduling algorithm, such as Round Robin or First-
Come, First-Served, can be applied within that queue. Once the highest priority queue becomes
empty, the scheduler moves to the next lower priority queue and repeats the process.
The multilevel queue scheduling algorithm allows for different scheduling policies and
priorities for different types of processes, ensuring that critical tasks are executed promptly
while providing fair allocation of resources to other processes.
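
A minimal sketch of the selection logic in C, assuming the simplified struct pcb sketched earlier
(in the answer to Q2) and hypothetical queue helpers dequeue() and queue_empty():

#define NUM_QUEUES 3                           /* 0 = high, 1 = medium, 2 = low priority   */

struct pcb;                                    /* as sketched in the answer to Q2          */
extern struct pcb *dequeue(int queue_index);   /* hypothetical: remove head of a queue     */
extern int queue_empty(int queue_index);       /* hypothetical: is the queue empty?        */

struct pcb *select_next_process(void)
{
    for (int q = 0; q < NUM_QUEUES; q++)       /* scan from highest priority downwards     */
        if (!queue_empty(q))
            return dequeue(q);                 /* e.g. Round Robin within that queue       */
    return 0;                                  /* nothing runnable: schedule the idle task */
}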

Multilevel Feedback Queue Scheduling: Multilevel feedback queue scheduling is an extension
of the multilevel queue scheduling algorithm. It allows processes to move between different
queues dynamically based on their behavior and resource usage. This algorithm provides a
more flexible approach by adjusting the priority of processes based on their past behavior.

Here's an example to illustrate multilevel feedback queue scheduling:

Suppose we have three queues: High Priority Queue, Medium Priority Queue, and Low Priority
Queue, similar to the previous example. However, in multilevel feedback queue scheduling, a
process can move between queues based on certain criteria.

Initially, new processes are placed in the High Priority Queue. The scheduler always selects a
process from the highest-priority non-empty queue. If a process uses a significant amount of
CPU time (e.g., consumes its entire time quantum), it is demoted to a lower-priority queue.
Conversely, if a process exhibits interactive behavior (e.g., gives up the CPU before its quantum
expires), it can remain in, or be promoted back to, a higher-priority queue.
The idea behind multilevel feedback queue scheduling is to allow long-running processes to
move to lower priority queues to give a chance to shorter and interactive processes in higher
priority queues. This ensures fairness and responsiveness in scheduling while adapting to the
behavior of different processes dynamically.

Overall, multilevel feedback queue scheduling provides a flexible approach that allows
processes to adjust their priority dynamically based on their resource usage, ensuring efficient
resource utilization and responsiveness in scheduling.
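
A minimal sketch of one possible feedback rule, continuing the assumptions of the previous
sketch (queue 0 is the highest priority; enqueue() is a hypothetical helper):

#define NUM_QUEUES 3                           /* 0 = highest priority, as above          */

struct pcb;                                    /* as sketched in the answer to Q2         */
extern void enqueue(int queue_index, struct pcb *p);   /* hypothetical helper             */

void requeue_after_quantum(struct pcb *p, int current_queue, int used_full_quantum)
{
    int new_queue = current_queue;

    if (used_full_quantum && current_queue < NUM_QUEUES - 1)
        new_queue = current_queue + 1;         /* CPU-bound behaviour: demote             */
    else if (!used_full_quantum && current_queue > 0)
        new_queue = current_queue - 1;         /* interactive behaviour: promote          */

    enqueue(new_queue, p);                     /* place back into the ready queues        */
}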

Q10. The following processes are being scheduled by using First Come First Serve (FCFS),
Shortest Job First (SJF), Shortest Remaining Time First (SRTF), Longest Job First (LJF),
Longest Remaining Time First (LRTF), Round Robin (RR), Priority, and Highest Response Ratio
Next (HRRN) scheduling algorithms. Each process is assigned a numerical priority, with a
higher number indicating a higher relative priority. In addition to the processes listed below,
the system also has an idle task (which consumes no CPU resources and is identified as idle).
This task has priority 0 and is scheduled whenever the system has no other available processes
to run. The length of a time quantum is 10 units.
1. Show the scheduling order of the processes using a Gantt chart for each scheduling.

2. What is the turnaround time for each process?

3. What is the waiting time for each process?


4. What is the average waiting time?

5. What is the CPU utilization rate?


6. What is the CPU idleness rate?

7. What are the number of context switches?

Ans 10.
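
The process table for this question is not reproduced in this copy, so the actual figures cannot
be computed here. As a hedged illustration only, the sketch below uses purely hypothetical
processes to show how the requested metrics are derived for FCFS: completion time accumulates
in arrival order, turnaround time = completion time - arrival time, waiting time = turnaround
time - burst time, CPU utilization = busy time / total time, and the idleness rate is its
complement. The same bookkeeping, applied to the actual table, yields the Gantt chart and the
required figures for each algorithm.

#include <stdio.h>

struct proc { const char *name; int arrival, burst; };

int main(void)
{
    /* Hypothetical workload (NOT the table from the question), in FCFS order. */
    struct proc p[] = { {"P1", 0, 24}, {"P2", 1, 3}, {"P3", 2, 3} };
    int n = 3, time = 0, busy = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        if (time < p[i].arrival)               /* CPU idles until the next arrival   */
            time = p[i].arrival;
        time += p[i].burst;                    /* completion time under FCFS         */
        busy += p[i].burst;
        int tat  = time - p[i].arrival;        /* turnaround = completion - arrival  */
        int wait = tat - p[i].burst;           /* waiting = turnaround - burst       */
        total_wait += wait;
        printf("%s: turnaround = %d, waiting = %d\n", p[i].name, tat, wait);
    }
    printf("average waiting = %.2f, CPU utilization = %.1f%%, idleness = %.1f%%\n",
           (double)total_wait / n,
           100.0 * busy / time,
           100.0 - 100.0 * busy / time);
    return 0;
}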
