1. Define system call and explain its types with examples.

A system call is the method by which a computer program requests a service from the kernel of
the operating system it is running on. In other words, it is the interface through which user
programs interact with the operating system's kernel.

There are commonly five types of system calls. These are as follows:

1. Process Control
2. File Management
3. Device Management
4. Information Maintenance
5. Communication

Process Control
Process control system calls are used to create and manage processes. Examples include
creating a process, loading and executing a program, waiting for a process, aborting, and
terminating a process.

File Management
File management system calls are used to handle files. Examples include creating, deleting,
opening, closing, reading, and writing files.

Device Management
Device management system calls are used to deal with devices. Examples include requesting and
releasing a device, reading from and writing to a device, and getting or setting device
attributes.

Information Maintenance
Information maintenance system calls are used to transfer information between a program and
the operating system. Examples include getting or setting the time and date and getting or
setting system data.

Communication
Communication system calls are used for inter-process communication. Examples include creating
and deleting communication connections and sending and receiving messages.
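
As a concrete illustration, the following minimal sketch (assuming a Unix-like system and
Python's os module, which wraps the underlying kernel calls) shows a user program triggering
several of these system call types: fork and waitpid for process control, open/write/close for
file management, and getpid for information maintenance.

```
import os

# Process control: fork() asks the kernel to create a child process (Unix-only).
pid = os.fork()

if pid == 0:
    # File management: open, write, and close a file through kernel calls.
    fd = os.open("demo.txt", os.O_WRONLY | os.O_CREAT, 0o644)
    os.write(fd, b"written by the child process\n")
    os.close(fd)
    # Information maintenance: getpid() returns the caller's process ID.
    print("child pid:", os.getpid())
    os._exit(0)          # Process control: terminate the child process.
else:
    os.waitpid(pid, 0)   # Process control: parent waits for the child to finish.
    print("parent pid:", os.getpid())
```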

Summarize the essential properties of the following types of operating systems (i)
Batch (ii) Distributed (iii) Real-Time

Batch Operating System


In the 1970s, batch processing was very popular. In this technique, similar types of jobs
were batched together and executed one batch at a time. Users shared a single computer,
which was called a mainframe.

In a batch operating system, access is given to more than one user; each user submits their
respective jobs to the system for execution.

The system puts all of the jobs in a queue on a first-come, first-served basis and then
executes the jobs one by one. The users collect their respective output once all the jobs have
been executed.
The main purpose of this operating system was to transfer control from one job to the next as
soon as a job was completed. It contained a small set of programs called the resident monitor
that always resided in one part of main memory. The remaining part was used for servicing
jobs.
Advantages of Batch OS

o The use of a resident monitor improves computer efficiency, as it eliminates the CPU idle
time between two jobs.

Disadvantages of Batch OS
1. Starvation: a long job ahead in the queue can keep shorter jobs waiting for a long time.

Distributed Operating System


A distributed operating system is not installed on a single machine; it is divided into parts,
and these parts are loaded onto different machines. A part of the distributed operating system
is installed on each machine so that the machines can communicate with one another.
Distributed operating systems are much more complex, large, and sophisticated than network
operating systems because they also have to take care of varying networking protocols.

Advantages of Distributed Operating System

o The distributed operating system provides sharing of resources.


o This type of system is fault-tolerant.

Disadvantages of Distributed Operating System


o Protocol overhead can dominate computation cost.

(iii) Real-Time Operating System:

 Real-time operating systems are designed to handle time-sensitive tasks and provide
predictable, deterministic behavior.
 They are used in applications where strict timing constraints must be met,
such as aerospace, industrial control systems, and medical devices.
 Real-time systems can be categorized into hard real-time (strict deadlines)
and soft real-time (missed deadlines are acceptable but degrade
performance).
 They typically have specialized scheduling algorithms and mechanisms to
ensure critical tasks are executed on time.
 Examples of real-time operating systems include VxWorks, QNX, and RTLinux.

Distinguish among Multi-Programming, Time Sharing and Multi-Processing OS.


Difference between Time Sharing and Multiprogramming:

1. Time sharing is the logical extension of multiprogramming: in a time-sharing operating
   system, many users/processes are allocated computer resources in their respective time
   slots. A multiprogramming operating system allows multiple processes to execute by
   monitoring their process states and switching between them.
2. The processor's time is shared among multiple users, which is why it is called a
   time-sharing operating system. In multiprogramming, the processor and memory
   under-utilization problem is resolved because multiple programs run on the CPU, which is
   why it is called multiprogramming.
3. In time sharing, two or more users can use the processor from their own terminals. In
   multiprogramming, the processes are executed by a single processor.
4. A time-sharing OS has a fixed time slice; a multiprogramming OS has no fixed time slice.
5. In a time-sharing system, execution power is taken away from a process before it finishes
   executing; in a multiprogramming system, execution power is not taken away before a task
   finishes.
6. A time-sharing system works for the same or less time on each process; a multiprogramming
   system does not take the same time to work on different processes.
7. A time-sharing system depends on time to switch between processes; a multiprogramming
   system depends on devices, such as I/O interrupts, to switch between tasks.
8. The system model of a time-sharing system is multiple programs and multiple users; the
   system model of a multiprogramming system is multiple programs.
9. A time-sharing system minimizes response time; a multiprogramming system maximizes
   processor use.
10. Example of a time-sharing OS: Windows NT. Example of a multiprogramming OS: Mac OS.

Comparison of Multiprogramming, Multitasking, Multithreading and Multiprocessing:

Definition
- Multiprogramming: running multiple programs on a single CPU.
- Multitasking: running multiple tasks (applications) on a single CPU.
- Multithreading: running multiple threads within a single task (application).
- Multiprocessing: running multiple processes on multiple CPUs (or cores).

Resource Sharing
- Multiprogramming: resources (CPU, memory) are shared among programs.
- Multitasking: resources (CPU, memory) are shared among tasks.
- Multithreading: resources (CPU, memory) are shared among threads.
- Multiprocessing: each process has its own set of resources (CPU, memory).

Scheduling
- Multiprogramming: uses round-robin or priority-based scheduling to allocate CPU time to programs.
- Multitasking: uses priority-based or time-slicing scheduling to allocate CPU time to tasks.
- Multithreading: uses priority-based or time-slicing scheduling to allocate CPU time to threads.
- Multiprocessing: each process can have its own scheduling algorithm.

Memory Management
- Multiprogramming: each program has its own memory space.
- Multitasking: each task has its own memory space.
- Multithreading: threads share the memory space within a task.
- Multiprocessing: each process has its own memory space.

Context Switching
- Multiprogramming: requires a context switch to switch between programs.
- Multitasking: requires a context switch to switch between tasks.
- Multithreading: requires a context switch to switch between threads.
- Multiprocessing: requires a context switch to switch between processes.

Inter-Process Communication (IPC)
- Multiprogramming: uses message passing or shared memory for IPC.
- Multitasking: uses message passing or shared memory for IPC.
- Multithreading: uses thread synchronization mechanisms (e.g., locks, semaphores) for IPC.
- Multiprocessing: uses inter-process communication mechanisms (e.g., pipes, sockets) for IPC.
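
The difference in memory sharing between threads and processes can be seen directly in code.
The following minimal sketch (assuming Python's standard threading and multiprocessing
modules) shows that a thread updates a variable in the parent's memory space, while a separate
process works on its own copy, leaving the parent's variable unchanged.

```
import threading
import multiprocessing

counter = 0  # variable in the parent's memory space

def bump():
    global counter
    counter += 1

if __name__ == "__main__":
    # A thread shares the parent's memory space, so the update is visible.
    t = threading.Thread(target=bump)
    t.start(); t.join()
    print("after thread:", counter)    # prints 1

    # A new process gets its own memory space (a copy), so the parent's
    # counter is not affected by the child's update.
    p = multiprocessing.Process(target=bump)
    p.start(); p.join()
    print("after process:", counter)   # still prints 1
```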

SIMPLE STRUCTURE
It is the most straightforward operating system structure, but it lacks a well-defined
structure and is only appropriate for small and restricted systems. Because the interfaces and
levels of functionality are not well separated, application programs are able to access basic
I/O routines directly, which may result in unauthorized access to I/O procedures.

This organizational structure is used by the MS-DOS operating system:

o There are four layers that make up the MS-DOS operating system, and each has its
own set of features.
o These layers include ROM BIOS device drivers, MS-DOS device drivers, application
programs, and system programs.
o The MS-DOS operating system benefits from layering because each level can be
defined independently and, when necessary, can interact with one another.
o If the system is built in layers, it will be simpler to design, manage, and update.
Because of this, simple structures can be used to build constrained systems that are
less complex.
o When a user program fails, the operating system as a whole crashes.
o Because MS-DOS systems have a low level of abstraction, programs and I/O
procedures are visible to end users, giving them the potential for unwanted access.

(Figure: layering in the simple structure of MS-DOS - not included here.)


Advantages of Simple Structure:

o Because there are only a few interfaces and levels, it is simple to develop.
o Because there are fewer layers between the hardware and the applications, it offers
superior performance.

Disadvantages of Simple Structure:

o The entire operating system breaks if just one user program malfunctions.
o Since the layers are interconnected, and in communication with one another, there is
no abstraction or data hiding.
o The operating system's operations are accessible to layers, which can result in data
tampering and system failure.

Explain the different tasks performed by an OS.

An operating system (OS) is software that manages computer hardware and software resources and
provides a platform for other software applications to run. It performs various tasks to ensure
efficient and reliable operation of a computer system. Here are some key tasks performed by an
operating system:

1. Process management: The OS manages and schedules processes (programs in execution) to
ensure fair and efficient utilization of the CPU (central processing unit). It allocates system
resources, such as memory and CPU time, to different processes, and facilitates process
synchronization and communication.

2. Memory management: The OS is responsible for managing system memory, which includes
allocating and deallocating memory to processes, maintaining memory protection, and
implementing virtual memory techniques to efficiently use physical memory and provide a larger
effective memory space.
3. File system management: The OS provides a file system that organizes and
manages files on storage devices such as hard drives. It handles file creation,
deletion, and manipulation, and ensures secure and reliable access to files. This
includes managing file permissions, file organization, and disk space allocation.

4. Device management: The OS controls and manages input/output devices, such as keyboards,
mice, printers, and network interfaces. It provides device drivers to communicate with these
devices, handles input/output requests, and manages device allocation and scheduling.

5. User interface: The OS provides a user interface through which users interact with
the computer system. This can be a command-line interface (CLI) where users enter
commands, or a graphical user interface (GUI) with icons, windows, and menus. The
OS also handles input/output operations and manages user accounts and
permissions.

6. Network management: In networked environments, the OS facilitates network communication by
providing networking protocols, managing network connections, and enabling the computer to
access remote resources. It handles tasks such as IP addressing, routing, and data
transmission.

7. Security management: The OS implements security measures to protect the system from
unauthorized access, viruses, and malicious software. It provides user authentication
mechanisms, access controls, and encryption capabilities to ensure data and system integrity.

8. Error handling and recovery: The OS detects and handles errors and exceptions
that may occur during system operation. It provides mechanisms for error reporting,
error handling, and system recovery to minimize the impact of failures and maintain
system stability.

These tasks represent a broad overview of the responsibilities typically handled by an
operating system. Different operating systems may have variations in the specific
implementations and features they provide, but the core functions remain consistent across
most operating systems.


MODULE-3
What do you mean by Logical Address and Physical Address? How are these two involved
in Memory Management?

In computer systems, both logical addresses and physical addresses are used in memory
management.

A logical address, also known as a virtual address, is an address generated by the CPU (Central
Processing Unit) during program execution. It represents the location in the logical address space
of a process. The logical address space is typically larger than the physical address space and is
divided into smaller units called pages or segments. Logical addresses are used by the CPU and
the operating system to access memory and perform operations like reading or writing data.

On the other hand, a physical address refers to the actual location of data in the physical
memory (RAM). It represents the address at which the data is stored in the main memory.
Physical addresses are used by the memory management unit (MMU) to translate logical
addresses into physical addresses during the process of memory access.

Memory management involves the allocation and tracking of memory resources for processes in
a computer system. The translation of logical addresses to physical addresses is a crucial part of
memory management, and it is handled by the hardware and the operating system.

When a program references a logical address, the MMU performs the address translation by
mapping the logical address to the corresponding physical address. This translation is based on
the memory management technique employed, such as paging or segmentation. The MMU
maintains a mapping table or page table that keeps track of the logical-to-physical address
mappings.

The primary goal of memory management is to provide each process with a contiguous logical
address space, while efficiently utilizing the physical memory resources. By using logical
addresses, the operating system can isolate processes from one another, ensuring memory
protection and security. The translation from logical addresses to physical addresses allows the
operating system to load data into the physical memory as needed and manage memory
resources effectively.
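
To make the translation concrete, here is a minimal sketch of the kind of lookup an MMU
performs under paging. The page size and the page-table contents below are hypothetical values
chosen only for illustration.

```
PAGE_SIZE = 4096  # assumed page size of 4 KB

# Hypothetical page table for one process: logical page number -> physical frame number.
page_table = {0: 5, 1: 9, 2: 2, 3: 7}

def translate(logical_address):
    """Translate a logical (virtual) address into a physical address under paging."""
    page_number = logical_address // PAGE_SIZE      # which logical page
    offset = logical_address % PAGE_SIZE            # position inside the page
    frame_number = page_table[page_number]          # a missing entry would model an invalid access
    return frame_number * PAGE_SIZE + offset

# Logical address 4100 lies in page 1 at offset 4; page 1 maps to frame 9,
# so the physical address is 9 * 4096 + 4 = 36868.
print(translate(4100))
```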

What do you mean by MFT and MVT? Explain with an Example.


MFT (Multiprogramming with a Fixed number of Tasks) and MVT (Multiprogramming with a
Variable number of Tasks) are memory management techniques used in operating systems to
allocate and manage memory resources for multiple processes.

1. MFT (Multiprogramming with a Fixed number of Tasks):


MFT divides the available memory into fixed partitions of equal size, with each partition
allocated to a specific process. The number of partitions is predetermined and remains fixed.
Each process is assigned a fixed partition size regardless of its actual memory requirements. This
technique is commonly used in early systems where the memory was divided into fixed regions.

Example: Let's consider a system with a total memory size of 16KB and four processes: P1, P2,
P3, and P4. The memory is divided into four equal partitions of 4KB each (M1, M2, M3, M4).
Each process is allocated a fixed partition.

- Process P1 is allocated partition M1 (4KB).

- Process P2 is allocated partition M2 (4KB).

- Process P3 is allocated partition M3 (4KB).

- Process P4 is allocated partition M4 (4KB).

In this example, each process has a fixed partition size. However, if a process needs less
memory than its partition, the unused space is wasted (internal fragmentation), and if it needs
more than the allocated partition size, it cannot be accommodated.

2. MVT (Multiprogramming with a Variable number of Tasks):

MVT allows for dynamic allocation of memory to processes based on their actual memory
requirements. The available memory is not divided into fixed partitions but is treated as a single
contiguous block. Each process is allocated memory as needed, and the size of the allocated
memory can vary over time.

Example: Consider a system with a total memory size of 16KB and three processes: P1, P2, and
P3. Initially, all three processes are loaded into memory, and they are assigned memory blocks
based on their requirements.

- Process P1 is allocated a memory block of 6KB.

- Process P2 is allocated a memory block of 4KB.

- Process P3 is allocated a memory block of 5KB.

As processes are loaded and unloaded, the memory blocks are dynamically allocated and
deallocated based on the memory requirements of the processes. This flexibility allows for
efficient memory utilization and reduces internal fragmentation compared to MFT.
In summary, MFT divides memory into fixed partitions, allocating one partition per process, while
MVT dynamically allocates memory to processes based on their needs. The choice between
these techniques depends on the specific requirements of the operating system and the
characteristics of the processes it needs to manage.
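
The difference between the two schemes can be sketched in a few lines of code. The partition
sizes and process names below simply reuse the figures from the examples above; this is an
illustrative sketch, not how a real operating system stores its tables.

```
# MFT: memory is pre-divided into fixed 4 KB partitions; each process occupies
# one whole partition whether or not it needs all of it (internal fragmentation).
mft_partitions = [4, 4, 4, 4]                              # partition sizes in KB
mft_allocation = {"P1": 0, "P2": 1, "P3": 2, "P4": 3}      # process -> partition index

# MVT: memory is one 16 KB block; each process is given exactly the amount it
# requests, and whatever is left over stays free for later requests.
total_kb = 16
requests = {"P1": 6, "P2": 4, "P3": 5}

mvt_allocation = {}
free_kb = total_kb
for name, need in requests.items():
    if need <= free_kb:                # allocate only if enough memory remains
        mvt_allocation[name] = need
        free_kb -= need

print("MFT:", mft_allocation)
print("MVT:", mvt_allocation, "free:", free_kb, "KB")   # 1 KB left unallocated
```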

Write a short note on Memory Management in OS.


Memory management in an operating system involves the management and allocation of
memory resources to different processes and ensuring efficient utilization of available memory. It
plays a critical role in computer systems to enable smooth execution of programs and efficient
utilization of system resources. Here are some key points about memory management in an
operating system:

1. Memory Partitioning: Memory is typically divided into partitions to accommodate multiple
processes. There are two common partitioning schemes: fixed partitioning and variable
partitioning. Fixed partitioning divides memory into fixed-size partitions, while variable
partitioning allows for dynamic allocation of memory based on process requirements.

2. Memory Allocation: The operating system is responsible for allocating memory to processes. It
tracks the available memory and assigns memory blocks to processes as they are loaded into the
system. Different allocation strategies are used, such as first-fit, best-fit, or worst-fit, to find
suitable memory blocks for processes.

3. Address Translation: The operating system performs address translation between logical
addresses used by processes and physical addresses in the physical memory. This translation is
typically done by the Memory Management Unit (MMU) using techniques like paging or
segmentation. It allows processes to access and manipulate data in the physical memory using
logical addresses.

4. Memory Protection: Memory management ensures memory protection by isolating processes from
one another. Each process is allocated its own logical address space, and access to memory
outside of this space is restricted. Memory protection prevents one process from accessing or
modifying the memory assigned to another process, enhancing system stability and security.

5. Memory Deallocation: When a process completes or is terminated, the allocated memory must
be deallocated and made available for other processes. The operating system frees the memory
previously assigned to the terminated process, updating its memory allocation tables or data
structures to reflect the freed memory.

6. Memory Compaction: Over time, as processes are loaded and unloaded, memory
fragmentation can occur, leading to inefficient utilization of memory. Memory compaction
techniques are used to reduce fragmentation by rearranging memory contents, merging free
memory blocks, and creating larger contiguous memory regions.
7. Swapping and Paging: When the physical memory becomes insufficient to hold all active
processes, the operating system can employ techniques like swapping or paging. Swapping
involves moving entire processes in and out of secondary storage (disk) to free up memory.
Paging divides memory into fixed-size pages, allowing the operating system to swap individual
pages between memory and disk.

8. Virtual Memory: Virtual memory is a memory management technique that provides the
illusion of a larger address space than the physical memory. It allows processes to use more
memory than is physically available by using disk storage as an extension of main memory.
Virtual memory relies on demand paging, where only the required portions of a process are
loaded into memory.

Efficient memory management is crucial for the performance and stability of an operating
system. It ensures optimal utilization of memory resources, facilitates process execution, and
enables multitasking by managing the allocation, protection, and deallocation of memory for
processes.
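
Point 8 above can be illustrated with a toy demand-paging simulation. The page contents and the
"disk" dictionary below are invented purely for illustration; the point is that a page is
brought into memory only when it is first referenced (a page fault), which is how virtual
memory lets a process use more memory than is physically resident.

```
memory = {}                                   # pages currently resident in RAM
disk = {0: "code", 1: "data", 2: "stack"}     # hypothetical backing store on disk

def access(page):
    """Return the contents of a page, demand-loading it on a page fault."""
    if page not in memory:                    # page fault: not resident yet
        print("page fault on page", page, "- loading from disk")
        memory[page] = disk[page]
    return memory[page]

access(1)   # page fault, page 1 is loaded from disk
access(1)   # already resident, no fault
access(2)   # page fault, page 2 is loaded from disk
```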

Consider a swapping system in which memory consists of the following block sizes in memory
order: 10K, 4K, 20K, 18K, 7K, 9K, 12K and 15K. Which block will be taken for successive segment
requests of 12K, 10K and 9K using First Fit, Best Fit and Worst Fit? Show it in a diagram.
To illustrate the allocation of memory blocks using First Fit, Best Fit, and Worst Fit strategies for
successive segment requests of 12K, 10K, and 9K, let's go through each strategy step by step.

Given memory blocks in memory order: 10K, 4K, 20K, 18K, 7K, 9K, 12K, and 15K.

1. First Fit:

In First Fit, the first available memory block that is large enough to accommodate the
requested segment size is allocated.

Segment request: 12K

- First Fit selects the 20K memory block (satisfies the request).

- Remaining memory blocks: 10K, 4K, 18K, 7K, 9K, 12K, and 15K.

Segment request: 10K

- First Fit selects the 10K memory block (the first block in memory order that is at least 10K).

- Remaining memory blocks: 4K, 18K, 7K, 9K, 12K, and 15K.

Segment request: 9K

- First Fit selects the 18K memory block (the 4K block is too small, so 18K is the first fit).

- Remaining memory blocks: 4K, 7K, 9K, 12K, and 15K.

Diagram (free blocks remaining after the First Fit allocations):

```
+---+
| 4K|
+---+
| 7K|
+---+
| 9K|
+---+
|12K|
+---+
|15K|
+---+
```

2. Best Fit:

In Best Fit, the memory block that provides the closest fit to the requested segment size is
allocated.

Segment request: 12K

- Best Fit selects the 12K memory block (an exact fit).

- Remaining memory blocks: 10K, 4K, 20K, 18K, 7K, 9K, and 15K.

Segment request: 10K

- Best Fit selects the 10K memory block (an exact fit).

- Remaining memory blocks: 4K, 20K, 18K, 7K, 9K, and 15K.

Segment request: 9K

- Best Fit selects the 9K memory block (an exact fit).

- Remaining memory blocks: 4K, 20K, 18K, 7K, and 15K.

Diagram (free blocks remaining after the Best Fit allocations):

```
+---+
| 4K|
+---+
|20K|
+---+
|18K|
+---+
| 7K|
+---+
|15K|
+---+
```

3. Worst Fit:

In Worst Fit, the largest available memory block is allocated.

Segment request: 12K

- Worst Fit selects the 20K memory block (largest available).

- Remaining memory blocks: 10K, 4K, 18K, 7K, 9K, 12K, and 15K.

Segment request: 10K

- Worst Fit selects the 18K memory block (largest available).

- Remaining memory blocks: 10K, 4K, 7K, 9K, 12K, and 15K.
Segment request: 9K

- Worst Fit selects the 15K memory block (the largest available).

- Remaining memory blocks: 10K, 4K, 7K, 9K, and 12K.

Diagram (free blocks remaining after the Worst Fit allocations):

```
+---+
|10K|
+---+
| 4K|
+---+
| 7K|
+---+
| 9K|
+---+
|12K|
+---+
```

Note: Each diagram shows the free memory blocks that remain after the three segment requests
have been allocated with the respective strategy; the allocated blocks are not shown. In this
exercise a chosen block is treated as used in full, i.e. it is not split into an allocated part
and a leftover hole.
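
These selections can be double-checked with a short simulation. The sketch below (a minimal
Python sketch, matching the assumption above that a chosen block is removed whole rather than
split) reproduces the choices for all three strategies.

```
def allocate(blocks, requests, choose):
    """Pick a free block for each request; the chosen block is removed without splitting."""
    free = list(blocks)
    picks = []
    for req in requests:
        candidates = [b for b in free if b >= req]   # blocks large enough for the request
        block = choose(candidates)                   # the strategy decides which candidate wins
        picks.append(block)
        free.remove(block)
    return picks, free

blocks = [10, 4, 20, 18, 7, 9, 12, 15]   # sizes in KB, in memory order
requests = [12, 10, 9]

print("First fit:", allocate(blocks, requests, lambda c: c[0]))   # ([20, 10, 18], [4, 7, 9, 12, 15])
print("Best fit: ", allocate(blocks, requests, min))              # ([12, 10, 9], [4, 20, 18, 7, 15])
print("Worst fit:", allocate(blocks, requests, max))              # ([20, 18, 15], [10, 4, 7, 9, 12])
```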

Explain Contiguous Memory Allocation Techniques using an Example.

Contiguous memory allocation techniques are memory management strategies that allocate
memory to processes in a contiguous manner, meaning that each process occupies a single
contiguous block of memory. Here, I'll explain two common contiguous memory allocation
techniques: Fixed Partitioning and Variable Partitioning, using an example.
1. Fixed Partitioning:

In fixed partitioning, the memory is divided into fixed-size partitions, and each partition is
assigned to a specific process. The number and size of partitions are predetermined.

Example: Let's consider a system with a total memory size of 32KB and four processes: P1, P2,
P3, and P4. The memory is divided into four fixed partitions of equal size, 8KB each.

- Process P1 is allocated partition M1 (8KB).

- Process P2 is allocated partition M2 (8KB).

- Process P3 is allocated partition M3 (8KB).

- Process P4 is allocated partition M4 (8KB).

In this example, each process has a fixed partition size, and the memory is divided equally
among the processes. However, if a process needs less memory than its partition, the unused
space is wasted (internal fragmentation), and if it needs more than the partition size, it
cannot be accommodated.

2. Variable Partitioning:

In variable partitioning, the memory is allocated dynamically to processes based on their actual
memory requirements. The memory is treated as a single contiguous block initially and is divided
and allocated as processes are loaded.

Example: Consider a system with a total memory size of 32KB and three processes: P1, P2, and
P3. Initially, all three processes are loaded into memory, and they are assigned memory blocks
based on their requirements.

- Process P1 is allocated a memory block of 12KB.

- Process P2 is allocated a memory block of 8KB.

- Process P3 is allocated a memory block of 10KB.

As processes are loaded and unloaded, the memory blocks are dynamically allocated and
deallocated based on the memory requirements of the processes. This flexibility allows for
efficient memory utilization and reduces internal fragmentation compared to fixed partitioning.

In summary, contiguous memory allocation techniques allocate memory to processes in contiguous
blocks. Fixed partitioning divides memory into fixed-size partitions, while variable
partitioning dynamically allocates memory based on process requirements. Fixed partitioning
offers simplicity but may lead to internal fragmentation, while variable partitioning provides
flexibility but requires more complex memory management algorithms to allocate and deallocate
memory blocks efficiently.
