Os Answers
A system call is the mechanism by which a computer program requests a service from the kernel of
the operating system on which it is running. It is the standard interface through which user
programs interact with the operating system.
There are commonly five types of system calls. These are as follows:
1. Process Control
2. File Management
3. Device Management
4. Information Maintenance
5. Communication
Process Control
Process control system calls are used to direct processes. Examples include creating a
process, loading, executing, aborting, and terminating a process.
File Management
File management system calls are used to handle files. Examples include creating,
deleting, opening, closing, reading, and writing files.
Device Management
Device management system calls are used to deal with devices. Examples include
requesting a device, releasing a device, reading from and writing to a device, and
getting or setting device attributes.
Information Maintenance
Information maintenance system calls are used to maintain system information.
Examples include getting or setting the time and date, and getting or setting
system data.
Communication
Communication system calls are used for interprocess communication. Examples
include creating and deleting communication connections, and sending and
receiving messages.
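As a concrete illustration, Python's os module exposes thin wrappers over several of these kernel system calls. This is a minimal sketch; the file name is arbitrary and the temp-directory location is just a convenience, not part of the system-call interface.

```python
import os
import tempfile

# File management: open, write, and close a file purely via system calls.
path = os.path.join(tempfile.gettempdir(), "syscall_demo.txt")
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)  # open/create
os.write(fd, b"written via system calls\n")                # write
os.close(fd)                                               # close

# ...and read the data back through the same kernel interface.
fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 100)
os.close(fd)
os.remove(path)                                            # delete the file

# Information maintenance: ask the kernel for this process's ID.
pid = os.getpid()
print(pid, data)
```

Each `os.*` call above corresponds to one kernel entry (open, write, read, close, unlink, getpid).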
Summarize the essential properties of the following types of operating systems (i)
Batch (ii) Distributed (iii) Real-Time
In a batch operating system, access is given to more than one user; users submit
their respective jobs to the system for execution.
The system puts all of the jobs in a queue on a first-come, first-served basis and
then executes them one by one. Users collect their respective output once all
the jobs have been executed.
The main purpose of this operating system was to transfer control from one job to
the next as soon as each job completed. It contained a small set of programs
called the resident monitor that always resided in one part of main memory; the
remaining part was used for servicing jobs.
Advantages of Batch OS
o The use of a resident monitor improves computer efficiency, as it eliminates the CPU idle
time between two jobs.
Disadvantages of Batch OS
1. Starvation: a long-running job at the front of the queue makes every job behind it
wait, since jobs are executed strictly in arrival order.
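The first-come, first-served behaviour of a batch system, and why a long job can starve the jobs behind it, can be sketched as a simple model. The job names and burst times below are hypothetical.

```python
from collections import deque

def run_batch(jobs):
    """Execute jobs strictly first-come-first-served, like a resident monitor.

    `jobs` is a list of (name, burst_time) tuples in arrival order; returns
    the completion time of each job. Illustrative model, not a real monitor.
    """
    queue = deque(jobs)            # FIFO queue: first come, first served
    clock = 0
    completions = {}
    while queue:
        name, burst = queue.popleft()
        clock += burst             # control passes to the next job only
        completions[name] = clock  # when the current one completes
    return completions

print(run_batch([("J1", 3), ("J2", 5), ("J3", 2)]))  # {'J1': 3, 'J2': 8, 'J3': 10}
```

With these numbers, J3 waits 8 time units to run only 2 units of work, which is the starvation effect of strict FIFO ordering.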
| | Multiprogramming | Multitasking | Multithreading | Multiprocessing |
| --- | --- | --- | --- | --- |
| Definition | Running multiple programs on a single CPU | Running multiple tasks (applications) on a single CPU | Running multiple threads within a single task (application) | Running multiple processes on multiple CPUs (or cores) |
| Scheduling | Uses round-robin or priority-based scheduling to allocate CPU time to programs | Uses priority-based or time-slicing scheduling to allocate CPU time to tasks | Uses priority-based or time-slicing scheduling to allocate CPU time to threads | Each process can have its own scheduling algorithm |
| Memory Management | Each program has its own memory space | Each task has its own memory space | Threads share memory space within a task | Each process has its own memory space |
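The memory-management distinction can be seen directly in code: in the minimal Python sketch below, several threads update one shared dictionary, which is only possible because threads share the memory space of their task. The counter structure and loop counts are arbitrary choices for illustration.

```python
import threading

counter = {"value": 0}          # shared object: all threads see the same dict
lock = threading.Lock()

def worker():
    for _ in range(1000):
        with lock:              # threads share memory, so updates need a lock
            counter["value"] += 1

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter["value"])  # 4000: every thread updated the same memory
```

Separate processes, by contrast, would each get their own copy of `counter` and would need explicit interprocess communication to share it.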
SIMPLE STRUCTURE
It is the most straightforward operating system structure, but it lacks definition and is
only appropriate for small and restricted systems. Since the interfaces and levels of
functionality in this structure are not well separated, application programs are able to
access basic I/O routines directly, which may result in unauthorized access to I/O
procedures.
o The MS-DOS operating system is made up of four layers, each with its own set of
features.
o These layers include ROM BIOS device drivers, MS-DOS device drivers, application
programs, and system programs.
o The MS-DOS operating system benefits from layering because each level can be
defined independently and, when necessary, can interact with the others.
o If the system is built in layers, it is simpler to design, manage, and update.
Because of this, simple structures can be used to build constrained systems that are
less complex.
Advantages of Simple Structure
o Because there are only a few interfaces and levels, it is simple to develop.
o Because there are fewer layers between the hardware and the applications, it offers
superior performance.
Disadvantages of Simple Structure
o When a user program fails, the operating system as a whole crashes.
o Because MS-DOS systems have a low level of abstraction, programs and I/O
procedures are visible to end users, giving them the potential for unwanted access.
o Since the layers are interconnected and in communication with one another, there is
no abstraction or data hiding.
o The operating system's operations are accessible to the layers, which can result in
data tampering and system failure.
5. User interface: The OS provides a user interface through which users interact with
the computer system. This can be a command-line interface (CLI) where users enter
commands, or a graphical user interface (GUI) with icons, windows, and menus. The
OS also handles input/output operations and manages user accounts and
permissions.
8. Error handling and recovery: The OS detects and handles errors and exceptions
that may occur during system operation. It provides mechanisms for error reporting,
error handling, and system recovery to minimize the impact of failures and maintain
system stability.
MODULE-3
What do you mean by Logical Address and Physical Address? How are these two involved
in Memory Management?
In computer systems, both logical addresses and physical addresses are used in memory
management.
A logical address, also known as a virtual address, is an address generated by the CPU (Central
Processing Unit) during program execution. It represents the location in the logical address space
of a process. The logical address space is typically larger than the physical address space and is
divided into smaller units called pages or segments. Logical addresses are used by the CPU and
the operating system to access memory and perform operations like reading or writing data.
On the other hand, a physical address refers to the actual location of data in the physical
memory (RAM). It represents the address at which the data is stored in the main memory.
Physical addresses are used by the memory management unit (MMU) to translate logical
addresses into physical addresses during the process of memory access.
Memory management involves the allocation and tracking of memory resources for processes in
a computer system. The translation of logical addresses to physical addresses is a crucial part of
memory management, and it is handled by the hardware and the operating system.
When a program references a logical address, the MMU performs the address translation by
mapping the logical address to the corresponding physical address. This translation is based on
the memory management technique employed, such as paging or segmentation. The MMU
maintains a mapping table or page table that keeps track of the logical-to-physical address
mappings.
The primary goal of memory management is to provide each process with a contiguous logical
address space, while efficiently utilizing the physical memory resources. By using logical
addresses, the operating system can isolate processes from one another, ensuring memory
protection and security. The translation from logical addresses to physical addresses allows the
operating system to load data into the physical memory as needed and manage memory
resources effectively.
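The MMU's translation step can be modelled in a few lines. The page size, page-table contents, and addresses below are assumptions chosen purely for illustration, not values from any real system.

```python
PAGE_SIZE = 4096  # assumed 4 KB pages

# Hypothetical page table: logical page number -> physical frame number.
page_table = {0: 5, 1: 2, 2: 7}

def translate(logical_address):
    """Mimic the MMU: split the logical address into page number and offset,
    look the page up in the page table, and rebuild the physical address."""
    page = logical_address // PAGE_SIZE
    offset = logical_address % PAGE_SIZE
    if page not in page_table:
        raise MemoryError(f"page fault at logical address {logical_address}")
    return page_table[page] * PAGE_SIZE + offset

print(translate(4100))  # page 1, offset 4 -> frame 2 -> 8196
```

A reference to an unmapped page raises a fault here, mirroring how a real MMU traps to the operating system when no valid mapping exists.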
Example: Let's consider a system with a total memory size of 16KB and four processes: P1, P2,
P3, and P4. The memory is divided into four equal partitions of 4KB each (M1, M2, M3, M4).
Each process is allocated a fixed partition.
In this example, each process is confined to a fixed 4KB partition. If a process needs less
memory than its partition, the unused space inside the partition is wasted (internal
fragmentation); if it needs more than 4KB, it cannot be accommodated at all.
MVT allows for dynamic allocation of memory to processes based on their actual memory
requirements. The available memory is not divided into fixed partitions but is treated as a single
contiguous block. Each process is allocated memory as needed, and the size of the allocated
memory can vary over time.
Example: Consider a system with a total memory size of 16KB and three processes: P1, P2, and
P3. Initially, all three processes are loaded into memory, and they are assigned memory blocks
based on their requirements.
As processes are loaded and unloaded, the memory blocks are dynamically allocated and
deallocated based on the memory requirements of the processes. This flexibility allows for
efficient memory utilization and avoids the internal fragmentation seen in MFT, although
holes left between allocations can lead to external fragmentation.
In summary, MFT divides memory into fixed partitions, allocating one partition per process, while
MVT dynamically allocates memory to processes based on their needs. The choice between
these techniques depends on the specific requirements of the operating system and the
characteristics of the processes it needs to manage.
2. Memory Allocation: The operating system is responsible for allocating memory to processes. It
tracks the available memory and assigns memory blocks to processes as they are loaded into the
system. Different allocation strategies are used, such as first-fit, best-fit, or worst-fit, to find
suitable memory blocks for processes.
3. Address Translation: The operating system performs address translation between logical
addresses used by processes and physical addresses in the physical memory. This translation is
typically done by the Memory Management Unit (MMU) using techniques like paging or
segmentation. It allows processes to access and manipulate data in the physical memory using
logical addresses.
6. Memory Compaction: Over time, as processes are loaded and unloaded, external
fragmentation can occur, leaving free memory scattered in small holes and utilized
inefficiently. Memory compaction techniques reduce this fragmentation by rearranging
memory contents, merging free memory blocks, and creating larger contiguous memory
regions.
7. Swapping and Paging: When the physical memory becomes insufficient to hold all active
processes, the operating system can employ techniques like swapping or paging. Swapping
involves moving entire processes in and out of secondary storage (disk) to free up memory.
Paging divides memory into fixed-size pages, allowing the operating system to swap individual
pages between memory and disk.
8. Virtual Memory: Virtual memory is a memory management technique that provides the
illusion of a larger address space than the physical memory. It allows processes to use more
memory than is physically available by using disk storage as an extension of main memory.
Virtual memory relies on demand paging, where only the required portions of a process are
loaded into memory.
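Demand paging can be sketched as follows. The FIFO replacement policy, frame count, and reference string are illustrative assumptions, not a description of any particular operating system.

```python
def demand_pager(reference_string, frames):
    """Count page faults under demand paging with FIFO replacement.

    Pages are loaded only when first referenced (demand paging); when all
    frames are full, the oldest resident page is evicted.
    """
    memory, faults = [], 0
    for page in reference_string:
        if page not in memory:
            faults += 1            # page fault: bring the page in from disk
            if len(memory) == frames:
                memory.pop(0)      # FIFO: evict the oldest page
            memory.append(page)
    return faults

print(demand_pager([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # 9 page faults
```

Only referenced pages ever occupy a frame, which is how virtual memory lets a process's address space exceed physical memory.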
Efficient memory management is crucial for the performance and stability of an operating
system. It ensures optimal utilization of memory resources, facilitates process execution, and
enables multitasking by managing the allocation, protection, and deallocation of memory for
processes.
Given memory blocks in memory order: 10K, 4K, 20K, 18K, 7K, 9K, 12K, and 15K.
1. First Fit:
In First Fit, the first available memory block that is large enough to accommodate the
requested segment size is allocated.
- First Fit selects the 20K memory block (satisfies the request).
- Remaining memory blocks: 10K, 4K, 18K, 7K, 9K, 12K, and 15K.
- First Fit selects the 18K memory block (satisfies the request).
- Remaining memory blocks: 10K, 4K, 7K, 9K, 12K, and 15K.
Segment request: 9K
- First Fit selects the 10K memory block (satisfies the request).
Diagram:
```
+---+
| 4K|
+---+
| 7K|
+---+
| 9K|
+---+
|12K|
+---+
|15K|
+---+
```
2. Best Fit:
In Best Fit, the memory block that provides the closest fit to the requested segment size is
allocated.
- Best Fit selects the 15K memory block (provides the closest fit).
- Remaining memory blocks: 10K, 4K, 20K, 18K, 7K, 9K, and 12K.
- Best Fit selects the 10K memory block (satisfies the request).
- Remaining memory blocks: 4K, 20K, 18K, 7K, 9K, and 12K.
Segment request: 9K
- Best Fit selects the 9K memory block (an exact fit).
Diagram:
```
+---+
| 4K|
+---+
|20K|
+---+
|18K|
+---+
| 7K|
+---+
|12K|
+---+
```
3. Worst Fit:
In Worst Fit, the largest available memory block is allocated to each request.
- Worst Fit selects the 20K memory block (the largest available).
- Remaining memory blocks: 10K, 4K, 18K, 7K, 9K, 12K, and 15K.
- Worst Fit selects the 18K memory block (the largest available).
- Remaining memory blocks: 10K, 4K, 7K, 9K, 12K, and 15K.
Segment request: 9K
- Worst Fit selects the 15K memory block (the largest available).
Diagram:
```
+---+
|10K|
+---+
| 4K|
+---+
| 7K|
+---+
| 9K|
+---+
|12K|
+---+
```
Note: The diagrams represent the state of memory after each segment allocation for the
respective allocation strategies. The blocks shown in the diagrams represent the allocated
memory blocks, and the remaining blocks are not shown for simplicity.
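The three strategies can be expressed as one small selection function. The block list is the one given above; since the sizes of the first two segment requests are not stated, the 9K request is applied here to the full original list purely for illustration.

```python
def allocate(blocks, request, strategy):
    """Return the index of the free block chosen for `request` (None if no fit).

    blocks   : list of free block sizes, in memory order
    strategy : "first", "best", or "worst"
    """
    candidates = [(size, i) for i, size in enumerate(blocks) if size >= request]
    if not candidates:
        return None
    if strategy == "first":
        return min(candidates, key=lambda c: c[1])[1]  # lowest address that fits
    if strategy == "best":
        return min(candidates)[1]                      # tightest (smallest) fit
    return max(candidates)[1]                          # worst: largest free block

blocks = [10, 4, 20, 18, 7, 9, 12, 15]   # sizes in KB, in memory order
print(blocks[allocate(blocks, 9, "first")])  # 10 (first block large enough)
print(blocks[allocate(blocks, 9, "best")])   # 9  (exact fit)
print(blocks[allocate(blocks, 9, "worst")])  # 20 (largest free block)
```

A real allocator would also split the chosen block and track the leftover hole; that bookkeeping is omitted here to keep the comparison of the three policies visible.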
Contiguous memory allocation techniques are memory management strategies that allocate
memory to processes in a contiguous manner, meaning that each process occupies a single
contiguous block of memory. Here, I'll explain two common contiguous memory allocation
techniques: Fixed Partitioning and Variable Partitioning, using an example.
1. Fixed Partitioning:
In fixed partitioning, the memory is divided into fixed-size partitions, and each partition is
assigned to a specific process. The number and size of partitions are predetermined.
Example: Let's consider a system with a total memory size of 32KB and four processes: P1, P2,
P3, and P4. The memory is divided into four fixed partitions of equal size, 8KB each.
In this example, the memory is divided equally among the processes. If a process needs
less memory than its 8KB partition, the leftover space inside the partition is wasted
(internal fragmentation); a process requiring more than 8KB cannot be accommodated at all.
2. Variable Partitioning:
In variable partitioning, the memory is allocated dynamically to processes based on their actual
memory requirements. The memory is treated as a single contiguous block initially and is divided
and allocated as processes are loaded.
Example: Consider a system with a total memory size of 32KB and three processes: P1, P2, and
P3. Initially, all three processes are loaded into memory, and they are assigned memory blocks
based on their requirements.
As processes are loaded and unloaded, the memory blocks are dynamically allocated and
deallocated based on the memory requirements of the processes. This flexibility allows for
efficient memory utilization and avoids the internal fragmentation of fixed partitioning,
although holes left between allocations can cause external fragmentation.
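The internal-fragmentation cost of fixed partitioning can be computed directly. The partition size matches the 8KB example above, but the per-process memory requirements below are assumed figures for illustration.

```python
PARTITION_SIZE = 8  # KB, as in the fixed-partitioning example above

def internal_fragmentation(process_sizes):
    """Wasted space (KB) inside each fixed partition, one process per partition.

    A process larger than the partition cannot be loaded at all (None).
    """
    waste = []
    for size in process_sizes:
        if size > PARTITION_SIZE:
            waste.append(None)                    # does not fit in any partition
        else:
            waste.append(PARTITION_SIZE - size)   # unused space is lost
    return waste

# P1..P4 need 5KB, 8KB, 3KB, and 10KB (hypothetical requirements):
print(internal_fragmentation([5, 8, 3, 10]))  # [3, 0, 5, None]
```

Under variable partitioning each allocation would match the request exactly, so the per-partition waste computed here would be zero.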