OS Unit 6


1. What are the memory management requirements?

- Sharing of Memory: In a multi-process environment, memory management
facilitates efficient sharing of memory among processes, eliminating the
need for separate copies. This is advantageous for inter-process
communication.
- Memory Protection: Memory management ensures that protection
mechanisms are in place to prevent unauthorized processes from accessing
or modifying memory locations, thus maintaining system security.
- Mapping of Address/Relocation: During process execution, swapping may
occur between main memory and secondary storage, necessitating
relocation. The translation from logical addresses to physical addresses is
handled by the operating system and associated processes, with the help of a
relocation register.
- Logical Addressing: Memory units organize programs into modules, each
with its own permissions for modification. Logical addresses, generated by
the CPU during runtime, reference these modules. Protection mechanisms
may vary at different logical levels where modules reside.
- Physical Space: The system's memory is divided into main memory
(volatile) and secondary memory (nonvolatile). Main memory handles
currently executing programs with better performance, while secondary
memory stores data for longer periods but with lower performance.
Swapping between these memory types can be complex.
- Overlaying Approach: When memory resources are insufficient, the
overlaying approach allows multiple modules of a user program to share the
same memory space. However, this can be challenging in a
multiprogramming environment due to limited visibility of memory details
during program execution.
- Virtual Memory: In systems with virtual memory, the operating system must
manage the mapping of virtual addresses to physical addresses, as well as
handle paging or swapping between physical memory and secondary storage
(such as a hard disk).

2. Explain multiprogramming with fixed partition.


- Multiprogramming with Fixed partitions
- This method allows multiple processes to execute simultaneously.
- Here memory is divided into fixed-sized partitions. The sizes of different
partitions can be equal or unequal.
- Generally, unequal partitions are used for better utilization.
- Each partition can accommodate exactly one process, i.e. only a single
process can be placed in one partition.
- The partition boundaries are not movable.
- Whenever any program needs to be loaded in memory, a free partition big
enough to hold the program is found. This partition will be allocated to that
program or process.
- If no free partition of the required size is available, the process needs
to wait. Such a process will be put in a queue.
- There are two ways to maintain queue
- Using multiple Input Queues.
- Using single Input Queue.
- The disadvantage of sorting the incoming jobs into separate queues becomes
apparent when the queue for a large partition is empty but the queue for a
small partition is full, as is the case for partitions 1 and 3 in the given figure.
- Here small jobs have to wait to get into memory, even though plenty of
memory is free. An alternative organization is to maintain a single queue as
in Fig. 5-1(b). Whenever a partition becomes free, the job closest to the front
of the queue that fits in it could be loaded into the empty partition and run.

Figure 5-1. (a) Fixed memory partitions with separate input queues for
each partition. (b) Fixed memory partitions with a single input queue
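As a minimal sketch (assumed helper names, not from the text), the single-queue policy described above, loading the job closest to the front of the queue that fits a freed partition, can be expressed as:

```python
# Sketch (assumed) of single-input-queue job selection for fixed partitions.

def pick_job(queue, partition_size):
    """Return the job closest to the front of the queue that fits."""
    for i, job_size in enumerate(queue):
        if job_size <= partition_size:
            return queue.pop(i)  # remove and return the first fitting job
    return None  # no queued job fits this partition

queue = [300, 120, 80, 500]   # job sizes in KB, front of queue first
free_partition = 200          # a 200 KB partition just became free

job = pick_job(queue, free_partition)
# the 300 KB job is skipped; the 120 KB job (closest fitting) is loaded
```

Note that with a single queue, a large free partition may still end up running a small job; variations exist that search the whole queue for the largest fitting job instead.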
3. Explain multiprogramming with dynamic partition.
- This method also allows multiple processes to execute simultaneously.
- Here, memory is shared among operating system and various
simultaneously running processes.
- Memory is not divided into any fixed partitions. Also the number of
partitions is not fixed. Process is allocated exactly as much memory as it
requires.
- Initially, the entire available memory is treated as a single free partition.
- Whenever any process enters the system, a chunk of free memory big
enough to fit the process is found and allocated. The remaining
unoccupied space is treated as another free partition.
- If enough free memory is not available to fit the process, the process
needs to wait until the required memory becomes available.
- Whenever any process terminates, it releases the space it occupied.
If the released free space is contiguous to another free partition, the two
free partitions are merged into a single free partition.
- This method gives better memory utilization than fixed-sized partitioning.
- This method suffers from external fragmentation.

Figure 5-2. Memory allocation changes as processes come into memory and
leave it. The shaded regions are unused memory.
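The allocate/split/merge behaviour described above can be sketched as follows; the memory size and the `allocate`/`release` helpers are illustrative assumptions, not part of the original text:

```python
# Minimal sketch (assumed) of dynamic partitioning: each free partition is
# a (start, size) pair; allocation carves off exactly what is requested.

free = [(0, 1024)]  # initially the whole memory is one free partition (KB)

def allocate(size):
    for i, (start, length) in enumerate(free):
        if length >= size:
            # carve the process out of the front of this free partition
            remainder = (start + size, length - size)
            free[i:i+1] = [remainder] if remainder[1] > 0 else []
            return start
    return None  # not enough contiguous memory: the process must wait

def release(start, size):
    free.append((start, size))
    free.sort()
    # merge contiguous free partitions into single larger ones
    merged = [free[0]]
    for s, l in free[1:]:
        ps, pl = merged[-1]
        if ps + pl == s:
            merged[-1] = (ps, pl + l)
        else:
            merged.append((s, l))
    free[:] = merged

a = allocate(200)   # a = 0, free partition shrinks to (200, 824)
b = allocate(300)   # b = 200, free partition shrinks to (500, 524)
release(0, 200)     # two separate free partitions now exist
release(200, 300)   # contiguous partitions merge back into (0, 1024)
```

The final `release` illustrates the merging rule from the text: freeing the middle block joins all three free regions into one partition again.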

4. Write short note on:


Relocation problem for multiprogramming with fixed partitions
- In multiprogramming with fixed partitions, the relocation problem refers to
the challenge of managing memory addresses when loading and executing
programs within fixed-sized memory partitions. Here's a short note on the
relocation problem:
- In multiprogramming with fixed partitions, memory is divided into
fixed-sized partitions, with each partition assigned to a specific program.
However, programs may not always fit perfectly within their allocated
partitions, leading to wasted memory space or insufficient memory for larger
programs.
- When loading a program into a partition, the starting memory address of the
program must be determined. This address needs to be adjusted if the
program cannot be loaded at its original starting address due to memory
fragmentation or other constraints.
- The relocation problem arises because programs may need to be loaded into
different memory partitions or locations over time, depending on their size
and the availability of memory partitions. This requires adjusting the
program's memory addresses to reflect its new location, which can be
complex and may involve updating memory references within the program
code and data.
- To address the relocation problem, operating systems employ techniques
such as dynamic address translation or base and limit registers. These
mechanisms allow programs to be loaded into different memory locations
while ensuring that memory references are correctly translated to reflect the
program's new location.
- Overall, the relocation problem highlights the need for efficient memory
management techniques in multiprogramming environments with fixed
partitions to optimize memory usage and accommodate the execution of
multiple programs concurrently.
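A hedged sketch of the base-and-limit mechanism mentioned above (the register values are illustrative): the hardware adds the base register to every logical address and checks it against the limit, so a program can be relocated just by reloading the base register:

```python
# Sketch (assumed values) of relocation with base and limit registers.

BASE = 14000   # partition start chosen by the loader (illustrative)
LIMIT = 12000  # size of the partition (illustrative)

def translate(logical_address):
    if logical_address >= LIMIT:
        # the access falls outside the process's partition
        raise MemoryError("addressing error: beyond partition limit")
    return BASE + logical_address  # relocated physical address

print(translate(346))    # 14346
```

Moving the process to another partition only requires changing `BASE`; no addresses inside the program need to be rewritten.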

5. Discuss in details memory management with buddy system.


- The Buddy System is a method used in computer operating systems to
manage memory effectively.
- It works by repeatedly halving memory into blocks whose sizes are powers
of two. When a process needs memory, the system locates the smallest
available block that fits the requested size. If that block is larger than
needed, it is split into two equal halves, called buddies, and one of the
two buddies is split further until a block of the required size is obtained.
- The buddy system thus splits blocks when they are allocated and coalesces
them when they are released, effectively partitioning a large block into
smaller blocks of different sizes as needed.
- The buddy system uses a binary tree to represent used and unused split
memory blocks. Each memory block has an order, which is a number
between 0 and a predetermined upper limit. The address of a block's buddy
is the bitwise exclusive OR (XOR) of the block's address and its size.
- Example
- Assume we have a buddy system with a physical address space of 256 KB,
and we want to find the partition size for the 36 KB process.

- Here, 36 KB is greater than 32 KB but less than 64 KB. Therefore, a 64 KB
partition is suitable for storing the process.
- Why buddy system:
- There is a limit on the number of active processes in static
partitioning. Space usage is also inefficient if there is a big difference
between partition size and process size.
- Dynamic partitioning is more complex to maintain because the size of
the partition changes when the partition is allocated to a new process.
- Buddy system falls somewhere between static and dynamic
partitioning. It supports limited but efficient splitting and combining
of memory blocks.
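The rounding rule behind the example above (a request is rounded up to the next power of two) can be sketched as follows, with assumed function names:

```python
# Sketch (assumed): the buddy system rounds each request up to the next
# power of two, which is why a 36 KB process gets a 64 KB partition.

def buddy_partition_size(request_kb, total_kb=256):
    size = 1
    while size < request_kb:
        size *= 2           # keep doubling until the block fits
    if size > total_kb:
        return None         # request larger than the whole memory
    return size

print(buddy_partition_size(36))   # 64
```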

6. A 1MB block of memory is allocated using the buddy system.


a. Show the results of the following sequence in a figure: Request
70; Request 35; Request 80; Return A; Request 60; Return B;
Return D; Return C.

b. Show the binary tree representation following Return B.


a. Allocation trace (each request is rounded up to the next power of two):
- Request 70 (A): the 1 MB block is split into 512 KB, then 256 KB, then
128 KB; A gets the 128 KB block at address 0.
- Request 35 (B): the free 128 KB block at 128 KB is split into two 64 KB
buddies; B gets the one at 128 KB.
- Request 80 (C): the free 256 KB block at 256 KB is split; C gets the
128 KB block at 256 KB.
- Return A: the 128 KB block at 0 becomes free (its buddy is still split,
so no merge occurs).
- Request 60 (D): D gets the free 64 KB block at 192 KB.
- Return B: the 64 KB block at 128 KB becomes free (its buddy holds D, so
no merge occurs).
- Return D: the 64 KB buddies at 128 KB and 192 KB merge into 128 KB, which
merges with the free 128 KB at 0 into 256 KB.
- Return C: the 128 KB buddies at 256 KB and 384 KB merge into 256 KB, then
into 512 KB, and finally back into the original 1 MB block.

b. Binary tree after Return B:

                        1 MB
                     /        \
               512 KB          512 KB (free)
              /      \
         256 KB      256 KB
         /    \      /    \
    128 KB 128 KB  128 KB 128 KB
    (free) /    \   (C)   (free)
       64 KB   64 KB
       (free)   (D)
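The whole request/return sequence can be replayed with a small buddy-allocator simulation; this is an assumed illustrative implementation, using the XOR buddy-address rule from question 5:

```python
# Illustrative buddy allocator (assumed implementation) replaying the
# exercise's request sequence on a 1 MB (1024 KB) block.

free_lists = {1024: [0]}          # size (KB) -> list of block start addresses
allocated = {}                    # name -> (start, size)

def round_up(kb):
    size = 1
    while size < kb:
        size *= 2
    return size

def request(name, kb):
    size = round_up(kb)
    s = size
    while s <= 1024 and not free_lists.get(s):
        s *= 2                    # find the smallest free block that fits
    if s > 1024:
        raise MemoryError("no block large enough")
    block = free_lists[s].pop(0)
    while s > size:               # split down, keeping the upper buddy free
        s //= 2
        free_lists.setdefault(s, []).append(block + s)
    allocated[name] = (block, size)

def release(name):
    start, size = allocated.pop(name)
    # coalesce with the buddy (start XOR size) while it is also free
    while size < 1024 and (start ^ size) in free_lists.get(size, []):
        free_lists[size].remove(start ^ size)
        start &= ~size            # merged block begins at the lower address
        size *= 2
    free_lists.setdefault(size, []).append(start)

request("A", 70); request("B", 35); request("C", 80)
release("A"); request("D", 60); release("B")
# After Return B: A's old 128 KB block and B's 64 KB block are free,
# D holds 64 KB at 192 KB, and C holds 128 KB at 256 KB.
```

Calling `release("D")` and then `release("C")` afterwards coalesces everything back into the single 1024 KB block, matching part (a) of the exercise.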
7. Explain memory management with bit maps in detail.
- With a bitmap, memory is divided into allocation units, perhaps as small as a
few words and perhaps as large as several kilobytes.
- Corresponding to each allocation unit there is a bit in a bitmap.
- Bit is 0 if the unit is free.
- Bit is 1 if unit is occupied.
- The size of the allocation unit is an important design issue: the smaller the
allocation unit, the larger the bitmap.
- A bitmap provides a simple way to keep track of memory words in a fixed
amount of memory, because the size of the bitmap depends only on the size
of memory and the size of the allocation unit.
- The main problem is that when it has been decided to bring in a k-unit
process, the memory manager must search the bitmap to find a run of k
consecutive 0 bits in the map.
- Searching a bitmap for a run of a given length is a slow operation.
- Figure 5-3 shows part of memory and the corresponding bitmap.

- Figure 5-3. (a) A part of memory with five processes and three holes. The
tick marks show the memory allocation units. The shaded regions (0 in the
bitmap) are free. (b) The corresponding bitmap. (c) The same information as
a list.
- Here are some advantages of bitmap memory management:
- Random allocation: Checking if a block is free only requires reading
the corresponding bit.
- Fast deletion: A bit can be flipped to "free" a block without
overwriting the data.
- Fast checking for large contiguous sections: a word's worth of bits can be
checked from the bitmap in a single read.
- However, bitmap memory management also has some disadvantages,
including:
- Higher memory requirements: One bit is needed per block, which can
add up to 128MB for a 1TB disk with 1KB blocks.
- Slower operation: Searching the bitmap for a run of a given length is
slow.
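Searching the bitmap for a run of k consecutive 0 bits, the slow operation noted above, can be sketched as follows (the bitmap contents are illustrative, not the exact Fig. 5-3 data):

```python
# Sketch (assumed data) of bitmap allocation: 0 = free unit, 1 = occupied.

bitmap = [1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0,
          1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0]

def find_run(bitmap, k):
    """Return the index of the first run of k free (0) units, or None."""
    run_start, run_len = None, 0
    for i, bit in enumerate(bitmap):
        if bit == 0:
            if run_len == 0:
                run_start = i       # a new run of zeros begins here
            run_len += 1
            if run_len == k:
                return run_start
        else:
            run_len = 0             # run broken by an occupied unit
    return None

print(find_run(bitmap, 4))   # 22 -- the first hole with 4 free units
```

The linear scan is exactly why the text calls this a slow operation: in the worst case every bit of the map must be examined.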

8. Explain memory management with linked list in details.


- Another way to keep track of memory is to maintain a linked list of allocated
and free memory segments, where a segment either contains a process or is an
empty hole between two processes.
- The memory of Fig. 5-3(a) is represented in Fig. 5-3(c) as a linked list of
segments.
- Each entry in the list specifies a hole (H) or process (P), the address at which
it starts, the length, and a pointer to the next entry.
- The segment list is kept sorted by address. Sorting this way has the
advantage that when a process terminates or is swapped out, updating the list
is straightforward.
- A terminating process normally has two neighbors (except when it is at the
very top or bottom of memory).
- These may be either processes or holes, leading to the four combinations of
Fig. 5-4.
- In Fig. 5-4(a) updating the list requires replacing a P by an H.
- In Fig. 5-4(b) and Fig. 5-4(c) two entries are merged into one, and the list
becomes one entry shorter.
- In Fig. 5-4(d), three entries are merged and two items are removed from the
list.

- Figure 5-4: Four neighbor combinations for the terminating process, X.
- When the processes and holes are kept on a list sorted by address, several
algorithms can be used to allocate memory for a newly created process (or
an existing process being swapped in from disk). We assume that the
memory manager knows how much memory to allocate.
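The four neighbor combinations of Fig. 5-4 can be sketched with an assumed list-of-tuples representation (kind, start, length); the `terminate` helper name is illustrative:

```python
# Sketch (assumed representation) of the four neighbor combinations: when
# process X terminates, its entry becomes a hole and merges with any
# adjacent holes in the address-sorted segment list.

def terminate(segments, index):
    """segments: list of ('P'|'H', start, length); X is at `index`."""
    kind, start, length = segments[index]
    segments[index] = ('H', start, length)        # X becomes a hole
    # merge with the following hole, if any
    if index + 1 < len(segments) and segments[index + 1][0] == 'H':
        _, _, nxt_len = segments.pop(index + 1)
        segments[index] = ('H', start, length + nxt_len)
    # merge with the preceding hole, if any
    if index > 0 and segments[index - 1][0] == 'H':
        _, prev_start, prev_len = segments.pop(index - 1)
        index -= 1
        _, _, cur_len = segments[index]
        segments[index] = ('H', prev_start, prev_len + cur_len)
    return segments

# Case (d): hole on both sides -- three entries merge into one
segs = [('H', 0, 2), ('P', 2, 3), ('H', 5, 1)]
print(terminate(segs, 1))    # [('H', 0, 6)]
```

Cases (a) through (c) fall out of the same function: a P surrounded by processes simply becomes an H, and a single free neighbor produces one merge instead of two.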

9. What are the differences between internal and external memory fragmentation?

- Internal fragmentation occurs when a process is allocated a block larger
than it requested; the unused space inside the allocated block is wasted.
It occurs with fixed-sized partitions and with paging.
- External fragmentation occurs when the total free memory is sufficient,
but it is broken into many small non-contiguous holes, so no single hole
is large enough to satisfy a request. It occurs with dynamic
(variable-sized) partitioning.
- Internal fragmentation can be reduced by choosing partition or block sizes
closer to process sizes; external fragmentation can be reduced by
compaction, or avoided by paging and segmentation.
10. Explain following allocation algorithm.
a. First fit
b. Best fit
c. Worst fit
d. Next fit
1. First Fit: The partition allocated is the first sufficient block from the top of
main memory. The algorithm scans memory from the beginning and chooses the first
available block that is large enough, allocating the first hole that fits.
First fit is the simplest of these algorithms and is fast because it searches as
little as possible.
2. Best Fit: Allocate the process to the smallest sufficient partition among the
free available partitions. It searches the entire list of holes to find the
smallest hole whose size is greater than or equal to the size of the process.
Rather than breaking up a big hole that might be needed later, best fit tries to
find a hole that is close to the actual size needed.

3. Worst Fit: Allocate the process to the largest sufficient partition among the
free available partitions in main memory. It is the opposite of the best-fit
algorithm: it searches the entire list of holes to find the largest hole and
allocates it to the process. Simulation has shown that worst fit is not a very
good idea either.

4. Next Fit: Works the same as first fit, but maintains a pointer to the location
of the last allocation. When a new request arrives, it starts its search from
there, unlike first fit, which always starts from the beginning of memory.
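A hedged side-by-side sketch of the four algorithms on one assumed hole list (hole sizes and helper names are illustrative; next fit's pointer is passed explicitly here rather than stored):

```python
# Illustrative comparison (assumed hole list) of the four placement
# algorithms; each returns the index of the chosen hole, sizes in KB.

holes = [100, 500, 200, 300, 600]   # free holes in address order

def first_fit(holes, size):
    for i, h in enumerate(holes):
        if h >= size:
            return i                # first hole large enough
    return None

def best_fit(holes, size):
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(fits)[1] if fits else None   # smallest sufficient hole

def worst_fit(holes, size):
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(fits)[1] if fits else None   # largest sufficient hole

def next_fit(holes, size, last=0):
    n = len(holes)
    for step in range(n):
        i = (last + step) % n      # resume from the last allocation point
        if holes[i] >= size:
            return i
    return None

print(first_fit(holes, 212))         # 1 -> the 500 KB hole
print(best_fit(holes, 212))          # 3 -> the 300 KB hole
print(worst_fit(holes, 212))         # 4 -> the 600 KB hole
print(next_fit(holes, 212, last=2))  # 3 -> searching onward from index 2
```

Running all four on the same 212 KB request makes the trade-offs concrete: first fit is fastest, best fit minimizes leftover space, and worst fit leaves the largest remainder.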

11. Explain the difference between logical and physical addresses?


**Logical Addresses:**

1. **Virtual Addresses**: Logical addresses, also known as virtual addresses, are
generated by the CPU during program execution. They represent the memory
locations that a program accesses.
2. **Independent of Hardware**: Logical addresses are independent of the
underlying hardware. They provide a uniform and abstract view of memory for
each process, allowing processes to operate independently of the physical
memory layout.
3. **Translated by OS**: The operating system translates logical addresses into
physical addresses before accessing memory. This translation is necessary to
map the abstract logical addresses to the actual physical locations in memory.
4. **Contiguous and Zero-Based**: Logical addresses are typically contiguous and
start from zero for each process. Each process sees its own logical address space
as a continuous range of addresses, regardless of the actual physical memory
layout.
5. **Used for Memory Management**: Logical addresses play a crucial role in
memory management. They allow processes to reference memory without
needing to know the physical details of the memory layout, facilitating efficient
memory allocation and protection.
6. **May Not Reflect Physical Layout**: The logical address space may not
reflect the actual physical memory layout. Logical addresses provide a
convenient abstraction that hides the complexities of physical memory
organization from processes.
7. **Subject to Address Translation**: Since logical addresses need to be
translated into physical addresses before accessing memory, they are subject to
address translation mechanisms implemented by the operating system and
hardware.
8. **Used in Virtual Memory Systems**: Logical addresses are central to virtual
memory systems, where memory addresses are managed in a virtual address
space that is larger than the physical memory available on the system.

**Physical Addresses:**
1. **Actual Memory Locations**: Physical addresses represent the actual locations
of data in the computer's physical memory (RAM). They correspond directly to
the physical memory chips or storage devices where data is stored.

2. **Unique and Fixed**: Unlike logical addresses, physical addresses are unique
and fixed. Each physical address corresponds to a specific location in physical
memory, reflecting the actual layout of memory chips in the system.
3. **Used by Memory Management Unit (MMU)**: Physical addresses are used
by the Memory Management Unit (MMU) to access memory hardware and
retrieve data for the CPU. The MMU translates logical addresses into physical
addresses during memory access.
4. **Determined by Hardware**: The physical memory layout is determined by
the hardware configuration of the system, including the number and
arrangement of memory chips.
5. **Directly Accesses Memory**: Physical addresses provide direct access to the
underlying memory hardware. They are used by the MMU to fetch data from
physical memory locations.
6. **Subject to Hardware Constraints**: Physical addresses are subject to
hardware constraints such as the maximum addressable memory size supported
by the system architecture.
7. **Not Affected by Address Translation**: Physical addresses are not affected
by address translation mechanisms implemented by the operating system. They
represent the actual locations of data in physical memory, without abstraction or
translation.
8. **Used for Low-Level Memory Operations**: Physical addresses are used for
low-level memory operations such as memory allocation, deallocation, and
direct memory access (DMA). They provide the means to access and
manipulate data at the hardware level.
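As a hedged illustration of the translation step (the page size and page-table contents are assumed values), a paged MMU splits the logical address into a page number and an offset, then substitutes the frame number from the page table:

```python
# Sketch (assumed values) of paged logical-to-physical address translation.

PAGE_SIZE = 4096                   # 4 KB pages (illustrative)
page_table = {0: 5, 1: 9, 2: 1}    # page number -> frame number (assumed)

def mmu_translate(logical):
    page, offset = divmod(logical, PAGE_SIZE)
    frame = page_table[page]       # raises KeyError on an unmapped page
    return frame * PAGE_SIZE + offset

print(mmu_translate(4100))   # page 1, offset 4 -> 9*4096 + 4 = 36868
```

The offset passes through unchanged; only the page number is remapped, which is why logical addresses can be contiguous and zero-based while physical frames are scattered.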
