OS Unit 6
Figure 5-1. (a) Fixed memory partitions with separate input queues for
each partition. (b) Fixed memory partitions with a single input queue.
3. Explain multiprogramming with dynamic partition.
- This method also allows multiple processes to execute simultaneously.
- Here, memory is shared among the operating system and the various
simultaneously running processes.
- Memory is not divided into fixed partitions, and the number of
partitions is not fixed. A process is allocated exactly as much memory as it
requires.
- Initially, the entire available memory is treated as a single free partition.
- Whenever a process enters the system, a chunk of free memory big
enough to fit the process is found and allocated. The remaining
unoccupied space is treated as another free partition.
- If not enough free memory is available to fit the process, the process must
wait until the required memory becomes available.
- Whenever a process terminates, it releases the space it occupied.
If the released space is contiguous with another free partition, the two
free partitions are merged into a single free partition.
- Memory utilization is better than with fixed-size partitioning.
- This method suffers from external fragmentation.
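The allocate/merge behavior described above can be sketched in a few lines. This is an illustrative model, not real allocator code: the class name, the `(start, size)` tuples, and the process IDs are all made up for the example.

```python
class DynamicMemory:
    """Toy model of dynamic partitioning with a free list sorted by address."""

    def __init__(self, total):
        # Initially, the entire available memory is a single free partition.
        self.free = [(0, total)]          # list of (start, size)
        self.allocated = {}               # pid -> (start, size)

    def allocate(self, pid, size):
        for i, (start, length) in enumerate(self.free):
            if length >= size:
                self.allocated[pid] = (start, size)
                if length == size:
                    del self.free[i]
                else:
                    # Remaining unoccupied space becomes another free partition.
                    self.free[i] = (start + size, length - size)
                return True
        return False                      # process must wait for memory

    def release(self, pid):
        start, size = self.allocated.pop(pid)
        self.free.append((start, size))
        self.free.sort()
        # Merge contiguous free partitions into single larger ones.
        merged = [self.free[0]]
        for s, l in self.free[1:]:
            ps, pl = merged[-1]
            if ps + pl == s:
                merged[-1] = (ps, pl + l)
            else:
                merged.append((s, l))
        self.free = merged
```

For example, allocating 30 KB and 20 KB from a 100 KB memory and then releasing both leaves a single merged 100 KB free partition rather than two fragments.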
Figure 5-2. Memory allocation changes as processes come into memory and
leave it. The shaded regions are unused memory.
- Here, 36 KB is greater than 32 KB but less than 64 KB. Therefore a 64 KB
partition is suitable for storing the process.
- Why buddy system:
- There is a limit on the number of active processes in static
partitioning. Space usage is also inefficient if there is a big difference
between partition size and process size.
- Dynamic partitioning is more complex to maintain because the size of
the partition changes when the partition is allocated to a new process.
- Buddy system falls somewhere between static and dynamic
partitioning. It supports limited but efficient splitting and combining
of memory blocks.
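The splitting behavior of a buddy system can be sketched as follows: the request is rounded up to the next power of two, and a larger free block is halved repeatedly into equal "buddies" until a block of that size is produced. The function names are illustrative; a real buddy allocator would also track and recombine the free buddies.

```python
def round_up_pow2(size):
    """Round a request up to the next power of two, as the buddy system does."""
    p = 1
    while p < size:
        p *= 2
    return p

def split_for(request, block_size):
    """Split `block_size` (a power of two) down to the size needed for
    `request`; return that size and the free buddies produced along the way."""
    need = round_up_pow2(request)
    free_buddies = []
    while block_size > need:
        block_size //= 2
        free_buddies.append(block_size)   # one half stays free at each split
    return need, free_buddies
```

Using the 36 KB example above: the request rounds up to 64 KB, and splitting a 256 KB block for it leaves free buddies of 128 KB and 64 KB.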
- Figure 5-3. (a) A part of memory with five processes and three holes. The
tick marks show the memory allocation units. The shaded regions (0 in the
bitmap) are free. (b) The corresponding bitmap. (c) The same information as
a list.
- Here are some advantages of bitmap memory management:
- Fast status checks: Checking whether a block is free only requires
reading the corresponding bit.
- Fast deletion: A bit can be flipped to "free" a block without
overwriting the data.
- Fast checking for large contiguous sections: A word-sized group of bits
can be checked from the bitmap in a single read.
- However, bitmap memory management also has some disadvantages,
including:
- Higher memory requirements: One bit is needed per block, which can
add up to 128 MB for a 1 TB disk with 1 KB blocks.
- Slower operation: Searching the bitmap for a run of a given length is
slow.
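The slow part mentioned above, scanning the bitmap for a run of free bits, can be sketched directly. This is a minimal illustration using a Python list of 0/1 values (1 = in use, 0 = free); a real implementation would pack the bits into words.

```python
def find_free_run(bitmap, k):
    """Return the index of the first run of k free (0) bits, or -1 if none.
    This linear scan is exactly why bitmap allocation is slow."""
    run_start, run_len = 0, 0
    for i, bit in enumerate(bitmap):
        if bit == 0:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len == k:
                return run_start
        else:
            run_len = 0
    return -1

def allocate(bitmap, k):
    """Allocate k consecutive units by flipping their bits to 1."""
    start = find_free_run(bitmap, k)
    if start >= 0:
        for i in range(start, start + k):
            bitmap[i] = 1
    return start
```

Freeing is the cheap direction: flipping the same bits back to 0 releases the units without touching the data they held.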
- Fig: Four neighbor combinations for the terminating process, X.
- When the processes and holes are kept on a list sorted by address, several
algorithms can be used to allocate memory for a newly created process (or
an existing process being swapped in from disk). We assume that the
memory manager knows how much memory to allocate.
3. Worst Fit: Allocate the process to the largest sufficient partition among the
free partitions in main memory. It is the opposite of the best-fit algorithm: it
searches the entire list of holes to find the largest hole and allocates it to the
process. Simulation has shown that worst fit is not a very good idea either.
4. Next Fit: Works the same as first fit, but maintains a pointer to the last
allocated memory space. When a new request arrives, it starts its search from
there, unlike first fit, which starts from the beginning of memory.
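The two algorithms above can be sketched over a hole list sorted by address. Each hole is an illustrative `(start, size)` tuple; both functions return the index of the chosen hole, or -1 if no hole fits.

```python
def worst_fit(holes, size):
    """Scan the entire hole list and pick the largest hole that fits."""
    best = -1
    for i, (_, length) in enumerate(holes):
        if length >= size and (best < 0 or length > holes[best][1]):
            best = i
    return best

def next_fit(holes, size, last):
    """Resume the search from where the previous one stopped (index `last`),
    wrapping around once, instead of always starting at the front."""
    n = len(holes)
    for step in range(n):
        i = (last + step) % n
        if holes[i][1] >= size:
            return i
    return -1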
**Physical Addresses:**
1. **Actual Memory Locations**: Physical addresses represent the actual locations
of data in the computer's physical memory (RAM). They correspond directly to
the physical memory chips or storage devices where data is stored.
2. **Unique and Fixed**: Unlike logical addresses, physical addresses are unique
and fixed. Each physical address corresponds to a specific location in physical
memory, reflecting the actual layout of memory chips in the system.
3. **Used by Memory Management Unit (MMU)**: Physical addresses are used
by the Memory Management Unit (MMU) to access memory hardware and
retrieve data for the CPU. The MMU translates logical addresses into physical
addresses during memory access.
4. **Determined by Hardware**: The physical memory layout is determined by
the hardware configuration of the system, including the number and
arrangement of memory chips.
5. **Directly Accesses Memory**: Physical addresses provide direct access to the
underlying memory hardware. They are used by the MMU to fetch data from
physical memory locations.
6. **Subject to Hardware Constraints**: Physical addresses are subject to
hardware constraints such as the maximum addressable memory size supported
by the system architecture.
7. **Not Affected by Address Translation**: Physical addresses are not affected
by address translation mechanisms implemented by the operating system. They
represent the actual locations of data in physical memory, without abstraction or
translation.
8. **Used for Low-Level Memory Operations**: Physical addresses are used for
low-level memory operations such as memory allocation, deallocation, and
direct memory access (DMA). They provide the means to access and
manipulate data at the hardware level.
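The MMU translation mentioned in point 3 can be illustrated with the simplest scheme, a base-and-limit pair of registers. This is a deliberately simplified model (real MMUs use paging or segmentation); the register values in the example are made up.

```python
def translate(logical, base, limit):
    """Simplified base-and-limit MMU model: the logical address is checked
    against the limit register, then offset by the base register to produce
    the physical address in RAM."""
    if logical >= limit:
        raise MemoryError("address beyond process limit (protection fault)")
    return base + logical
```

With a base of 4000 and a limit of 1000, logical address 100 maps to physical address 4100, while logical address 1500 triggers a protection fault instead of silently touching another process's memory.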