
Overlays

To enable a process to be larger than the amount of memory allocated to it, we can use overlays.
The idea of overlays is to keep in memory only those instructions and data that are needed at any
given time. When other instructions are needed, they are loaded into the space previously occupied
by instructions that are no longer needed.

3 Points about Overlays


• Allow a process to be larger than the amount of memory allocated to it.
• Keep in memory only those instructions and data that are needed at any given time.
• When other data/instructions are needed, they are loaded into the space occupied previously
by instructions and/or data that are no longer needed.
The concept of overlays is illustrated with the example of a two-pass assembler/compiler with the
following specifications:
• 2-pass assembler/compiler
• Available main memory: 150K
• Code size: 200K
- Pass 1………………….70K
- Pass 2………………….80K
- Common routines……30K
- Symbol table…………20K
The common routines, symbol table, overlay driver, and Pass 1 code are loaded into main memory
for program execution to start. When Pass 1 has finished its work, the Pass 2 code is loaded on top
of the Pass 1 code (because that code is no longer needed). This way, we can execute a 200K process
in 150K of memory. The diagram below shows this pictorially.
The figure above contains a root part, main, with a size of 50K bytes and two overlays, named
“search” and “update”, with sizes of 35K bytes and 30K bytes respectively. The total size of the
program is 115K bytes; however, its memory requirement is only 85K bytes. The overlays “search”
and “update” occupy the same memory area, hence only one of them can be in memory at any time.
Thus, the program should be able to execute when the parts main and search, or main and update,
are in memory. How can the program execute without having the part “update” in memory? It can
execute if the functions performed by the parts “search” and “update” are independent of one
another. The program will need to perform both searches and updates during its execution. Hence,
when updates are to be performed, the program takes the help of the OS to load the overlay
“update”. If a search is to be performed after an update, the overlay “search” would have to be
loaded back into memory. Thus, some loading actions have to take place during the execution of the
program, and these actions slow down the program’s execution.
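
The loading actions described above are typically carried out by a small overlay driver that stays
resident together with the root part. The following C sketch is only a rough illustration of that idea,
assuming hypothetical overlay image files (search.ovl, update.ovl), an arbitrarily sized shared
region, and a load_overlay helper invented for this example; real overlay support was handled by
the linker and loader rather than by application code.

#include <stdio.h>
#include <string.h>

/* Hypothetical sizes: one shared region large enough for the bigger
 * of the two overlays ("search" = 35K, "update" = 30K). */
#define OVERLAY_REGION_SIZE (35 * 1024)

static unsigned char overlay_region[OVERLAY_REGION_SIZE];
static char current_overlay[32] = "";

/* Load an overlay image from disk into the shared region,
 * replacing whatever overlay occupied it before. */
static int load_overlay(const char *name, const char *path)
{
    if (strcmp(current_overlay, name) == 0)
        return 0;                          /* already resident, nothing to do */

    FILE *f = fopen(path, "rb");
    if (f == NULL) {
        fprintf(stderr, "cannot load overlay image %s\n", path);
        return -1;
    }

    size_t n = fread(overlay_region, 1, sizeof overlay_region, f);
    fclose(f);

    printf("loaded overlay '%s' (%zu bytes) over '%s'\n",
           name, n, current_overlay[0] ? current_overlay : "(empty)");
    strncpy(current_overlay, name, sizeof current_overlay - 1);
    return 0;
}

int main(void)
{
    /* The root part ("main") stays resident; overlays share one region. */
    load_overlay("search", "search.ovl");  /* perform searches ...             */
    load_overlay("update", "update.ovl");  /* now perform updates ...          */
    load_overlay("search", "search.ovl");  /* search again: reload the overlay */
    return 0;
}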
Contiguous Memory Allocation
The main memory must accommodate both the operating system and the user processes, which
requires allocating main memory as efficiently as possible.
The memory is usually divided into two partitions:
- One for the resident operating system
- One for the user processes
The operating system generally resides in low memory, a decision driven mainly by the location of
the interrupt vector. Since the interrupt vector is often in low memory, programmers usually place
the operating system in low memory as well.
When several processes need to reside in main memory at the same time, we must consider how to
allocate the available memory to the processes waiting in the input queue to be brought into
memory. In contiguous memory allocation, each process is contained in a single contiguous section
of memory.
Memory mapping and Protection
When several processes want to reside in the main memory so they must be protected from
modifying each other’s data. This can be incorporated by introducing the relocation register. As
show in the below figure.

HARDWARE SUPPORT FOR RELOCATION AND LIMIT REGISTERS

The relocation register contains the value of the smallest physical address; the limit register
contains the range of logical addresses. With relocation and limit registers, each logical address
must be less than the value in the limit register; the MMU maps the logical address dynamically by
adding the value in the relocation register, and this mapped address is sent to memory.
When the CPU scheduler selects a process for execution, the dispatcher loads the relocation and
limit registers with the correct values as part of the context switch.
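
To make the relocation and limit check concrete, the following C sketch mimics the translation the
MMU performs for every memory reference under this scheme. The register values, the map_address
function, and the abort-on-error behaviour are illustrative assumptions; in reality the comparison
and addition are done in hardware, and a violating address raises a trap to the operating system.

#include <stdio.h>
#include <stdlib.h>

/* Illustrative register contents, loaded by the dispatcher at
 * context-switch time (values chosen arbitrarily for this example). */
static unsigned long relocation_register = 100000;  /* smallest physical address  */
static unsigned long limit_register      = 120000;  /* range of logical addresses */

/* What the MMU does for each logical address the CPU issues. */
static unsigned long map_address(unsigned long logical)
{
    if (logical >= limit_register) {
        /* Hardware would trap to the OS; here we simply abort. */
        fprintf(stderr, "addressing error: %lu is outside limit %lu\n",
                logical, limit_register);
        exit(EXIT_FAILURE);
    }
    return logical + relocation_register;  /* physical address sent to memory */
}

int main(void)
{
    printf("logical 346    -> physical %lu\n", map_address(346));
    printf("logical 119999 -> physical %lu\n", map_address(119999));
    return 0;
}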
Memory Allocation
The following are common methods for allocating memory:
• One of the simplest ways of allocating memory is to divide memory into several fixed-
sized partitions. Each partition may contain exactly one process, so the degree of
multiprogramming is bound by the number of partitions. In this method, when a partition is
free, a process is selected from the input queue and loaded into the free partition. When the
process terminates, the partition becomes available for another process.
• In the variable-partition scheme, the operating system keeps a table indicating which parts
of memory are available and which are occupied. Initially, all memory is available for user
processes, and is considered one large block of available memory, a hole.
• An alternative approach is to keep a list of unused ( free ) memory blocks ( holes ), and to find
a hole of a suitable size whenever a process needs to be loaded into memory. There are many
different strategies for finding the "best" allocation of memory to processes; the three most
commonly discussed are listed below (a short sketch of all three follows this list):
1. First fit - Search the list of holes until one is found that is big enough to satisfy the
request, and assign a portion of that hole to that process. Whatever fraction of the
hole not needed by the request is left on the free list as a smaller hole. Subsequent
requests may start looking either from the beginning of the list or from the point at
which this search ended.
2. Best fit - Allocate the smallest hole that is big enough to satisfy the request. This
saves large holes for other process requests that may need them later, but the
resulting unused portions of holes may be too small to be of any use, and will
therefore be wasted. Keeping the free list sorted can speed up the process of finding
the right hole.
3. Worst fit - Allocate the largest hole available, thereby increasing the likelihood that
the remaining portion will be usable for satisfying future requests.
• Simulations show that both first fit and best fit are better than worst fit in terms of both time
and storage utilization. First fit and best fit are about equal in terms of storage utilization, but
first fit is generally faster.
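
As a concrete sketch of the three placement strategies above, the following C program scans a
small, hypothetical array of holes and reports which hole each strategy would choose for a single
request. The hole sizes, the request size, and the choose_hole helper are assumptions made purely
for illustration; a real allocator would also split the chosen hole and maintain the free list.

#include <stdio.h>
#include <stddef.h>

/* A hypothetical free list: sizes (in KB) of the current holes. */
static size_t holes[] = { 100, 500, 200, 300, 600 };
static const size_t NHOLES = sizeof holes / sizeof holes[0];

enum strategy { FIRST_FIT, BEST_FIT, WORST_FIT };

/* Return the index of the hole the strategy would pick,
 * or -1 if no hole is big enough. */
static int choose_hole(size_t request, enum strategy s)
{
    int chosen = -1;
    for (size_t i = 0; i < NHOLES; i++) {
        if (holes[i] < request)
            continue;                      /* hole too small, skip it    */
        if (s == FIRST_FIT)
            return (int)i;                 /* first big-enough hole wins */
        if (chosen == -1 ||
            (s == BEST_FIT  && holes[i] < holes[(size_t)chosen]) ||
            (s == WORST_FIT && holes[i] > holes[(size_t)chosen]))
            chosen = (int)i;
    }
    return chosen;
}

int main(void)
{
    size_t request = 212;  /* a 212K request against the holes above */
    printf("first fit -> hole index %d\n", choose_hole(request, FIRST_FIT));
    printf("best fit  -> hole index %d\n", choose_hole(request, BEST_FIT));
    printf("worst fit -> hole index %d\n", choose_hole(request, WORST_FIT));
    return 0;
}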
