
Operating System

UNIT - 3
Roadmap
• Background
• Swapping
• Basic requirements of Memory
Management.
• Memory Partitioning
• Basic blocks of memory management
—Paging
—Segmentation
— Segmentation with Paging
Background
• Program must be brought into memory and placed within a
process for it to be run.

• Input queue – collection of processes on the disk that are
waiting to be brought into memory to run the program.

• User programs go through several steps before being run.


Binding of Instructions and Data to Memory

Address binding of instructions and data to memory addresses can
happen at three different stages:

• Compile time: If the memory location is known a priori,
absolute code can be generated; must recompile the code if the
starting location changes.
• Load time: Must generate relocatable code if the memory
location is not known at compile time.
• Execution time: Binding delayed until run time if the process
can be moved during its execution from one memory segment to
another. Needs hardware support for address maps (e.g., base
and limit registers).
Multistep Processing of a User Program
From Source to Executable
[Figure: a source program (foo.c) is compiled into an object
module (foo.o); the linkage editor combines it with other object
modules and static library routines (libc.a: printf, scanf,
gets, fopen, exit) into a load module (a.out); at "load time"
the loader places the module into machine memory alongside the
kernel, system calls, and other programs. Dynamic library case
not shown.]


Logical vs. Physical Address Space
• The concept of a logical address space that is bound to a
separate physical address space is central to proper memory
management.
— Logical address – generated by the CPU; also referred to
as virtual address.
— Physical address – address seen by the memory unit.

• Logical and physical addresses are the same in compile-time
and load-time address-binding schemes; logical (virtual) and
physical addresses differ in the execution-time address-binding
scheme.
Memory-Management Unit (MMU)
• Hardware device that maps virtual to physical
address.

• In the MMU scheme, the value in the relocation register is
added to every address generated by a user process at the time
it is sent to memory.

• The user program deals with logical addresses; it never sees
the real physical addresses.

• The MS-DOS operating system running on the Intel 80x86 family
of processors uses relocation registers when loading and running
processes.
Dynamic relocation using a relocation register
Dynamic Loading
• Routine is not loaded until it is called
• Better memory-space utilization; unused
routine is never loaded.
• Useful when large amounts of code are
needed to handle infrequently occurring
cases.
• No special support from the operating
system is required; dynamic loading is
implemented through program design.
Dynamic Linking
• Linking postponed until execution time.
• Small piece of code, stub, used to locate
the appropriate memory-resident library
routine.
• Stub replaces itself with the address of
the routine, and executes the routine.
• Operating system support is needed to check
whether the routine is in the process's
memory address space.
• Dynamic linking is particularly useful for
libraries.
Memory Management
• Subdividing memory to accommodate
multiple processes
• Memory needs to be allocated to ensure a
reasonable supply of ready processes to
consume available processor time
Memory Management
• Uni-program
—Memory split into two
—One for Operating System (monitor)
—One for currently executing program
• Multi-program
—“User” part is sub-divided and shared among
active processes
Memory Allocation
Single-user Contiguous

[Figure: a single contiguous memory of 1024 units holds the
Operating System, then the User Program, with the remainder
Unused.]
Multi-programming
Memory Partitioning

• Increase processor
utilization

• To take advantage of
multiprogramming, several
processes must reside in
memory
Memory Management Requirements

• Relocation
• Protection
• Sharing
• Logical organization
• Physical organization
Requirements: Relocation

• The programmer does not know where the program will be placed
in memory when it is executed,
—it may be swapped to disk and return to main memory at a
different location (relocated)
• Memory references must be translated to
the actual physical memory address
Requirements: Protection

• Processes should not be able to reference memory locations in
another process without permission
• Impossible to check absolute addresses at
compile time
• Must be checked at run time
Hardware Support for Relocation and Limit Registers
Requirements: Sharing

• Allow several processes to access the same portion of memory
• Better to allow each process access to the same copy of the
program rather than have their own separate copy
Requirements: Logical Organization

• Memory is organized linearly (usually)
• Programs are written in modules
—Modules can be written and compiled
independently
• Different degrees of protection given to
modules (read-only, execute-only)
• Share modules among processes
• Segmentation helps here
Requirements: Physical Organization

• Cannot leave the programmer with the responsibility to manage
memory
• Memory available for a program plus its
data may be insufficient
—Overlaying allows various modules to be
assigned the same region of memory but is
time consuming to program
• Programmer does not know how much
space will be available
Swapping
• Problem: I/O is so slow compared with
CPU that even in multi-programming
system, CPU can be idle most of the time
• Solutions:
—Increase main memory
– Expensive
– Leads to larger programs
—Swapping
What is Swapping?
• Long term queue of processes stored on
disk
• Processes “swapped” in as space becomes
available
• As a process completes it is moved out of
main memory
• If none of the processes in memory are
ready (i.e. all I/O blocked)
—Swap out a blocked process to intermediate
queue
—Swap in a ready process or a new process
—But swapping is an I/O process…
Schematic View of Swapping
Memory Partitioning

• An early method of managing memory
—Pre-virtual memory
—Not used much now
• But, it will clarify the later discussion of
virtual memory if we look first at
partitioning
—Virtual Memory has evolved from the
partitioning methods
Types of Partitioning

• Fixed Partitioning
• Dynamic Partitioning
• Simple Paging
• Simple Segmentation
• Virtual Memory Paging
• Virtual Memory Segmentation
Fixed Partitioning (IBM mainframe OS, OS/MFT:
Multiprogramming with a Fixed number of Tasks)

• Equal-size partitions (see fig)
—Any process whose size is less than or equal to the partition
size can be loaded into an available partition
• The operating system can swap a
process out of a partition
—If none are in a ready or running
state
Fixed Partitioning Problems

• A program may not fit in a partition.
—The programmer must design the program with overlays
• Main memory use is inefficient.
—Any program, no matter how small, occupies an entire partition.
—This results in internal fragmentation.
Overlays
• Keep in memory only those instructions
and data that are needed at any given
time.

• Needed when a process is larger than the amount of memory
allocated to it.

• Implemented by the user; no special support is needed from
the operating system, but the programming design of the overlay
structure is complex.
Overlays

Used when a program didn't fit into the existing memory space.

[Figure: the Operating System and the resident User Program
(main/control logic) stay in memory; Initialization, Processing,
and Output overlays take turns occupying a shared Overlay Area,
with the remainder Unused.]
Solution – Unequal Sized Fixed Partitions

• Lessens both problems
—but doesn't solve them completely
• In Fig ,
—Programs up to 16M can be
accommodated without overlay
—Smaller programs can be placed in
smaller partitions, reducing internal
fragmentation
Fixed
Partitioning
Placement Algorithm

• Equal-size
—Placement is trivial (no options)
• Unequal-size
—Can assign each process to the smallest
partition within which it will fit
—Queue for each partition
—Processes are assigned in such a way as to
minimize wasted memory within a partition(
Internal fragmentation )
Fixed Partitioning
Remaining Problems with Fixed
Partitions

• The number of active processes is limited by the system
—i.e., limited by the pre-determined number of partitions
• A large number of very small processes will not use the space
efficiently
—In either fixed or variable length partition methods
Dynamic Partitioning

• Partitions are of variable length and number
• Process is allocated exactly as much memory as required
• Example: IBM's mainframe operating system, OS/MVT
(Multiprogramming with a Variable number of Tasks)
Example: I
Effect of Dynamic Partitioning
Dynamic Partitioning Example

[Figure: OS (8M) at the bottom of memory with 56M for user
processes; after processes P1 (20M), P2 (14M), P3 (18M), and
P4 (8M) are loaded, swapped out, and replaced, the user memory
is broken up by empty holes of 6M, 6M, and 4M.]

• External Fragmentation
• Memory external to all processes is fragmented
• Can resolve using compaction
—OS moves processes so that they are contiguous
—Time consuming and wastes CPU time
Refer to Figure 7.4


Dynamic Partitioning

• Best-fit algorithm
—Chooses the block that is closest in size to the
request
—Worst performer overall
—Since smallest block is found for process, the
smallest amount of fragmentation is left
—Memory compaction must be done more often
Dynamic Partitioning

• First-fit algorithm
—Scans memory from the beginning and
chooses the first available block that is large
enough
—Fastest
—May have many processes loaded in the front
end of memory that must be searched over
when trying to find a free block
Dynamic Partitioning

• Next-fit
—Scans memory from the location of the last
placement
—More often allocates a block of memory at the
end of memory, where the largest block is
found
—The largest block of memory is broken up into
smaller blocks
—Compaction is required to obtain a large block
at the end of memory
Dynamic Partitioning

• Worst-fit
—Chooses the largest available block for the
request
—Leaves the largest leftover hole
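The four placement algorithms above differ only in how they pick a free block. A minimal Python sketch (the block sizes and the 12M request are hypothetical; each function returns the index of the chosen free block, or None):

```python
def first_fit(blocks, size):
    # Scan from the beginning; take the first block large enough.
    for i, b in enumerate(blocks):
        if b >= size:
            return i
    return None

def best_fit(blocks, size):
    # Take the smallest block that still fits (closest in size).
    fits = [(b, i) for i, b in enumerate(blocks) if b >= size]
    return min(fits)[1] if fits else None

def worst_fit(blocks, size):
    # Take the largest block, leaving the largest leftover hole.
    fits = [(b, i) for i, b in enumerate(blocks) if b >= size]
    return max(fits)[1] if fits else None

blocks = [6, 14, 19, 11, 13]   # free hole sizes in MB (made up)
print(first_fit(blocks, 12))   # 1: 14M is the first big enough
print(best_fit(blocks, 12))    # 4: 13M is the closest fit
print(worst_fit(blocks, 12))   # 2: 19M is the largest hole
```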
Allocation
Buddy System

• The entire space available is treated as a single block of
size 2^U
• If a request of size s has 2^(U-1) < s <= 2^U
—the entire block is allocated
• Otherwise the block is split into two equal buddies
—The process continues until the smallest block greater than
or equal to s is generated
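The halving process can be sketched numerically: keep splitting the block as long as the lower half still fits the request. This is an illustrative sketch (the function name and the 1M default are assumptions, matching the example that follows):

```python
def buddy_alloc_size(request, total=2**20):
    # Smallest power-of-two block >= request, found by repeatedly
    # halving the total space (1M = 2^20 bytes by default).
    block = total
    while block // 2 >= request:
        block //= 2
    return block

# A 100K request out of 1M ends up in a 128K block:
print(buddy_alloc_size(100 * 1024))  # 131072
```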
Example of Buddy System

1M initially free.

Process A requests 100K:
—the 1M block is split into 512K + 512K
—one 512K buddy is split into 256K + 256K
—one 256K buddy is split into 128K + 128K
—A receives a 128K block; free blocks remaining: 128K, 256K,
512K
Example of Buddy System
Tree Representation of Buddy
System
Relocation

• When a program is loaded into memory, the actual (absolute)
memory locations are determined
• A process may occupy different partitions
which means different absolute memory
locations during execution
—Swapping
—Compaction
Addresses

• Logical
—Reference to a memory location independent
of the current assignment of data to memory.
• Relative
—Address expressed as a location relative to
some known point.
• Physical or Absolute
—The absolute address or actual location in main
memory.
Relocation
Registers Used during Execution

• Base register
—Starting address for the process
• Bounds register
—Ending location of the process
• These values are set when the process is
loaded or when the process is swapped in
Registers Used during Execution

• The value of the base register is added to a relative address
to produce an absolute address
• The resulting address is compared with
the value in the bounds register
• If the address is not within bounds, an
interrupt is generated to the operating
system
Paging
• Partition memory into small equal fixed-
size chunks and divide each process into
the same size chunks
• The chunks of a process are called pages
and chunks of memory are called frames
• Operating system maintains a page table
for each process
—Contains the frame location for each page
in the process
—Memory address consist of a page number
and offset within the page
Paging

• Split programs (processes) into equal-sized small chunks –
pages
• Allocate the required number of page frames to a process
• Operating System maintains list of free
frames
• A process does not require contiguous
page frames
• Use page table to keep track
Paging
• Logical address space of a process can be noncontiguous;
process is allocated physical memory whenever the
latter is available.
• Divide physical memory into fixed-sized blocks called
frames (size is power of 2, between 512 bytes and 8192
bytes).
• Divide logical memory into blocks of same size
called pages.
• Keep track of all free frames.
• To run a program of size n pages, need to find n
free frames and load program.
• Set up a page table to translate logical to
physical addresses.
• Internal fragmentation.
Assignment of Process Pages to Free Frames
Page Tables for Example Processes and Frames

[Figure: processes A (pages A.0–A.3), B (B.0–B.2), C (C.0–C.3),
and D (D.0–D.4) are assigned to free frames of main memory.
Each process has its own page table mapping its page numbers to
frame numbers, and the OS keeps a free frame list for the
remaining unassigned frames.]
Address Translation Scheme
• Address generated by CPU is divided into:
—Page number (p) – used as an index into a page table which
contains the base address of each page in physical memory.
— page number (p) = L / P (integer division, P = page size)
— offset (d) = L % P
where L is the logical address.
—Page offset (d) – combined with the base address to define the
physical memory address that is sent to the memory unit.
— Physical address = f * P + d, where f is the frame number
(with frames numbered from 1 this becomes (f-1) * P + d).
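The page-number/offset arithmetic above can be sketched in a few lines of Python. This is an illustrative sketch, not from the slides: the 1K page size and the page-to-frame mapping are made up, and frames are numbered from 0 (so the physical address is f * P + d; with 1-based frame numbers it would be (f-1) * P + d).

```python
PAGE_SIZE = 1024  # assumed page size P (1K), for illustration

def translate(logical, page_table):
    p = logical // PAGE_SIZE        # page number = L / P
    d = logical % PAGE_SIZE         # offset      = L % P
    f = page_table[p]               # frame holding page p
    return f * PAGE_SIZE + d        # physical address (0-based frames)

page_table = {0: 5, 1: 2, 2: 7}    # hypothetical page -> frame map
print(translate(2100, page_table))  # page 2, offset 52 -> 7*1024+52 = 7220
```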
Address Translation Architecture
Paging Example
Example: II
Logical and Physical Addresses - Paging
Paging Example
Free Frames

Before allocation After allocation


Allocation of Free Frames
Example Page Size
Implementation of Page Table
• Page table is kept in main memory.
• Page-table base register (PTBR) points to the page table.
• Page-table length register (PTLR) indicates size of the page
table.
• In this scheme every data/instruction access requires two
memory accesses. One for the page table and one for the
data/instruction.
• The two memory access problem can be solved by the use of
a special fast-lookup hardware cache called associative
memory or translation look-aside buffers (TLBs)
Associative Memory
• Associative memory – parallel search over a table of
(page #, frame #) pairs.

Address translation of (A´, A´´):
—If A´ is in an associative register, get the frame # out.
—Otherwise get the frame # from the page table in memory.
Paging Hardware With TLB
Translation Lookaside Buffer
TLB operation
TLB and
Cache Operation
Effective Access Time
• Associative lookup = ε time units
• Assume memory cycle time is 1 microsecond
• Hit ratio – percentage of times that a page number is found
in the associative registers; the ratio is related to the
number of associative registers.
• Hit ratio = α
• Effective Access Time (EAT)
EAT = (1 + ε) α + (2 + ε)(1 – α)
    = 2 + ε – α
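The EAT formula above is easy to check numerically. A minimal Python sketch (the ε = 0.2 and α = 0.8 values are made-up inputs, not from the slides):

```python
def eat(epsilon, alpha):
    # memory cycle time assumed to be 1 microsecond, as on the slide:
    # a TLB hit costs 1 + epsilon, a miss costs 2 + epsilon
    return (1 + epsilon) * alpha + (2 + epsilon) * (1 - alpha)

# With epsilon = 0.2 and hit ratio alpha = 0.8:
# EAT = 2 + 0.2 - 0.8 = 1.4 microseconds
print(round(eat(0.2, 0.8), 2))
```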
Memory Protection
• Memory protection implemented by
associating protection bit with each frame.

• Valid-invalid bit attached to each entry in the page table:
—"valid" indicates that the associated page is in the process'
logical address space, and is thus a legal page.
—"invalid" indicates that the page is not in the process'
logical address space.
Valid (v) or Invalid (i) Bit In A Page Table
Drawbacks of Paging:
• Does not consider the logical structure of modular
programming.
• Leads to internal fragmentation (the logical address space is
not an integer multiple of the page size).
• Max size of internal fragmentation is (P - 1), where P is the
page size.
Page Table Structure
• Hierarchical Paging
• Hashed Page Tables
• Inverted Page Tables
Hierarchical Page Tables
• Break up the logical address space into
multiple page tables.

• A simple technique is a two-level page table.
Two-Level Paging Example

• A logical address (on a 32-bit machine with 4K page size) is
divided into:
— a page number consisting of 20 bits.
— a page offset consisting of 12 bits.
• Since the page table is paged, the page number is further
divided into:
— a 10-bit page number.
— a 10-bit page offset.
• Thus, a logical address is as follows:

  page number   page offset
   p1    p2         d
   10    10         12
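The 10/10/12 split above is just bit masking and shifting. A short Python sketch (the example address is hypothetical):

```python
def split_address(addr):
    # 32-bit logical address: 10-bit outer page number (p1),
    # 10-bit inner page number (p2), 12-bit offset (d)
    d  = addr & 0xFFF           # low 12 bits
    p2 = (addr >> 12) & 0x3FF   # next 10 bits
    p1 = (addr >> 22) & 0x3FF   # top 10 bits
    return p1, p2, d

# 0x00403004 = outer page 1, inner page 3, offset 4
print(split_address(0x00403004))  # (1, 3, 4)
```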
Two-Level Page-Table Scheme
Address-Translation Scheme
• Address-translation scheme for a two-
level 32-bit paging architecture
Hashed Page Tables
• Common in address spaces > 32 bits.

• The virtual page number is hashed into a page table. This
page table contains a chain of elements hashing to the same
location.

• Virtual page numbers are compared in this chain, searching
for a match. If a match is found, the corresponding physical
frame is extracted.
Hashed Page Table
Inverted Page Table
• One entry for each real page of memory.
• Entry consists of the virtual address of the
page stored in that real memory location,
with information about the process that
owns that page.
• Decreases memory needed to store each
page table, but increases time needed to
search the table when a page reference
occurs.
• Use hash table to limit the search to one
— or at most a few — page-table entries.
Inverted Page Table Architecture
Shared Pages
• Shared code
—One copy of read-only (reentrant) code shared
among processes (i.e., text editors, compilers,
window systems).
—Shared code must appear in same location in
the logical address space of all processes.

• Private code and data
—Each process keeps a separate copy of the code and data.
—The pages for the private code and data can appear anywhere in
the logical address space.
Shared Pages Example
Segmentation
• All segments of all programs do not have
to be of the same length
• There is a maximum segment length
• Addressing consist of two parts - a
segment number and an offset
• Since segments are not equal,
segmentation is similar to dynamic
partitioning
Segmentation
• Paging is not (usually) visible to the
programmer
• Segmentation is visible to the
programmer
• Usually different segments allocated to
program and data
• May be a number of program and data
segments
Advantages of Segmentation
• Simplifies handling of growing data
structures
• Allows programs to be altered and
recompiled independently, without re-
linking and re-loading
• Lends itself to sharing among processes
• Lends itself to protection
• Some systems combine segmentation
with paging
Segmentation Hardware
Example :
Example of Segmentation
Memory Management Terms

Table 7.1 Memory Management Terms

Term      Description
Frame     Fixed-length block of main memory.
Page      Fixed-length block of data in secondary memory
          (e.g. on disk).
Segment   Variable-length block of data that resides in
          secondary memory.
Segmentation with Paging – MULTICS
• The MULTICS system solved problems of
external fragmentation and lengthy
search times by paging the segments.

• The solution differs from pure segmentation in that the
segment-table entry contains not the base address of the
segment, but rather the base address of a page table for this
segment.
MULTICS Address Translation Scheme
Virtual Memory
• Background
• Demand Paging
• Page Replacement
• Allocation of Frames
• Thrashing
• Operating System Examples
Terminology
Background
• Virtual memory – separation of user
logical memory from physical memory.
—Only part of the program needs to be in
memory for execution.
—Logical address space can therefore be much
larger than physical address space.
—Allows address spaces to be shared by several
processes.
—Allows for more efficient process creation.

• Virtual memory can be implemented via:
—Demand paging
—Demand segmentation
Virtual Memory That is Larger Than Physical
Memory
Demand Paging
• Bring a page into memory only when it is
needed.
—Less I/O needed
—Less memory needed
—Faster response
—More users

• Page is needed ⇒ reference to it
—invalid reference ⇒ abort
—not-in-memory ⇒ bring to memory
Transfer of a Paged Memory to Contiguous Disk
Space
Page Table When Some Pages Are Not in Main
Memory
Page Fault
• If there is ever a reference to a page, the first reference
will trap to the OS ⇒ page fault
• OS looks at another table to decide:
—Invalid reference ⇒ abort.
—Just not in memory.
• Get empty frame.
• Swap page into frame.
• Reset tables, validation bit = 1.
• Restart the instruction that was interrupted by the page
fault.
Steps in Handling a Page Fault
Performance of Demand Paging
• Page Fault Rate 0 ≤ p ≤ 1.0
—if p = 0 no page faults
—if p = 1, every reference is a fault

• Effective Access Time (EAT)
EAT = (1 – p) x memory access
    + p x (page fault overhead
           + [swap page out]
           + swap page in
           + restart overhead)
Demand Paging Example
• Memory access time = 1 microsecond

• 50% of the time the page that is being replaced has been
modified and therefore needs to be swapped out.

• Swap page time = 10 msec = 10,000 microseconds, so the
average page-fault service time is 15,000 microseconds.

EAT = (1 – p) x 1 + p x 15000
    ≈ 1 + 15000p (in microseconds)
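Plugging a fault rate into the formula above shows how quickly faults dominate. A Python sketch (the fault rate p = 0.001 is an illustrative value, not from the slides):

```python
def eat_us(p, access_us=1, fault_service_us=15000):
    # Average fault service: swap-in (10,000 us) always, plus a
    # swap-out (10,000 us) half the time -> 15,000 us on average.
    return (1 - p) * access_us + p * fault_service_us

# Even one fault per 1000 accesses inflates the 1 us access time
# to roughly 16 us:
print(round(eat_us(0.001), 3))
```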
What happens if there is no free frame?
• Page replacement – find some page in
memory, but not really in use, swap it
out.
—algorithm
—performance – want an algorithm which will
result in minimum number of page faults.

• The same page may be brought into memory several times.
Page Replacement
• Prevent over-allocation of memory by
modifying page-fault service routine to
include page replacement.

• Use modify (dirty) bit to reduce overhead of page transfers –
only modified pages are written to disk.

• Page replacement completes the separation between logical
memory and physical memory – a large virtual memory can be
provided on a smaller physical memory.
Basic Page Replacement
1. Find the location of the desired page on disk.

2. Find a free frame:
   - If there is a free frame, use it.
   - If there is no free frame, use a page replacement
     algorithm to select a victim frame.

3. Read the desired page into the (newly) free frame. Update
   the page and frame tables.
Page Replacement
Placement Policy

• Determines where in real memory a process piece is to reside
• Important in a segmentation system
• Paging or combined paging with segmentation hardware performs
address translation
Replacement Policy

• When all of the frames in main memory are occupied and it is
necessary to bring in a new page, the replacement policy
determines which page currently in memory is to be replaced.
But…

• Which page is replaced?
• The page removed should be the page least likely to be
referenced in the near future
—How is that determined?
—The principle of locality again
• Most policies predict future behavior on the basis of past
behavior
Replacement Policy:
Frame Locking

• Frame Locking
—If frame is locked, it may not be replaced
—Kernel of the operating system
—Key control structures
—I/O buffers
—Associate a lock bit with each frame
Basic Replacement
Algorithms

• There are certain basic algorithms that are used for the
selection of a page to replace; they include
—Optimal
—Least recently used (LRU)
—First-in-first-out (FIFO)
—Clock
• Examples
Page Replacement Algorithms
• Want lowest page-fault rate.
• Evaluate algorithm by running it on a
particular string of memory references
(reference string) and computing the
number of page faults on that string.
• In all our examples, the reference string is
1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5.
Graph of Page Faults Versus The Number of
Frames
First-In-First-Out (FIFO) Algorithm
• Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
• 3 frames (3 pages can be in memory at a time per process):
9 page faults
• 4 frames: 10 page faults
• FIFO Replacement exhibits Belady's Anomaly:
—more frames can result in more page faults
FIFO Page Replacement
FIFO Illustrating Belady's Anomaly
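The FIFO fault counts can be reproduced with a short simulation. This Python sketch uses the slide's reference string; with 3 frames it produces 9 faults and with 4 frames 10 faults, which is Belady's anomaly (adding frames increases the fault count):

```python
from collections import deque

def fifo_faults(refs, nframes):
    frames, order, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.discard(order.popleft())  # evict the oldest page
            frames.add(page)
            order.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9
print(fifo_faults(refs, 4))  # 10 -- more frames, more faults
```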
Optimal Algorithm
• Replace the page that will not be used for the longest period
of time.
• 4 frames example:
1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 → 6 page faults
• How do you know this?
• Used for measuring how well your algorithm performs.
Optimal Page Replacement
Least Recently Used (LRU) Algorithm
• Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 →
8 page faults with 4 frames

• Counter implementation
—Every page entry has a counter; every time the page is
referenced through this entry, copy the clock into the counter.
—When a page needs to be changed, look at the counters to
determine which to change.
LRU Page Replacement
LRU Algorithm (Cont.)
• Stack implementation – keep a stack of page numbers in a
doubly linked form:
—Page referenced:
– move it to the top
– requires 6 pointers to be changed
—No search for replacement
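The stack idea above (most recently used page on top, evict from the bottom) can be sketched with an ordinary Python list standing in for the doubly linked stack; on the slides' reference string with 4 frames it yields 8 page faults:

```python
def lru_faults(refs, nframes):
    stack, faults = [], 0   # stack[0] = most recently used page
    for page in refs:
        if page in stack:
            stack.remove(page)      # hit: page moves to the top
        else:
            faults += 1
            if len(stack) == nframes:
                stack.pop()         # evict least recently used (bottom)
        stack.insert(0, page)       # referenced page goes on top
    return faults

print(lru_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 4))  # 8
```

A real implementation would use a doubly linked list (or an ordered hash map) so the move-to-top is O(1) instead of the O(n) `remove` used in this sketch.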
Examples II

• An example of the implementation of these policies will use a
page address stream, formed by executing the program, of
—2 3 2 1 5 2 4 5 3 2 5 2
• This means that the first page referenced is 2,
—the second page referenced is 3,
—and so on.
Optimal policy

• Selects for replacement that page for which the time to the
next reference is the longest
• But it is impossible to have perfect knowledge of future
events
Optimal Policy
Example

• The optimal policy produces three page faults after the frame
allocation has been filled.
Least Recently
Used (LRU)

• Replaces the page that has not been referenced for the
longest time
• By the principle of locality, this should be the page least
likely to be referenced in the near future
• Difficult to implement
—One approach is to tag each page with the time of its last
reference.
—This requires a great deal of overhead.
LRU Example

• The LRU policy does nearly as well as the optimal policy.
—In this example, there are four page faults.
First-in, first-out (FIFO)

• Treats page frames allocated to a process as a circular
buffer
• Pages are removed in round-robin style
—Simplest replacement policy to implement
• The page that has been in memory the longest is replaced
—But these pages may be needed again very soon if they haven't
truly fallen out of use
FIFO Example

• The FIFO policy results in six page faults.
—Note that LRU recognizes that pages 2 and 5 are referenced
more frequently than other pages, whereas FIFO does not.
Clock Policy

• Uses an additional bit called a "use bit"
• When a page is first loaded in memory or referenced, the use
bit is set to 1
• When it is time to replace a page, the OS scans the frames,
flipping each use bit from 1 to 0 as it passes
• The first frame encountered with the use bit already set to 0
is replaced.
Clock Policy Example

• Note that the clock policy is adept at protecting frames 2
and 5 from replacement.
Clock Policy
Combined Examples
Comparison
Thrashing
• If a process does not have "enough" pages, the page-fault
rate is very high. This leads to:
—low CPU utilization.
—the operating system thinking that it needs to increase the
degree of multiprogramming.
—another process being added to the system.

• Thrashing ≡ a process is busy swapping pages in and out.
Thrashing

• Why does paging work?
Locality model
—Process migrates from one locality to another.
—Localities may overlap.
• Why does thrashing occur?
Σ size of localities > total memory size
Required Reading

• Stallings, W. [2004] Operating Systems, Pearson.
• Loads of Web sites on Operating Systems.
• Silberschatz, A., Galvin, P. and Gagne, G., Operating System
Concepts, Sixth Edition.
THANK YOU
