
Understanding Operating Systems, Seventh Edition 3-1

Chapter 3
Memory Management: Virtual Memory Systems
A Guide to this Instructor’s Manual:

We have designed this Instructor’s Manual to supplement and enhance your teaching
experience through classroom activities and a cohesive chapter summary.

This document is organized chronologically, using the same headings that you see in the
textbook. Under the headings you will find: lecture notes that summarize the section, Teacher
Tips, Classroom Activities, and Lab Activities. Pay special attention to teaching tips and
activities geared towards quizzing your students and enhancing their critical thinking skills.

In addition to this Instructor’s Manual, our Instructor’s Resources also contain PowerPoint
Presentations, Test Banks, and other supplements to aid in your teaching experience.

At a Glance

Instructor’s Manual Table of Contents


• Overview

• Objectives

• Teaching Tips

• Quick Quizzes

• Class Discussion Topics

• Additional Projects

• Additional Resources

• Key Terms

Lecture Notes

Overview
In this chapter, students will follow the evolution of memory management with four new
memory allocation schemes. These remove the restriction of storing the programs
contiguously, and most of them eliminate the requirement that the entire program reside in
memory during its execution. Our concluding discussion of cache memory will demonstrate
how its use improves the performance of the Memory Manager.

Learning Objectives
After completing this chapter, the student should be able to describe:

• The basic functionality of the memory allocation methods covered in this chapter:
paged, demand paging, segmented, and segmented/demand paged memory allocation
• The influence that these page allocation methods have had on virtual memory
• The difference between a first-in first-out page replacement policy, a least-recently-
used page replacement policy, and a clock page replacement policy
• The mechanics of paging and how a memory allocation scheme determines which pages
should be swapped out of memory
• The concept of the working set and how it is used in memory allocation schemes
• Cache memory and its role in improving system response time

Teaching Tips
Paged Memory Allocation

1. Begin the discussion by introducing the terms page, page frame, and sector.

2. Note that before executing a program, a basic Memory Manager prepares it by:
 Determining the number of pages in the program
 Locating enough empty page frames in the main memory
 Loading all of the program’s pages into them
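If helpful, the first preparation step can be sketched in a few lines of Python. The 100-byte page size matches the chapter's examples; the function name and job size are illustrative, not from the textbook:

```python
import math

PAGE_SIZE = 100  # bytes; the chapter's examples use 100-byte pages

def pages_needed(job_size: int) -> int:
    """Pages required for a job, rounding up for a partially filled last page."""
    return math.ceil(job_size / PAGE_SIZE)

print(pages_needed(350))  # a 350-byte job needs 4 pages; the last holds only 50 bytes
```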

3. Discuss the primary advantage of storing programs in noncontiguous page frames.

4. Use Figure 3.1 and Tables 3.1 and 3.2 to discuss how the Memory Manager keeps track
of a program that is four pages long. Notice in Figure 3.1 that the last page frame (Page
Frame 11) is not fully utilized because Page 3 is less than 100 bytes.

5. Figure 3.1 uses arrows and lines to show how a job’s pages fit into page frames in
memory, but the Memory Manager uses tables to keep track of them. Note the three
tables that perform this function: the Job Table, the Page Map Table, and the Memory
Map Table.

6. Introduce the terms Job Table and Page Map Table (PMT), as used in Table 3.1.

7. Introduce the term displacement. Use Figure 3.2 and the example presented on page 63
to aid the discussion.

8. Point out that the paged memory allocation algorithm needs to be expanded to find the
exact location of a byte in main memory. To do so, we need to correlate each of the
job’s pages with its page frame number using the job’s Page Map Table. Use the
example on pages 64 - 65 and Table 3.2 to discuss the steps involved.

9. Another example, on page 65, follows the hardware and the operating system as an
assembly language program executes a LOAD instruction. Students should
understand how the system finds the exact location of Byte 518 (where the system will
find the value to load into Register 1). Use Figure 3.3 to aid the discussion.

10. Introduce the term address resolution.
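Address resolution can be demonstrated with a short sketch. The 512-byte page size and the Page Map Table values below are assumptions for illustration, not the textbook's exact figures:

```python
PAGE_SIZE = 512  # assumed page size for this sketch

# Illustrative Page Map Table: page number -> page frame number
pmt = {0: 5, 1: 3, 2: 7}

def resolve(address: int) -> int:
    """Translate a job-relative byte address to a main memory address."""
    page, displacement = divmod(address, PAGE_SIZE)  # page number, offset within it
    frame = pmt[page]                                # PMT lookup
    return frame * PAGE_SIZE + displacement          # start of frame + displacement

print(resolve(518))  # Byte 518 is page 1, displacement 6 -> 3 * 512 + 6 = 1542
```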

11. Discuss the advantages and disadvantages of a paging scheme.

12. Point out that the key to the success of this scheme is the size of the page.

Teaching Tip: Remind the students that in the examples presented throughout this chapter,
page numbering and page frame numbering begin with zero – not one! This is true
throughout much of computer science.

Demand Paging Memory Allocation


1. Demand paging introduced the concept of loading only a part of the program into
memory for processing. Note that it was the first widely used scheme that removed the
restriction of having the entire job in memory from the beginning to the end of its
processing.

2. Demand paging takes advantage of the fact that programs are written sequentially such
that while one section, or module, is processed, other modules may be idle. Point out
that not all the pages are accessed at the same time, or even sequentially. Use examples
to aid the discussion.

3. Note that the most important innovation of demand paging was that it made virtual
memory widely available.

4. How and when the pages are passed between main memory and secondary storage
depends on predefined policies that determine when to make room for needed pages and
how to do so. Note that the operating system relies on tables (such as the Job Table, the
Page Map Tables, and the Memory Map Table) to implement the algorithm.

5. Point out that with demand paging, there are three new fields for each page in each
PMT: the memory field, the modified field, and the referenced field. Use Figure 3.5 to
aid the discussion.

6. Introduce the terms page fault handler, page swapping, and thrashing. Use Figure 3.6
to aid the discussion.

Page Replacement Policies and Concepts


1. Remind students that the policy that selects the page to be removed, the page
replacement policy, is crucial to the efficiency of the system, and the algorithm to do
that must be carefully selected.

2. Introduce the terms first-in first-out (FIFO) policy and least recently used (LRU)
policy.

First-In First-Out

1. Use Figure 3.7 to discuss the process of swapping pages.

2. Use Figure 3.8 to demonstrate how the FIFO algorithm works by following a job with
four pages (A, B, C, and D) as it is processed by a system with only two available page
frames.

3. Introduce the term page interrupt.

4. Make sure students understand how to calculate the failure rate.
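The failure-rate calculation can be reinforced with a small FIFO simulation. The request stream below is illustrative; substitute the exact stream from Figure 3.8 when presenting:

```python
from collections import deque

def fifo_failure_rate(requests, num_frames):
    """Fraction of page requests that cause a page interrupt under FIFO."""
    frames = deque()
    faults = 0
    for page in requests:
        if page not in frames:
            faults += 1                    # page interrupt: page is not resident
            if len(frames) == num_frames:
                frames.popleft()           # evict the page loaded earliest
            frames.append(page)
    return faults / len(requests)

# Pages A-D competing for two page frames:
print(fifo_failure_rate("ABACABDBACD", 2))  # 9 faults / 11 requests, about 0.82
```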

Least Recently Used

1. The least recently used (LRU) page replacement policy swaps out the pages that show
the least recent activity, figuring that these pages are the least likely to be used again in
the immediate future. Illustrate how this works by following the same job in Figure 3.8
but using the LRU policy. The results are shown in Figure 3.9.
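An LRU version of the same kind of simulation can be shown side by side; on this illustrative stream with two frames, LRU causes one fewer page interrupt than FIFO would:

```python
def lru_failure_rate(requests, num_frames):
    """Fraction of page requests that cause a page interrupt under LRU."""
    frames = []  # ordered least -> most recently used
    faults = 0
    for page in requests:
        if page in frames:
            frames.remove(page)            # a hit refreshes the page's recency
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.pop(0)              # evict the least recently used page
        frames.append(page)                # this page is now the most recent
    return faults / len(requests)

print(lru_failure_rate("ABACABDBACD", 2))  # 8 faults / 11 requests, about 0.73
```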

2. Introduce the term FIFO Anomaly (Belady Anomaly).

Clock Replacement Variation

1. Explain a variation of the LRU technique, known as the clock page replacement policy,
which is paced according to the computer’s clock cycle. Use Figure 3.10 to aid the
discussion.

Bit Shifting Variation

1. A second variation of LRU uses an 8-bit reference byte and a bit-shifting technique to
track the usage of each page currently in memory. When the page is first copied into
memory, the leftmost bit of its reference byte is set to 1, and all bits to the right of the
one are set to zero. Use Figure 3.11 to aid the discussion.
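The bit-shifting technique can be made concrete with a short sketch (the function and variable names are illustrative). After each interval, a larger reference byte means more recent use, so the page with the smallest value is the LRU candidate:

```python
def age(ref_bytes, referenced):
    """Shift each page's 8-bit reference byte right one place per interval,
    setting the leftmost bit for pages referenced during that interval."""
    for page in ref_bytes:
        ref_bytes[page] >>= 1                # one interval passes
        if page in referenced:
            ref_bytes[page] |= 0b10000000    # mark: referenced this interval

refs = {"A": 0b10000000, "B": 0b10000000}    # both pages just loaded
age(refs, referenced={"A"})                  # A was used; B sat idle
print(f"{refs['A']:08b} {refs['B']:08b}")    # 11000000 01000000 -> B is the LRU victim
```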

Quick Quiz 1
1. Which of the following is the correct way of calculating the starting address of the page
frame?
a. Multiply the page frame number by the page frame size.
b. Divide the page frame size by the page frame number.
c. Add the page frame number and the page frame size.
d. Multiply the page frame number by the displacement.
Answer: a

2. Which of the following are disadvantages of demand paged memory allocation?
(Choose all that apply.)
a. The overhead required to manage the tables
b. The time required to reference the tables
c. The limitations of job size based on memory size
d. The lack of multiprogramming ability
Answer: a and b

3. When there is an excessive amount of page swapping between main memory and
secondary storage, the operation becomes inefficient. This phenomenon is called ____.
Answer: thrashing

4. The ____ policy is based on the assumption that the best page to remove is the one that
has been in memory the longest.
Answer: first-in first-out (FIFO)

The Mechanics of Paging

1. Before the Memory Manager can determine which pages will be swapped out, it needs
specific information about each page in memory. This information is included in the
Page Map Tables. Use Figure 3.5 and Table 3.3 to aid the discussion.

2. Note that each Page Map Table must track each page’s status, modifications, and
references. It does so with three bits, each of which can be either 0 or 1: the status bit,
the referenced bit, and the modified bit. Use Table 3.4 and 3.5 to aid the discussion.

3. Point out that the FIFO algorithm uses only the modified and status bits when swapping
pages (because it does not matter to FIFO how recently they were referenced), but the
LRU looks at all three before deciding which pages to swap out.

4. Discuss the order in which the LRU policy would choose to swap pages and how it
handles those that have been modified without being referenced.

The Working Set

1. Introduce the term working set.

2. Note that typically, a job’s working set changes as the job moves through the system:
one working set could be used to initialize the job, another could work through repeated
calculations, another might interact with output devices, and a final set could close the
job. Use Figure 3.12 to aid the discussion.

3. Introduce the term locality of reference.

4. Explain that it would be convenient if all of the pages in a job’s working set were
loaded into memory at one time to minimize the number of page faults and to speed up
processing, but that this is easier said than done. To do so, the system needs definitive
answers to some difficult questions: How many pages comprise the working set? What
is the maximum number of pages the operating system will allow for a working set?
Use Figure 3.13 to aid the discussion.
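One simple way to make the working set concrete is to treat it as the distinct pages touched during the most recent references. The window size below is an assumed tuning parameter, which is exactly the difficult question raised above:

```python
def working_set(reference_string, window):
    """Distinct pages referenced in the most recent `window` references."""
    return set(reference_string[-window:])

refs = [1, 2, 1, 3, 2, 2, 5, 2, 5]  # an illustrative page reference string
print(working_set(refs, window=4))  # {2, 5}: only these pages were touched recently
```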

5. Discuss the advantages and disadvantages of demand paging.

Segmented Memory Allocation


1. Introduce the terms segmented memory allocation, segments, and subroutine.

2. Discuss the difference between a paging scheme and segmented memory allocation.

3. When a program is compiled or assembled, the segments are set up according to the
program’s structural modules. Each segment is numbered and a Segment Map Table
(SMT) is generated for each job; it contains the segment numbers, their lengths, access
rights, status, and (when each is loaded into memory) its location in memory. Use
Figures 3.14 and 3.15 to aid the discussion.
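Two-dimensional segment addressing can be sketched as an SMT lookup plus a bounds check; the base addresses and segment sizes here are invented for illustration:

```python
# Illustrative Segment Map Table: segment number -> location and length
smt = {0: {"base": 4000, "size": 350},
       1: {"base": 7000, "size": 200}}

def resolve_segment(segment: int, displacement: int) -> int:
    """Translate a (segment, displacement) pair to a main memory address."""
    entry = smt[segment]
    if displacement >= entry["size"]:
        raise ValueError("displacement falls outside the segment")  # protection check
    return entry["base"] + displacement

print(resolve_segment(1, 76))  # 7000 + 76 = 7076
```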

4. Make sure students understand how the Memory Manager keeps track of the segments
in memory. Use Figure 3.16 to aid the discussion.

5. Point out that the disadvantage of any allocation scheme in which memory is partitioned
dynamically is the return of external fragmentation. Therefore, if that schema is used,
recompaction of available memory is necessary from time to time.

Teaching Tip: Students should be aware that in this scheme, segments do not need to be
stored contiguously. Point out that the addressing scheme is two-dimensional,
referencing the segment number and the displacement.

Segmented/Demand Paged Memory Allocation


1. The segmented/demand paged memory allocation scheme evolved from the two we
have just discussed. It is a combination of segmentation and demand paging, and it
offers the logical benefits of segmentation, as well as the physical benefits of paging.

2. Use Figure 3.17 to illustrate this scheme. Note that it requires four tables:
 The Job Table
 The Segment Map Table
 The Page Map Table
 The Memory Map Table

3. Discuss the major disadvantages of this memory allocation scheme.

4. Introduce the term associative memory. Use Figure 3.18 to aid the discussion.

5. To appreciate the role of associative memory, it is important for students to
understand how the system works with segments and pages. Discuss the steps
involved in a typical procedure.

6. Discuss the advantages and disadvantages of a large associative memory.

Teaching Tip: Discuss the reasons why many of the problems found in segmentation are
removed in this scheme.

Virtual Memory
1. Virtual memory became possible with the capability of moving pages at will between
main memory and secondary storage, and it effectively removed restrictions on
maximum program size.

2. Note that virtual memory can be implemented with both paging and segmentation, as
seen in Table 3.6.

3. Segmentation allows users to share program code. Point out that the shared segment
contains: (1) an area where unchangeable code (called reentrant code) is stored, and (2)
several data areas, one for each user.

4. The use of virtual memory requires cooperation between the Memory Manager (which
tracks each page or segment) and the processor hardware (which issues the interrupt and
resolves the virtual address). Use examples to aid the discussion.

5. Discuss the advantages and disadvantages of virtual memory management.



Teaching Tip: Refer to the following Web site to learn more about virtual memory in the
Red Hat Linux operating system:
http://web.mit.edu/rhel-doc/4/RH-DOCS/rhel-isa-en-4/s1-memory-concepts.html

Cache Memory
1. Cache memory is based on the concept of using a small, fast, and expensive memory to
supplement the workings of main memory. Because the cache is usually small in
capacity (compared to main memory), it can use more expensive memory chips. Note
that these are five to ten times faster than main memory and match the speed of the
CPU.

2. Use Figure 3.19 to compare the traditional path used by early computers between main
memory and the CPU and the path used by modern computers to connect the main
memory and the CPU via cache memory.

3. Note that a typical microprocessor has two or more levels of caches, such as Level 1
(L1), Level 2 (L2), and Level 3 (L3), as well as specialized caches.

4. Make sure students understand the relationship between main memory and cache
memory.

5. When designing cache memory, one must take into consideration the following four
factors:
 Cache size
 Block size
 Block replacement algorithm
 Rewrite policy

6. Introduce the terms cache hit ratio and average memory access time
(Avg_Mem_AccTime). Use examples to aid the discussion.
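A worked example helps here. The sketch below uses a weighted-average formulation of average memory access time (an assumption on my part; present the textbook's exact equation alongside it), with illustrative timing values:

```python
def avg_mem_access_time(hit_ratio, cache_ns, memory_ns):
    """Average access time: hits served at cache speed, misses at memory speed."""
    return hit_ratio * cache_ns + (1 - hit_ratio) * memory_ns

# 75% hit ratio, 10 ns cache, 100 ns main memory:
print(avg_mem_access_time(0.75, 10, 100))  # 0.75*10 + 0.25*100 = 32.5 ns
```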

Teaching Tip: Refer to the following Web site to learn more about cache memory:
www.informit.com/articles/article.asp?p=30422&seqNum=5

Quick Quiz 2
1. Which of the following memory management schemes solved internal fragmentation?
a. Paged memory allocation
b. Fixed partition
c. Segmented memory allocation
d. Parallel partition
Answer: c

2. Which of the following concepts is best at preventing page faults?
a. Paging
b. The working set
c. Hit ratios
d. Address location resolution
Answer: b

3. ____ is a small high-speed memory unit that a processor can access more rapidly than
main memory.
Answer: Cache memory

4. ____ is the algorithm that is often chosen for block replacement.
Answer: Least recently used (LRU)

Class Discussion Topics


1. Discuss the pros and cons of the memory management schemes presented in this
chapter. If given an option, which scheme would you implement and why?

2. Discuss the difference between associative memory and cache memory and the steps
involved in using these types of memory.

Additional Projects
1. Choose an operating system and submit a report on approaches to manage its virtual
memory.

2. Working in teams of four, write pseudocode for the first-in first-out (FIFO) and least-
recently-used (LRU) page replacement policies.

Additional Resources
1. Virtual Memory: www.howstuffworks.com/virtual-memory.htm

2. RAM, Virtual Memory, Pagefile and all that stuff:
http://support.microsoft.com/kb/555223

3. Page replacement policy:
http://www.slideshare.net/sashi799/page-replacement-5025792

4. The LRU page replacement policy:
http://www.mathcs.emory.edu/~cheung/Courses/355/Syllabus/9-virtual-mem/LRU-replace.html

Key Terms
 address resolution: the process of changing the address of an instruction or data item
to the address in main memory at which it is to be loaded or relocated.
 associative memory: the name given to several registers, allocated to each active
process, whose contents associate several of the process segments and page numbers
with their main memory addresses.
 cache memory: a small, fast memory used to hold selected data and to provide faster
access than would otherwise be possible.
 clock cycle: the elapsed time between two ticks of the computer’s system clock.
 clock page replacement policy: a variation of the LRU policy that removes from main
memory the pages that show the least amount of activity during recent clock cycles.
 demand paging: a memory allocation scheme that loads a program’s page into memory
at the time it is needed for processing.
 displacement: in a paged or segmented memory allocation environment, the difference
between a page’s relative address and the actual machine language address. Also called
offset.
 FIFO anomaly: an unusual circumstance through which adding more page frames
causes an increase in page interrupts when using a FIFO page replacement policy.
 first-in first-out (FIFO) policy: a page replacement policy that removes from main
memory the pages that were brought in first.
 Job Table (JT): a table in main memory that contains two values for each active job—
the size of the job and the memory location where its page map table is stored.
 least recently used (LRU) policy: a page-replacement policy that removes from main
memory the pages that show the least amount of recent activity.
 locality of reference: behavior observed in many executing programs in which memory
locations recently referenced, and those near them, are likely to be referenced in the
near future.
 Memory Map Table (MMT): a table in main memory that contains an entry for each
page frame, giving the location and free/busy status of each one.
 page fault handler: the part of the Memory Manager that determines if there are empty
page frames in memory so that the requested page can be immediately copied from
secondary storage, or determines which page must be swapped out if all page frames are
busy. Also known as a page interrupt handler.
 page fault: a type of hardware interrupt caused by a reference to a page not residing in
memory. The effect is to bring the needed page into main memory, moving another
page out to secondary storage if all page frames are busy.
 page frame: an individual section of main memory of uniform size into which a single
page may be loaded without causing external fragmentation.
 Page Map Table (PMT): a table in main memory with the vital information for each
page including the page number and its corresponding page frame memory address.
 page replacement policy: an algorithm used by virtual memory systems to decide
which page or segment to remove from main memory when a page frame is needed and
memory is full.
 page swapping: the process of moving a page out of main memory and into secondary
storage so another page can be moved into memory in its place.
 page: a fixed-size section of a user’s job that corresponds in size to page frames in main
memory.

 paged memory allocation: a memory allocation scheme based on the concept of
dividing a user’s job into sections of equal size to allow for noncontiguous program
storage during execution.
 reentrant code: code that can be used by two or more processes at the same time; each
shares the same copy of the executable code but has separate data areas.
 sector: a division in a magnetic disk’s track, sometimes called a “block.” The tracks are
divided into sectors during the formatting process.
 Segment Map Table (SMT): a table in main memory with the vital information for
each segment including the segment number and its corresponding memory address.
 segment: a variable-size section of a user’s job that contains a logical grouping of code.
 segmented memory allocation: a memory allocation scheme based on the concept of
dividing a user’s job into logical groupings of code to allow for noncontiguous program
storage during execution.
 segmented/demand paged memory allocation: a memory allocation scheme based on
the concept of dividing a user’s job into logical groupings of code and loading them into
memory as needed to minimize fragmentation.
 subroutine: also called a “subprogram,” a segment of a program that can perform a
specific function. Subroutines can reduce programming time when a specific function is
required at more than one point in a program.
 thrashing: a phenomenon in a virtual memory system where an excessive amount of
page swapping back and forth between main memory and secondary storage results in
higher overhead and little useful work.
 virtual memory: a technique that allows programs to be executed even though they are
not stored entirely in memory.
 working set: a collection of pages to be kept in main memory for each active process in
a virtual memory environment.
