
Overview of Cache Memory, Mapping Functions, Replacement Algorithms, Performance

Considerations, Overview of Virtual Memory, Virtual Memory Organisation, Address Translation.

The unit begins by discussing the concept of cache memory. Next, the unit discusses the mapping
function and replacement algorithms. Then, the unit discusses the performance considerations.
Further, the unit discusses the concept of virtual memory. This unit also discusses the virtual memory
organisation. Towards the end, the unit discusses the address translation in virtual memory.

In this unit, you will learn to:


• Explain the concept of cache memory
• Discuss the mapping function and replacement algorithms
• Explain the concept of virtual memory
• Describe the concept of virtual memory organisation
• Discuss the address translation in virtual memory
At the end of this unit, you will be able to:
• Evaluate the importance of cache memory
• Analyse the concept of mapping function and replacement algorithms
• Assess the importance of virtual memory
• Analyse the significance of virtual memory organisation
• Evaluate the use of address translation in virtual memory

• https://www.cs.umd.edu/~meesh/cmsc411/website/proj01/cache/cache.pdf

Cache Memory

Cache memory is a type of memory that operates at extremely fast speeds. It’s used to boost performance
and synchronise with high-speed processors. Although cache memory is more expensive than main
memory or disc memory, it is less expensive than CPU registers. Cache memory is a form of memory
that works as a buffer between the RAM and the CPU and is highly fast. It stores frequently requested
data and instructions so that they may be accessed quickly by the CPU.

A computer’s CPU can generally process instructions and data faster than it can fetch them from
a low-cost main memory unit. As a result, the memory cycle time becomes the system’s bottleneck.
Using cache memory is one technique to minimise memory access time. This is a tiny, quick memory
that sits between the CPU and the bigger, slower main memory. This is where a programme’s presently
active segments and data are stored. Because address references are local, the CPU can usually find the
relevant information in the cache memory itself (cache hit) and only needs access to the main memory
infrequently (cache miss). With a large enough cache memory, cache hit rates of over 90% are possible,
resulting in a cost-effective increase in system performance.
The usage of cache memory reduces the average time it takes to access data from the main memory.
The cache is a more compact and quicker memory that stores copies of data from frequently accessed
main memory locations. In a CPU, there are several distinct, independent caches that store instructions
and data. Figure 1 shows the structure of cache memory:

[Figure 1: the cache sits between the CPU and the primary (main) memory, with the secondary memory beyond it]

One related technique, memory interleaving, splits the memory system into a number of memory modules and organises the addressing so that consecutive words in the address space are assigned to distinct modules. Memory access requests involving successive addresses are then sent to separate modules. Since these modules can be accessed in parallel, the average rate of obtaining words from the main memory is improved.
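The module-selection arithmetic can be made concrete with a short sketch. This is an illustrative low-order interleaving scheme, not taken from the unit; the module count of four is an assumption:

```python
# A minimal sketch of low-order interleaving: with M modules, consecutive
# word addresses fall in different modules, so they can be accessed in parallel.
M = 4  # number of memory modules (assumed for illustration)

def module_and_offset(address):
    """Return (module number, word offset within that module)."""
    return address % M, address // M

for addr in range(8):
    print(addr, module_and_offset(addr))
# Addresses 0, 1, 2, 3 map to modules 0, 1, 2, 3; address 4 wraps back to module 0.
```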

Mapping Functions

Various types of mapping functions that are used by cache memory are as follows:
• Direct Mapping
• Associative Mapping
• Set-associative Mapping

Direct mapping is the most basic method: it maps each block of main memory into only one possible cache line. Each memory block is assigned to a specific line in the cache. If a line is already occupied when a new block must be loaded, the old block is overwritten. An address is divided into two parts: an index field and a tag field. The tag is stored in the cache along with the data, and the index selects the cache line. The performance of direct mapping is directly related to the hit ratio. The following formula is used to express this mapping:
i = j mod m
where,
i = cache line number
j = main memory block number
m = number of lines in the cache
Each main memory address may be thought of as having three fields for the purposes of cache access. The least significant w bits identify a unique word or byte within a block of main memory; in most modern computers, the address is at the byte level. The remaining s bits designate one of the main memory’s 2^s blocks. The cache logic interprets these s bits as a tag of s - r bits (the most significant portion) and an r-bit line field. This last field identifies one of the cache’s m = 2^r lines. Figure 2 shows the direct mapping of cache memory:

[Figure 2: direct mapping; the memory address from the processor is split into tag and index, the index selects a cache line, the stored tag is compared with the address tag, the location is accessed on a match, and main memory is accessed if the tags do not match]
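As a rough illustration of the field splitting described above, the following Python sketch decomposes an address into tag, line, and word fields; the concrete field widths are assumptions chosen for the example:

```python
# A hedged sketch of direct-mapped address decomposition. The field widths
# (w offset bits, r line bits, s-r tag bits) follow the text; the concrete
# sizes below are illustrative assumptions, not values from the unit.
W = 2   # w: word/byte bits -> block size 2**w = 4
R = 3   # r: line bits      -> m = 2**r = 8 cache lines
S = 7   # s: block-address bits, so the tag is s - r = 4 bits

def split_address(addr):
    word  = addr & ((1 << W) - 1)          # least significant w bits
    block = addr >> W                      # remaining s bits: block number j
    line  = block % (1 << R)               # i = j mod m
    tag   = block >> R                     # most significant s - r bits
    return tag, line, word

tag, line, word = split_address(0b101101110)
print(tag, line, word)   # block 0b1011011 -> tag 11, line 3, word 2
```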
In associative mapping, an associative memory is used to store both the content and the address of each memory word. Any block can be placed in any cache line. The word-offset bits are used to determine which word in the block is required, and all of the remaining bits form the tag. Because any word can be placed anywhere in the cache memory, this is said to be the fastest and most flexible mapping method.
Figure 3 shows the associative mapping of cache memory:

[Figure 3: associative mapping; the memory address from the processor is compared simultaneously with all addresses stored in the cache, the location is accessed if the address is found, and main memory is accessed if the address is not in the cache]
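A dictionary keyed by the full tag can stand in for the parallel tag comparison; this is a minimal sketch of the idea, not a hardware-accurate model:

```python
# A minimal sketch of associative mapping: the whole block address serves as
# the tag, and a lookup compares against all stored tags at once. A Python
# dict stands in for the parallel comparison hardware; sizes are assumptions.
W = 2                       # word/byte offset bits within a block
cache = {}                  # tag -> block data; any block may go in any line

def access(addr, memory):
    tag, word = addr >> W, addr & ((1 << W) - 1)
    if tag in cache:                      # "compare with all stored tags"
        return cache[tag][word], "hit"
    block = memory[tag]                   # miss: fetch the whole block
    cache[tag] = block
    return block[word], "miss"

# In a fully associative cache the tag IS the block number, so the toy
# memory below is keyed by block number.
memory = {0: [10, 11, 12, 13], 5: [50, 51, 52, 53]}
print(access(0b10110, memory))   # tag 5, word 2 -> (52, 'miss')
print(access(0b10110, memory))   # same address again -> (52, 'hit')
```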

Set-associative mapping is an improved version of direct mapping that eliminates its disadvantages. The concern of potential thrashing in the direct mapping approach is addressed: rather than having exactly one line in the cache to which a block can map, a few lines are combined together to form a set, and a memory block can then correspond to any one of the lines in its set. Thanks to set-associative mapping, two or more blocks of main memory that share the same index can reside in the cache at the same time. This technique thus combines the benefits of both direct and associative cache mapping. The following formulas are used to express this mapping:
m = v * k
i = j mod v
where,
i = cache set number
j = main memory block number
v = number of sets
m = number of lines in the cache
k = number of lines in each set
Figure 4 shows the set-associative mapping of cache memory:

[Figure 4: set-associative mapping; the index selects a set of cache lines, the tags of all lines in the set are compared with the address tag, the word is accessed on a match, and main memory is accessed if no tags match]
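The formulas above can be exercised with a small sketch; the cache geometry (v = 4 sets, k = 2 lines per set) is an assumption chosen for illustration:

```python
# A hedged sketch of set-associative lookup using the unit's formulas
# m = v * k and i = j mod v.
V, K = 4, 2                 # v sets, k lines per set (so m = 8 lines)
W = 2                       # offset bits within a block
sets = [dict() for _ in range(V)]   # each set: tag -> block data

def lookup(addr):
    block = addr >> W               # main memory block number j
    i = block % V                   # set number i = j mod v
    tag = block // V
    if tag in sets[i]:
        return "hit", i
    if len(sets[i]) >= K:           # set full: evict one line (policy-free here)
        sets[i].pop(next(iter(sets[i])))
    sets[i][tag] = None             # load the block (data omitted)
    return "miss", i

print(lookup(0b110100))   # block 13, set 13 mod 4 = 1 -> ('miss', 1)
print(lookup(0b110100))   # -> ('hit', 1)
```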

Replacement Algorithms

Page replacement is fundamental to demand paging, and there are many page replacement algorithms. An algorithm is assessed by executing it on a particular sequence of memory references for a process and counting the number of page faults. The sequence of memory references is called a reference string.
Some of the most widely used page replacement mechanisms are discussed as follows:
• Random replacement: Refers to the policy in which the replaced page is chosen at random; the memory manager randomly chooses any loaded page. Because the policy selects the page to be replaced from any frame with equal probability, it uses no knowledge of the reference stream (or the locality) when it selects the page frame to replace. In general, random replacement does not perform well. On most reference streams, it causes more page faults than the other algorithms discussed in this section. After early exploration with random replacement, it was recognised that several other policies would produce fewer page faults.
• First-in-first-out (FIFO): Refers to the replacement algorithm that replaces the page that has been in the memory the longest. FIFO emphasises how long a page has been present in the memory rather than how much the page is being used. The advantage of FIFO is that it is simple to implement.
A FIFO replacement algorithm associates with each page the time when it was brought into the memory. When there is a need for page replacement, the oldest page is chosen for replacement. A FIFO queue can be created that holds all pages brought into the memory. The page at the head of the queue is replaced, and when a page is brought into the memory, it is inserted at the tail of the queue. A minimal sketch follows.
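The sketch below simulates the FIFO policy just described; it uses the 16-reference string from the optimal-algorithm example later in this section, so the two policies can be compared:

```python
from collections import deque

# A minimal sketch of FIFO page replacement with a fixed number of frames.
def fifo_faults(reference_string, frames):
    queue, resident, faults = deque(), set(), 0
    for page in reference_string:
        if page not in resident:
            faults += 1
            if len(resident) == frames:          # evict the oldest page
                resident.discard(queue.popleft())
            queue.append(page)                   # newcomer joins at the tail
            resident.add(page)
    return faults

refs = [2, 1, 3, 4, 2, 1, 3, 4, 2, 1, 3, 4, 5, 6, 7, 8]
print(fifo_faults(refs, 3))   # 16 faults here, versus 10 for the optimal policy
```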
• Belady’s optimal algorithm: Refers to the replacement policy which has “perfect knowledge” of the page reference string; thus, it always chooses an optimal page to be removed from the memory. Let the forward distance FWD_t(p) of a page p at time t be the distance from the current point in the reference stream to the next place in the stream where the same page is referenced again. In the optimal algorithm, the replaced page y_t is one that has maximal forward distance:

FWD_t(y_t) = max over all loaded pages x of FWD_t(x)

Since more than one page is loaded at time t, there may be more than one page that never appears again in the reference stream, that is, there may be more than one loaded page with maximal forward distance. In this case, Belady’s optimal algorithm chooses an arbitrary loaded page with maximal forward distance.
The optimal algorithm can only be implemented if the full page reference stream is known in advance.
Since it is rare for the system to have such knowledge, the algorithm is not practically realisable.
Instead, its theoretical behavior is used to compare the performance of realisable algorithms with
the optimal performance.
Although it is usually not possible to exactly predict the page reference stream, one can sometimes
predict the next page with a high probability that the prediction will be correct. For example, the
conditional branch instruction at the end of a loop almost always branches back to the beginning of
the loop rather than exiting it. Such predictions are based on static analysis.
Figure 5 shows Belady’s optimal algorithm behavior:

Reference:  2   1   3   4   2   1   3   4   2   1   3   4   5   6   7   8
Frame 0:    2*  2   2   2   2   2   2   2   2   1*  1   1   5*  5   5   8*
Frame 1:        1*  1   1   1   1   3*  3   3   3   3   3   3   6*  6   6
Frame 2:            3*  4*  4   4   4   4   4   4   4   4   4   4   7*  7

(An asterisk marks a page fault.)
On the basis of source code or on the dynamic behavior of the programme, this analysis can produce
enough information to incorporate replacement “hints” in the source code. The compiler and paging
systems can then be designed to use these hints to predict the future behavior of the page reference
stream.
An example of Belady’s optimal algorithm with m=3 page frames is as follows:

Reference string: 2 1 3 4 2 1 3 4 2 1 3 4 5 6 7 8

Figure 5 has a row for each of the three page frames and a column for each reference in the page stream. A table entry at row i, column j shows the page loaded in page frame i after reference r_j has been processed. The optimal algorithm behaves as shown in Figure 5 and incurs 10 page faults. An optimal page replacement algorithm has the minimum page fault rate among all algorithms.
An optimal algorithm never suffers from Belady’s anomaly. An optimal page replacement algorithm exists and is called OPT or MIN: replace the page which will not be accessed for the longest period of time. This page replacement algorithm yields the lowest possible page fault rate for a fixed number of frames.
The optimal page replacement algorithm is hard to implement because it needs advance knowledge of the reference string. The optimal algorithm is mainly used for comparison studies, as in the sketch below.
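A minimal sketch of OPT under the same setup as the FIFO example above: on a fault with all frames full, it evicts the resident page with maximal forward distance, reproducing the 10 page faults of Figure 5:

```python
# A hedged sketch of Belady's optimal (OPT/MIN) policy.
def opt_faults(refs, frames):
    resident, faults = set(), 0
    for t, page in enumerate(refs):
        if page in resident:
            continue
        faults += 1
        if len(resident) == frames:
            def forward_distance(p):
                # Distance to the next reference of p, or beyond the end
                # of the stream if p is never referenced again.
                future = refs[t + 1:]
                return future.index(p) if p in future else len(future) + 1
            resident.discard(max(resident, key=forward_distance))
        resident.add(page)
    return faults

refs = [2, 1, 3, 4, 2, 1, 3, 4, 2, 1, 3, 4, 5, 6, 7, 8]
print(opt_faults(refs, 3))   # 10 page faults, matching Figure 5
```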
• Least recently used (LRU): Refers to the algorithm that is designed to take advantage of “normal” programme behavior. Programmes contain loops that cause the main line of the code to execute repeatedly. In the code portion of the address space, the control unit will repeatedly access the set of pages containing these loops. This set of pages is known as the locality of the process. If the loop or loops being executed are stored in a small number of pages, then the programme has a small code locality. A sketch follows this entry.
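The unit does not spell out the replacement rule here, but the locality argument above is the usual motivation for least-recently-used replacement. A minimal sketch, assuming that intent:

```python
from collections import OrderedDict

# A minimal LRU sketch: pages in the current locality were referenced
# recently, so the page that has gone unreferenced the longest is evicted.
def lru_faults(refs, frames):
    recency, faults = OrderedDict(), 0      # oldest entry = least recently used
    for page in refs:
        if page in recency:
            recency.move_to_end(page)       # a hit refreshes the page's recency
        else:
            faults += 1
            if len(recency) == frames:
                recency.popitem(last=False) # evict the least recently used page
            recency[page] = True
    return faults

print(lru_faults([2, 1, 3, 4, 2, 1, 3, 4, 2, 1, 3, 4, 5, 6, 7, 8], 3))  # 16
```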
• Least frequently used (LFU): Refers to the algorithm that selects a page for replacement if the page was not often used in the past; an actively used page should have a larger reference count. More than one page may satisfy the criterion for replacement, in which case any of the qualifying pages can be selected. One problem with LFU is that if a programme changes the set of pages it is currently using, the frequency counts will tend to cause the pages in the new locality to be replaced even though they are currently being used. Another problem is that LFU uses frequency counts from the beginning of the page reference stream; to mitigate this, the frequency counter can be reset each time a page is loaded rather than being allowed to increase monotonically throughout the execution of the programme.
• Not recently used (NRU): Refers to the algorithm in which a resident page that has not been accessed in the recent past is replaced. This algorithm keeps all resident pages in a circular list. A referenced bit is set for a page whenever it is accessed. If the referenced bit is 1, the page has been accessed recently; if it is 0, the page has not been referenced in the recent past.
• Second chance: Resembles the FIFO replacement algorithm. When a page has been selected, its reference bit is checked. If it is 0, the page is replaced; if the reference bit is 1, the page is given a second chance and the system moves on to select the next FIFO page. When a page is given a second chance, its reference bit is cleared and its arrival time is updated to the current time, so it will not be replaced until all other pages have been replaced (or themselves given a second chance). If a page is used often enough to keep its reference bit set, it will never be replaced. This algorithm looks for an old page that has not been used in the previous clock intervals. If the reference bit is set for all pages, the second chance algorithm degenerates into pure FIFO. A sketch of the eviction step follows.
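A minimal sketch of the second-chance eviction step described above; representing the queue as (page, reference bit) pairs is a simplification:

```python
from collections import deque

# Second-chance eviction: FIFO order, but a set reference bit buys the
# page one more trip around the queue.
def second_chance_evict(queue):
    """queue holds (page, ref_bit) pairs, oldest first; returns the evicted page."""
    while True:
        page, ref = queue.popleft()
        if ref:                      # referenced: clear the bit and give a
            queue.append((page, 0))  # second chance at the tail of the queue
        else:
            return page              # not referenced recently: evict it

q = deque([("A", 1), ("B", 0), ("C", 1)])
print(second_chance_evict(q))   # A is spared, B is evicted
print(list(q))                  # [('C', 1), ('A', 0)]; A's bit is now cleared
```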
• Most frequently used (MFU): Replaces the page with the largest reference count, on the assumption that the page with the smallest count was probably just brought into the memory, has yet to be used, and may be referenced soon. Neither MFU nor LFU is very common, as the implementation of these algorithms is fairly expensive.
• Page classes: Refers to another page replacement policy that can be implemented by using classes of pages. For example, if the reference bit and the dirty bit of a page are considered together, the pages can be classified into four classes:
1. (0, 0) neither referenced nor dirty
2. (0, 1) not referenced (recently) but dirty
3. (1, 0) referenced but clean
4. (1, 1) referenced and dirty
Each page belongs to one of these four classes. The page in the lowest nonempty class is replaced. If there is more than one page in the lowest class, the page for replacement can be chosen on a FIFO basis or at random among them.
• Page locking: Refers to the mechanism which sets a lock entry in the page table to prevent a page from being swapped out of the memory. When demand paging is used, it is sometimes necessary to lock some pages in the memory. Most importantly, the code that selects the next page to be swapped in should never be swapped out, since it could then never execute to swap itself back in. Similarly, device I/O operations may read or write data directly at memory locations of the process; the pages containing those locations must not be swapped out, because even if the process is not referencing them at present, the I/O device is. Thus, setting a lock entry in the page table prevents a page from being swapped out.

Performance Considerations

When the processor wants to read or write data from main memory, it first looks in the cache for a
matching item. A cache hit occurs when the CPU discovers that the memory location is in the cache,
and data is read from the cache. A cache miss occurs when the CPU cannot locate the memory location
in the cache. When a cache miss occurs, the cache creates a new entry and transfers data from main
memory, after which the request is completed using the cache’s contents.
Cache memory performance is usually assessed in terms of a metric known as the hit ratio:
Hit ratio = hits / (hits + misses) = number of hits / total accesses
Cache performance can be enhanced by using a larger cache block size and higher associativity, and by lowering the miss rate, the miss penalty, and the time to hit in the cache.
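The hit-ratio formula, together with the standard average-memory-access-time relation (a common textbook metric, not given in this unit), can be evaluated as follows; the counts and timings are assumed values for illustration:

```python
# The unit's hit-ratio formula, plus the standard relation
# AMAT = hit_time + miss_rate * miss_penalty.
hits, misses = 950, 50
hit_ratio = hits / (hits + misses)          # = hits / total accesses
print(hit_ratio)                            # 0.95

hit_time, miss_penalty = 1, 100             # in processor cycles (assumed)
amat = hit_time + (1 - hit_ratio) * miss_penalty
print(amat)                                 # 6.0 cycles on average
```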

Virtual Memory

A virtual or logical address is the address created by the CPU in a virtual memory system. The needed mapping to a possibly different physical address is done by a specific memory control unit, usually known as the memory management unit. According to system needs, the mapping function can be modified during programme execution.
Because of the distinction created between the logical (virtual) and physical address spaces, the former can be as big as the CPU’s addressing capabilities allow, while the latter can be considerably smaller. Only the active portion of the virtual address space is mapped to physical memory; the rest is mapped to the bulk storage device. If the requested information is in the main memory (MM), it is accessed directly and execution continues; otherwise, it must first be brought in from the bulk storage device.

Virtual Memory Organisation

Early computers’ programming flexibility was severely limited by the small amount of main memory relative to programme sizes. Small main memory volumes made the execution of big programmes difficult and prevented flexible memory space management when several processes co-existed. It was inconvenient, since programmers had to spend a lot of time figuring out how to distribute data and code between the main memory and the auxiliary storage.
Virtual memory gives a computer programmer an addressing space that is several times bigger than
the main memory’s physically available addressing area. Virtual addresses, which might be considered
artificial in certain ways, are used to place data and instructions in this area.
In reality, data and instructions are stored both in the main memory and in the auxiliary memory (usually disc memory). This is done under the supervision of the virtual memory control system, which oversees the placement of data based on virtual addresses. This system automatically (i.e., without the intervention of a programmer) fetches the data and instructions requested by currently running programmes into main memory. The virtual memory’s overall organisation is depicted in Figure 6:

[Figure 6: organisation of virtual memory; the processor emits virtual addresses, the address translator maps them to physical addresses in main memory under the virtual memory control and the operating system, with exchange between main memory and the auxiliary store (disk)]

The address space of virtual memory is split into pieces with predetermined sizes and identifiers that are
the sequential numbers of these fragments in the virtual memory’s collection of fragments. Depending
on the kind of virtual memory used, the sequences of virtual memory addresses that correspond to these
pieces are referred to as pages or segments. The number of the relevant fragment of the virtual memory
address space and the word or byte number in the supplied fragment make up a virtual memory address.
For modern virtual memory systems, we differentiate the following options:
• Paged virtual memory
• Segmented virtual memory
• Segmented virtual memory with paging
Address Translation

When accessing data stored at a virtual address, address translation is used to translate the virtual address to a physical memory address. Before the translation begins, the virtual memory system verifies whether the segment or page containing the requested word or byte is available in primary memory. This is done by testing page or segment descriptors in the corresponding tables in main memory. If the test result is negative, the requested page or segment is allocated a physical address sub-space in main memory and is then loaded into it from the auxiliary store.
The virtual memory system then updates the page or segment descriptors in the descriptor tables and gives the processor instruction that emitted the virtual address access to the requested word or byte.
Today, the virtual memory control system is realised partly in hardware and partly in software. Computer hardware is responsible for accessing descriptor tables and translating virtual addresses to physical addresses. The operating system is responsible for retrieving missing pages or segments and updating their descriptors, aided by specific memory management hardware. This hardware typically consists of a functional unit for virtual memory management and special functional blocks for virtual address translation computations.
The virtual address of a paged memory is made up of two parts: the page number and the word (byte) displacement (offset). The number of words (bytes) on a page is determined by the page’s fixed size, which is typically a power of two. For each virtual address space, a page table is kept in main memory. A page descriptor characterises each page in this table. First and foremost, a page descriptor includes the page’s current physical address, which may be a main memory address or a location in the auxiliary storage.
The main memory is split into frames, which are regions of the same size as a page. The physical address of a page is the first address of the frame that the page occupies. The auxiliary store address is determined in a manner that matches the kind of memory used as the auxiliary store (usually a disk). Figure 7 shows the virtual address translation scheme for paging:

[Figure 7: virtual address translation for paging; the page number from the virtual address is added to the page table base address to locate the page descriptor in the page table; the descriptor’s control bits are checked and its frame address is combined with the offset to form the physical address of the word in a main memory frame]

In addition, a page descriptor has a number of control bits, which determine the page’s status and type. Examples of control bits are a page existence bit (page status) indicating presence in main memory, the acceptable access code, a modification registration bit, a swapping lock bit, and an operating system inclusion bit.
Address translation converts the virtual address of a word (byte) into a physical address in the main memory. The translation is completed using the page descriptor. The descriptor is found inside the page table by adding the page number supplied in the virtual address to the page table base address. The page status is read from the descriptor. If the page is in the main memory, the frame address is read from the descriptor and combined with the word (byte) offset from the virtual address to form the physical address, which is then used to access the data in the main memory.
If the requested page is not present in main memory, the programme’s execution is halted and a “missing page” exception is thrown. The operating system handles this exception: the missing page is transferred from the auxiliary store to the main memory, and the address of the allocated frame is saved in the page descriptor with a control bit adjustment. The stopped programme is then restarted, and the required word or byte is accessed. A sketch of this translation follows.
When a computer system has a large number of users or huge applications (tasks), each user or task might have its own page table. In this case, the virtual memory control uses a two-level page table. At the first level, a page table directory is kept; it includes the base addresses of all the system’s page tables. The virtual address then contains three fields: the page table number in the page table directory, the requested page number, and the word (byte) offset.
Figure 8 shows the virtual address translation scheme with two-level page tables:

[Figure 8: virtual address translation with two-level page tables; the page table number is added to the page table directory base to select a page table base address, the page number then selects a page descriptor in that page table, and the frame address from the descriptor is combined with the offset to form the physical address]
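The two-level lookup can be sketched the same way; the field widths below are illustrative assumptions, not values from the unit:

```python
# A minimal sketch of the two-level scheme: the first field indexes the page
# table directory, the second indexes the selected page table.
DIR_BITS, PAGE_BITS, OFFSET_BITS = 2, 2, 12

def translate2(vaddr, directory):
    offset = vaddr & ((1 << OFFSET_BITS) - 1)
    page   = (vaddr >> OFFSET_BITS) & ((1 << PAGE_BITS) - 1)
    table  = (vaddr >> (OFFSET_BITS + PAGE_BITS)) & ((1 << DIR_BITS) - 1)
    frame  = directory[table][page]          # two memory references, in general
    return (frame << OFFSET_BITS) | offset

directory = {0: {0: 5, 1: 9}}                # table 0 with two page descriptors
print(hex(translate2(0x1ABC, directory)))    # table 0, page 1 -> frame 9: 0x9abc
```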

Segmented memory is another form of virtual memory. With this type of virtual memory, programmes are constructed from segments defined by a programmer or a compiler. Segments have their own distinct identifiers, lengths, and address spaces. Within a segment, data or instructions are written at successive addresses in the memory. Segments have identified owners and access privileges for other users: a segment can be “private” to a single user or “shared,” meaning it can be accessed by other users. Segment parameters may be modified while the application is running, allowing the segment length and the rules for mutual access by many users to be changed on the fly.
Segments are arranged in a shared virtual address space by their names and lengths. They can reside in either the main memory or the auxiliary store, which is usually disc memory. The operating system’s segmented memory control mechanism automatically fetches the segments requested by currently running applications and places them in main memory.
In a system with multiple users, segmentation is both a technique to extend the address space of user programmes and a tool for deliberate, structured programme organisation with specified access rights and segment protection.
Figure 9 shows different segments in a programme:

[Figure 9: a programme divided into five segments of different lengths within a 0 to 16K address range: a symbol table, source code, constants, object code, and a stack]

In segmentation, the virtual address consists of two fields: a segment number and a word displacement inside the segment. The segment table stores a descriptor for each segment. The parameters of a segment, such as control bits, segment address, segment length, and protection bits, are specified in the descriptor. Usually, the control bits contain a main memory presence bit, a segment type code, an authorised access type code, and size extension control. The protection bits store the segment’s privilege code in the data protection scheme.
When attempting to access the contents of a segment, the privilege level of the accessing programme is
compared to the privilege level of the segment, and the access rules are verified. Access to the segment
is prohibited if the access rules are not followed, and the exception “segment access rules violation” is
thrown. Figure 10 shows the virtual address translation scheme with segmentation:

[Figure 10: virtual address translation with segmentation; the segment number selects a segment descriptor (control bits, segment address, length, protection) in the segment table, the offset is compared against the length limit, and the segment address plus the offset forms the physical address]
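A minimal sketch of segmented translation with the limit check described above; the descriptor is reduced to a (base address, length) pair for illustration:

```python
# A hedged sketch of segmented translation: the segment descriptor supplies
# a base address and a length, and an offset beyond the length raises the
# access-violation exception described in the text.
class SegmentViolation(Exception):
    pass

def translate_seg(segment, offset, segment_table):
    base, length = segment_table[segment]    # descriptor: (address, length)
    if offset >= length:                     # the "comparison in limit" step
        raise SegmentViolation(segment)
    return base + offset                     # physical address

segment_table = {0: (0x4000, 0x1000), 1: (0x8000, 0x0200)}
print(hex(translate_seg(0, 0x0123, segment_table)))   # 0x4123
# translate_seg(1, 0x0300, segment_table) would raise SegmentViolation
```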
Segmented virtual memory with paging is the third form of virtual memory. Segments are paged in this
memory, which means they contain the number of pages specified by a programmer or compiler. In
segmented paged virtual memory, a virtual address consists of three parts, namely, a segment number,
a page number, and a word or byte offset on a segmented page.
The segment table, which includes segment descriptors, and the segment page table, which contains
descriptors of pages belonging to the segment, are used to translate virtual addresses into physical
addresses, as illustrated in Figure 11:

[Figure 11: virtual address translation in segmented memory with paging; the segment table base address plus the segment number selects a segment descriptor, which supplies the base address of the segment page table; the page number then selects a page descriptor, whose frame address is combined with the offset to address the word (byte) in a main memory frame]

A segment number is used to select a segment descriptor by indexing from the segment table’s base address. This base address is usually saved in a special register (the segment table address pointer register) that is loaded before the application starts. A segment descriptor contains the address of the page table for the given segment, the segment size in pages, control bits, and protection bits.
The control bits contain the existence bit for the main memory, a segment type code, a code of the authorised access type, and extension control bits. The protection bits store the privilege level of the segment in the general protection system. Figure 12 shows the segment descriptor structure in segmented virtual memory with paging:

[Figure 12: segment descriptor structure: control bits | protection bits | segment length (in pages) | segment page table address]
A process always uses virtual addresses. The Memory Management Unit (MMU), a part of the CPU, performs the address translation and caches recently used translations in a Translation Lookaside Buffer (TLB), a page table cache. The page tables themselves are stored in the OS’s virtual address space and are, at best, present in main memory, which costs one main memory reference per address translation: to translate a virtual memory address, the MMU has to read the relevant page table entry out of memory. Figure 13 shows the address translation table:

[Figure 13: address translation table; the virtual address (VPN, page offset) indexes a table of entries VPN 0 to VPN N; valid entries hold physical page numbers (PPN) in main memory, invalid entries hold disk addresses on the hard disk, and the physical address is (PPN, page offset)]
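A minimal sketch of a TLB in front of a page table, showing how a hit avoids the extra page-table reference; the structures and sizes are illustrative assumptions:

```python
# A TLB caches recent VPN -> PPN translations; a hit avoids the extra
# main-memory reference needed to read the page table entry.
OFFSET_BITS = 12
tlb = {}                                    # VPN -> PPN, recently used only

def translate_with_tlb(vaddr, page_table):
    vpn, po = vaddr >> OFFSET_BITS, vaddr & ((1 << OFFSET_BITS) - 1)
    if vpn in tlb:                          # TLB hit: no page-table access
        return (tlb[vpn] << OFFSET_BITS) | po
    ppn = page_table[vpn]                   # TLB miss: one main-memory reference
    tlb[vpn] = ppn                          # cache the translation
    return (ppn << OFFSET_BITS) | po

page_table = {0: 3}
print(hex(translate_with_tlb(0x0FFF, page_table)))   # miss, then 0x3fff
print(hex(translate_with_tlb(0x0123, page_table)))   # hit in the TLB: 0x3123
```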

Conclusion

• Cache memory is a type of memory that operates at extremely fast speeds.
• Cache memory is a form of memory that works as a buffer between the RAM and the CPU and is highly fast.
• Various types of mapping functions that are used by cache memory are as follows:
  • Direct Mapping
  • Associative Mapping
  • Set-associative Mapping
• Page replacement is fundamental to demand paging, and there are many page replacement algorithms. An algorithm is assessed by executing it on a particular sequence of memory references for a process and counting the number of page faults. The sequence of memory references is called a reference string.
• A virtual or logical address is the address created by the CPU in a virtual memory system.
• Virtual memory gives a computer programmer an addressing space that is several times bigger than the main memory's physically available addressing area.
• Virtual addresses, which might be considered artificial in certain ways, are used to place data and instructions in this area.
• Depending on the kind of virtual memory used, the sequences of virtual memory addresses that correspond to its fragments are referred to as pages or segments.
• The number of the relevant fragment of the virtual memory address space and the word or byte number in the supplied fragment make up a virtual memory address.
• For modern virtual memory systems, we differentiate the following options:
  • Paged virtual memory
  • Segmented virtual memory
  • Segmented virtual memory with paging
• The Memory Management Unit (MMU), a part of the CPU, does the address translation.

• Cache memory: It is a form of memory that works as a buffer between the RAM and the CPU and is highly fast.
• Direct mapping: This method maps each block of main memory into only one potential cache line.
• Associative mapping: In this form of mapping, an associative memory is utilised to store the content and addresses of the memory word.
• Set-associative mapping: This type of mapping is an improved version of direct mapping that eliminates the disadvantages of direct mapping.
• Virtual memory: It is a memory management technique that helps in executing programmes that are larger in size than the physical main memory.
• Virtual memory organisation: A set of techniques is provided by the virtual memory to execute a programme that is not present entirely in the memory.

Self-Assessment Questions

1. Cache memory is a type of memory that operates at extremely fast speeds. Discuss.
2. Various types of mapping functions that are used by cache memory are direct mapping, associative mapping, and set-associative mapping. What do you understand by direct mapping?
3. Page replacement is obvious in demand paging, and there are many page replacement algorithms.
Discuss the random replacement algorithm.
4. A virtual or logical address is the address created by the CPU in a virtual memory system. Discuss.
5. What do you understand by segmented virtual memory?

Answers to Self-Assessment Questions

1. A computer’s CPU can generally process instructions and data faster than it can fetch them from
a low-cost main memory unit. As a result, the memory cycle time becomes the system’s bottleneck.
Using cache memory is one technique to minimise memory access time. This is a tiny, quick memory
that sits between the CPU and the bigger, slower main memory. Refer to Section Cache Memory
2. Direct mapping is the most basic method: it maps each block of main memory into only one possible cache line. Each memory block is assigned to a specific line in the cache. If a line is already occupied when a new block must be loaded, the old block is overwritten. Refer to Section Cache Memory
3. In the random-replacement algorithm, the replaced page is chosen at random; the memory manager randomly chooses any loaded page. Because the policy selects the page to be replaced from any frame with equal probability, it uses no knowledge of the reference stream (or the locality) when it selects the page frame to replace. Refer to Section Cache Memory
4. A virtual or logical address is the address created by the CPU in a virtual memory system. The needed mapping to a possibly different physical address is done by a specific memory control unit, usually known as the memory management unit. According to system needs, the mapping function can be modified during programme execution. Refer to Section Virtual Memory
5. Segmented memory is another form of virtual memory. With this type of virtual memory, programmes are constructed from segments defined by a programmer or a compiler. Segments have their own distinct identifiers, lengths, and address spaces. Within a segment, data or instructions are written at successive addresses in the memory, and segments have identified owners and access privileges for other users. Refer to Section Virtual Memory


• Do online research on cache memory and virtual memory, and discuss with your classmates the uses, advantages and disadvantages of cache memory and virtual memory.
