Notes for 4th Unit


4th Unit Memory Management

Resident Monitor:

The resident monitor works like a primitive operating system: it controls the instructions and performs all necessary functions. It also works as a job sequencer, because it sequences the jobs and sends them to the processor.

After scheduling the jobs, the resident monitor loads the programs one by one into main memory according to their sequence. An important property of the resident monitor is that there is no gap between one program's execution and the next, so processing is faster.

The Resident monitors are divided into 4 parts as:

1. Control Language Interpreter

2. Loader

3. Device Driver

4. Interrupt Processing

These are explained below.

1. Control Language Interpreter:


The first part of the resident monitor is the control language interpreter, which is used to read and carry out instructions from one level to the next.

2. Loader:
The second part of the resident monitor, and its main part, is the loader, which loads all the necessary system and application programs into main memory.

3. Device Driver:
The third part of the resident monitor is the device driver, which manages the input-output devices connected to the system. It is essentially the interface between the user and the system: for each request the user makes, the device driver produces the response the system generates to fulfill it.

4. Interrupt Processing:
The fourth part, as the name suggests, processes all interrupts that occur in the system.

Fixed Partitioning

Fixed partitioning, also called contiguous memory allocation, is the earliest and one of the simplest techniques for loading more than one process into main memory.
In this technique, the main memory is divided into partitions of equal or different sizes. The operating system always resides in the first partition, while the other partitions can be used to store user processes. Memory is assigned to the processes in a contiguous way.

In fixed partitioning,

1. The partitions cannot overlap.

2. A process must be contiguously present in a partition for the execution.

There are various cons of using this technique.

1. Internal Fragmentation

If the size of the process is less than the total size of the partition, the rest of the partition is wasted and remains unused. This wasted memory is called internal fragmentation.

For example, if a 4 MB partition is used to load only a 3 MB process, the remaining 1 MB is wasted.

2. External Fragmentation

The total unused space of the various partitions cannot be used to load a process: space is available, but not in contiguous form.

For example, if 1 MB remains unused in each partition, those leftover pieces cannot be combined into a unit to store a 4 MB process. Despite the fact that sufficient space is available in total, the process will not be loaded.

3. Limitation on the size of the process

If the process is larger than the largest partition, it cannot be loaded into memory. Fixed partitioning therefore imposes a limit on process size: a process cannot be larger than the largest partition.

4. Degree of multiprogramming is less

By degree of multiprogramming, we simply mean the maximum number of processes that can be loaded into memory at the same time. In fixed partitioning, the degree of multiprogramming is fixed and quite low, because the sizes of the partitions cannot be varied according to the sizes of the processes.
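
To make these costs concrete, here is a minimal Python sketch (my illustration, not part of the original notes) of first-fit allocation into fixed partitions. The partition and process sizes are assumptions chosen to show both problems at once: every placement wastes some space inside its partition, and the last process cannot be loaded even though enough memory is wasted in total.

```python
# Sketch of fixed partitioning with first-fit allocation.
# Partition and process sizes (in MB) are illustrative assumptions.

partitions = [4, 8, 8, 16]              # fixed partition sizes
occupied = [None] * len(partitions)     # which process sits in each partition

def allocate(process, size_mb):
    """Place a process into the first free partition large enough for it."""
    for i, cap in enumerate(partitions):
        if occupied[i] is None and size_mb <= cap:
            occupied[i] = (process, size_mb)
            waste = cap - size_mb       # internal fragmentation in this partition
            print(f"{process} ({size_mb} MB) -> partition {i} ({cap} MB), "
                  f"{waste} MB wasted internally")
            return
    print(f"{process} ({size_mb} MB) could not be loaded")

for name, size in [("P1", 3), ("P2", 7), ("P3", 7), ("P4", 14), ("P5", 2)]:
    allocate(name, size)
# P1..P4 together waste 5 MB inside their partitions, yet P5 (2 MB) cannot
# be loaded: the degree of multiprogramming is capped by the partition count.
```
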
Dynamic Partitioning

Dynamic partitioning tries to overcome the problems caused by fixed partitioning. In this technique, the
partition size is not declared initially. It is declared at the time of process loading.

The first partition is reserved for the operating system. The remaining space is divided into parts, and the size of each partition equals the size of the process it holds. Because the partition size varies according to the needs of the process, internal fragmentation can be avoided.
Advantages of Dynamic Partitioning over fixed partitioning

1. No Internal Fragmentation

Given that the partitions in dynamic partitioning are created according to the needs of the processes, there is no internal fragmentation: no unused space remains inside a partition.

2. No Limitation on the size of the process

In fixed partitioning, a process larger than the largest partition could not be executed due to the lack of sufficient contiguous memory. In dynamic partitioning, there is no such restriction on process size, since the partition size is decided according to the process size.

Disadvantages of dynamic partitioning


External Fragmentation

Absence of internal fragmentation doesn't mean that there will not be external fragmentation.

Let's consider three processes P1 (1 MB), P2 (3 MB), and P3 (1 MB) being loaded into their respective partitions of the main memory.

After some time, P1 and P3 complete and their assigned space is freed. Now there are two unused partitions (1 MB and 1 MB) in the main memory, but they cannot be used to load a 2 MB process, since they are not contiguously located.

The rule says that a process must be contiguously present in main memory to be executed. We need to change this rule to avoid external fragmentation.
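
A tiny Python sketch of this exact scenario (my illustration, using the sizes from the example above): memory is modeled as a list of regions, P1 and P3 are freed, and a 2 MB request fails because no single hole is large enough.

```python
# Sketch of external fragmentation in dynamic partitioning.
# Memory is a list of (start, size, owner) regions; owner None means a hole.

memory = [(0, 1, "P1"), (1, 3, "P2"), (4, 1, "P3")]   # sizes in MB

def free(owner):
    """Turn the named process's region into a hole."""
    global memory
    memory = [(s, sz, None if o == owner else o) for s, sz, o in memory]

def largest_hole():
    return max((sz for _, sz, o in memory if o is None), default=0)

free("P1"); free("P3")                  # P1 and P3 complete
total_free = sum(sz for _, sz, o in memory if o is None)
print(f"total free = {total_free} MB, largest hole = {largest_hole()} MB")
# total free = 2 MB, largest hole = 1 MB: the 2 MB of free space exists
# but is not contiguous, so a 2 MB process still cannot be loaded.
```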

Memory Protection in Operating Systems

Memory protection is a crucial component of operating systems that prevents one process's memory from being used by another. It is vital in contemporary operating systems because it allows multiple programs to run at the same time without tampering with each other's memory.

The primary goal of memory protection is to prevent an application from accessing RAM without permission. Whenever a process attempts to use memory it does not have permission to access, the operating system stops and terminates the process. This keeps the program from reaching memory it should not.

Different Ways of Memory Protection

Segmentation

Memory is divided into segments, each of which can have a separate set of access rights. An OS kernel segment, for instance, might be read-only, whereas a user data segment could be designated read-write.
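
A minimal sketch of how a segment table might enforce such rights; the segment names, base/limit values, and rights sets are illustrative assumptions, not any particular OS's layout.

```python
# Sketch of a segment table with per-segment access rights (all values assumed).
segment_table = {
    "kernel":    {"base": 0x0000, "limit": 0x4000, "rights": {"read"}},
    "user_data": {"base": 0x8000, "limit": 0x2000, "rights": {"read", "write"}},
}

def access(segment, offset, op):
    """Check limit and rights, then form the effective address."""
    seg = segment_table[segment]
    if offset >= seg["limit"]:
        raise MemoryError(f"segmentation fault: offset beyond '{segment}' limit")
    if op not in seg["rights"]:
        raise PermissionError(f"protection fault: cannot {op} '{segment}'")
    return seg["base"] + offset

print(hex(access("user_data", 0x10, "write")))   # allowed -> 0x8010
try:
    access("kernel", 0x10, "write")              # kernel segment is read-only
except PermissionError as e:
    print(e)                                     # the OS would stop the process
```
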
Paged Virtual Memory

In paged virtual memory, memory is divided into pages, and each page can be stored in its own place in physical memory. The OS uses a page table to keep track of where pages are kept. This gives the operating system the ability to move pages to various parts of physical memory, where they can be secured against unauthorized access.

Protection keys

Each RAM page has a set of bits called protection keys. Access to the page can be controlled using these bits. A protection key could be used, for instance, to specify whether a page can be read, written, or executed.
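
One hedged way to picture a protection key is as a small read/write/execute bitmask attached to each page; the encoding below is an illustrative assumption, not real hardware.

```python
# Sketch of per-page protection bits (encoding is an assumption).
READ, WRITE, EXEC = 0b100, 0b010, 0b001

page_keys = {0: READ | EXEC,     # page 0: code, readable and executable
             1: READ | WRITE}    # page 1: data, readable and writable

def check(page, wanted):
    """Allow the access only if every wanted bit is set for the page."""
    if page_keys[page] & wanted == wanted:
        return "allowed"
    return "trap to OS"          # the OS would stop the offending process

print(check(0, WRITE))           # writing a code page -> trap to OS
print(check(1, READ | WRITE))    # reading/writing the data page -> allowed
```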

Virtual Memory in Operating System

Virtual memory is a storage allocation scheme in which secondary memory can be addressed as though it were part of the main memory. The addresses a program may use to reference memory are distinguished from the addresses the memory system uses to identify physical storage sites, and program-generated addresses are translated automatically to the corresponding machine addresses.

What is Virtual Memory?

Virtual memory is a memory management technique used by operating systems to give the appearance
of a large, continuous block of memory to applications, even if the physical memory (RAM) is limited. It
allows the system to compensate for physical memory shortages, enabling larger applications to run on
systems with less RAM.

A memory hierarchy, consisting of a computer system's memory and a disk, enables a process to operate with only some portions of its address space in memory. Virtual memory is what its name indicates: an illusion of a memory that is larger than the real memory. We refer to the software component of virtual memory as the virtual memory manager. The basis of virtual memory is the noncontiguous memory allocation model. The virtual memory manager removes some components from memory to make room for others.

Types of Virtual Memory

In a computer, virtual memory is managed by the Memory Management Unit (MMU), which is often
built into the CPU. The CPU generates virtual addresses that the MMU translates into physical addresses.

There are two main types of virtual memory:

• Paging

• Segmentation

Paging

Paging divides memory into small fixed-size blocks called pages. When the computer runs out of RAM,
pages that aren’t currently in use are moved to the hard drive, into an area called a swap file. The swap
file acts as an extension of RAM. When a page is needed again, it is swapped back into RAM, a process
known as page swapping. This ensures that the operating system (OS) and applications have enough
memory to run.
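
As a rough sketch of the bookkeeping involved, here is how a virtual address splits into a page number and an offset and is translated through a page table. The 4 KB page size and the table contents are illustrative assumptions.

```python
# Sketch of virtual-to-physical address translation (values are assumptions).
PAGE_SIZE = 4096                        # 4 KB pages

page_table = {0: 5, 1: 9, 2: 1}         # virtual page number -> physical frame

def translate(vaddr):
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    frame = page_table[vpn]             # a missing entry would be a page fault
    return frame * PAGE_SIZE + offset

print(hex(translate(0x1234)))           # page 1, offset 0x234 -> 0x9234
```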

Demand Paging: The process of loading a page into memory on demand (whenever a page fault occurs) is known as demand paging. It includes the following steps:

• If the CPU tries to refer to a page that is currently not available in the main memory, it generates
an interrupt indicating a memory access fault.

• The OS puts the interrupted process in a blocking state. For the execution to proceed the OS
must bring the required page into the memory.

• The OS will search for the required page in secondary storage (the backing store).

• The required page will be brought from secondary storage into the physical address space. Page replacement algorithms are used to decide which page in physical memory to replace (a FIFO sketch follows these steps).

• The page table will be updated accordingly.

• A signal will be sent to the CPU to continue the program execution, and the process will be placed back into the ready state.
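
Here is a minimal demand-paging simulation with FIFO page replacement that counts page faults for a reference string; the string and frame counts are illustrative assumptions.

```python
# Sketch of demand paging with FIFO page replacement.
from collections import deque

def count_faults(references, n_frames):
    frames, fifo, faults = set(), deque(), 0
    for page in references:
        if page not in frames:          # page fault: page not in main memory
            faults += 1
            if len(frames) == n_frames:
                frames.discard(fifo.popleft())   # evict the oldest page
            frames.add(page)
            fifo.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(count_faults(refs, 3))   # 9 faults with 3 frames
print(count_faults(refs, 4))   # 10 faults with 4 frames (Belady's anomaly)
```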

What is Swapping?

Swapping a process out means removing all of its pages from memory, or marking them so that they will be removed by the normal page replacement process. Suspending a process ensures that it is not runnable while it is swapped out. At some later time, the system swaps the process back from secondary storage into main memory. When a process is busy swapping pages in and out, the situation is called thrashing.


Thrashing

Thrashing occurs when a computer's virtual memory resources are overused, leading to a constant state of paging and page faults and inhibiting most application-level processing. It causes the performance of the computer to degrade or collapse. The situation can continue indefinitely until the user closes some running applications or the active processes free up additional virtual memory resources.

To understand thrashing clearly, we first need to know about page faults and swapping.

o Page fault: We know every program is divided into pages. A page fault occurs when a program attempts to access data or code in its address space that is not currently located in system RAM.

o Swapping: Whenever a page fault happens, the operating system fetches that page from secondary memory and swaps it with one of the pages in RAM. This process is called swapping.

When page faults and swapping happen very frequently, the operating system has to spend most of its time swapping pages back and forth. This state is known as thrashing. Because of thrashing, CPU utilization is reduced to little or nothing.

Causes of Thrashing

Programs or workloads may cause thrashing, which results in severe performance problems. The typical sequence is:

o If CPU utilization is too low, the degree of multiprogramming is increased by introducing a new process, and a global page replacement algorithm is used. The CPU scheduler sees the decreasing CPU utilization and increases the degree of multiprogramming.

o CPU utilization is plotted against the degree of multiprogramming.

o As the degree of multiprogramming increases, CPU utilization also increases.

o If the degree of multiprogramming is increased further, thrashing sets in, and CPU utilization
drops sharply.

o So, at this point, to increase CPU utilization and stop thrashing, we must decrease the degree of multiprogramming (see the control-loop sketch after this list).
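
A hedged sketch of that feedback loop, in the spirit of the classic page-fault-frequency scheme: when the measured fault rate crosses an upper threshold, a process is swapped out; when it falls below a lower threshold, a new process can be admitted. The thresholds and sampled rates are illustrative assumptions.

```python
# Sketch of a page-fault-frequency control loop (all numbers assumed).
UPPER, LOWER = 30.0, 5.0                # acceptable fault-rate band (faults/s)

def adjust(fault_rate, degree):
    if fault_rate > UPPER and degree > 1:
        return degree - 1               # thrashing: swap a process out
    if fault_rate < LOWER:
        return degree + 1               # memory underused: admit a process
    return degree                       # fault rate acceptable: no change

degree = 5
for rate in [2.0, 4.0, 12.0, 45.0, 38.0, 8.0]:
    degree = adjust(rate, degree)
    print(f"fault rate {rate:5.1f}/s -> degree of multiprogramming {degree}")
```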

How to Eliminate Thrashing

Thrashing has negative impacts on hard drive health and system performance. Therefore, it is necessary to take some action to avoid it. The following methods can help resolve thrashing:

o Adjust the swap file size: If the system swap file is not configured correctly, disk thrashing can occur.

o Increase the amount of RAM: Since insufficient memory can cause disk thrashing, one solution is to add more RAM to the machine. With more memory, the computer can handle tasks easily and does not have to work excessively. Generally, this is the best long-term solution.

o Decrease the number of applications running on the computer: Too many applications running in the background consume a large share of system resources, and the shortfall can result in thrashing. Closing some applications releases their resources and helps avoid thrashing to some extent.

o Replace programs: Replace memory-heavy programs with equivalents that use less memory.

Cache Memory in Computer Organization

Cache memory is a small, high-speed storage area in a computer. The cache is a smaller and faster memory that stores copies of the data from frequently used main memory locations. There are various independent caches in a CPU, which store instructions and data. The most important use of cache memory is to reduce the average time to access data from main memory.

By storing this information closer to the CPU, cache memory helps speed up the overall processing time.
Cache memory is much faster than the main memory (RAM). When the CPU needs data, it first checks
the cache. If the data is there, the CPU can access it quickly. If not, it must fetch the data from the slower
main memory.

Characteristics of Cache Memory

• Cache memory is an extremely fast memory type that acts as a buffer between RAM and the
CPU.

• Cache Memory holds frequently requested data and instructions so that they are immediately
available to the CPU when needed.

• Cache memory is costlier than main memory or disk memory but more economical than CPU
registers.

• Cache Memory is used to speed up and synchronize with a high-speed CPU.

Levels of Memory

• Level 1 or Registers: Registers hold the data that the CPU is operating on immediately. The most commonly used registers are the accumulator, program counter, and address register.

• Level 2 or Cache memory: The fastest memory after registers, where data is temporarily stored for faster access.

• Level 3 or Main Memory: The memory the computer works on currently. It is limited in size, and once power is off the data no longer stays in this memory.

• Level 4 or Secondary Memory: External memory that is not as fast as main memory, but where data stays permanently.

Cache Performance

When the processor needs to read or write a location in the main memory, it first checks for a
corresponding entry in the cache.

• If the processor finds that the memory location is in the cache, a Cache Hit has occurred and
data is read from the cache.

• If the processor does not find the memory location in the cache, a cache miss has occurred. On a miss, the cache allocates a new entry and copies the data in from main memory; the request is then fulfilled from the contents of the cache (a small simulation follows).
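
A minimal direct-mapped cache simulation (my illustration; the line count and line size are arbitrary assumptions) that counts hits and misses for a sequential scan:

```python
# Sketch of a direct-mapped cache and its hit ratio (sizes are assumptions).
CACHE_LINES, LINE_SIZE = 4, 16          # 4 lines of 16 bytes each

cache = [None] * CACHE_LINES            # each slot holds a block number (tag)
hits = misses = 0

def access(addr):
    global hits, misses
    block = addr // LINE_SIZE
    index = block % CACHE_LINES         # which cache line the block maps to
    if cache[index] == block:
        hits += 1                       # cache hit: served from the cache
    else:
        misses += 1                     # cache miss: fill the line from memory
        cache[index] = block

for addr in range(128):                 # sequential scan: good spatial locality
    access(addr)
print(f"hits={hits} misses={misses} hit ratio={hits / (hits + misses):.2%}")
# Only the first byte of each 16-byte block misses: 8 misses, 120 hits (93.75%).
```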

Locality of Reference

Locality of reference refers to the tendency of a computer program to access the same set of memory locations over a particular period of time. The property of locality of reference is mainly shown by loops and subroutine calls in a program.

On an abstract level, there are two types of locality:

Temporal locality

This type of optimization brings frequently accessed memory references into a nearby memory location for a short duration of time, so that future accesses are much faster.

For example, if a variable is accessed very frequently, we keep it in a register, the level of the memory hierarchy nearest to the CPU, for faster access.

Spatial locality

This type of optimization assumes that if a memory location has been accessed, it is highly likely that a nearby or consecutive memory location will be accessed as well; hence we bring the nearby memory references into a nearby memory location too, for faster access.

For example, traversal of a one-dimensional array in any instruction set will benefit from this
optimization.

Using these optimizations, we can greatly improve the efficiency of programs; they can be implemented at the hardware level or the software level.
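
A quick sketch of spatial locality in action: the two loops below touch the same elements, but the row-by-row order visits them as they sit in memory, while the column order jumps a full row on each step. This is plain Python, where the effect is far weaker than with contiguous C arrays, so treat the timings as illustrative only.

```python
# Sketch of row-major vs column-major traversal of a 2-D array.
import time

N = 1000
matrix = [[i * N + j for j in range(N)] for i in range(N)]

def row_major():
    s = 0
    for i in range(N):
        for j in range(N):
            s += matrix[i][j]           # consecutive elements of one row
    return s

def column_major():
    s = 0
    for j in range(N):
        for i in range(N):
            s += matrix[i][j]           # a new row on every access
    return s

for fn in (row_major, column_major):
    t0 = time.perf_counter()
    fn()
    print(f"{fn.__name__}: {time.perf_counter() - t0:.3f}s")
```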

Let us see how locality of reference relates to cache memory and the hit ratio.

Relationship with Cache memory


Cache is a specially designed faster but smaller memory area, generally used to keep recently referenced
data and data near recently referenced data, which can lead to potential performance increases.

Data in the cache does not necessarily correspond to data that is spatially close in main memory. However, data elements are brought into the cache one cache line at a time, so spatial locality is again important: if one element is referenced, a few neighbouring elements will be brought into the cache as well.

Finally, temporal locality plays a role at the lowest level, since results that are referenced very close together in time can be kept in machine registers. Programming languages such as C allow the programmer to suggest that certain variables be kept in registers.
