
Student Name: KADIYAM AARK TEJHA
Student Registration Number: AUP23SCMCA119
Class & Section: MCA & D
Study Level (UG/PG): PG
Year & Term: FIRST YEAR & TERM-2
Subject Name: OPERATING SYSTEM-II
Name of the Assessment: REFLECTIVE ESSAY
Date of Submission: 17-04-2024

REFLECTIVE ESSAY
MEMORY MANAGEMENT
Memory management in an operating system (OS) is the process of managing
primary memory, or random-access memory, to improve concurrency, memory
utilization, and system performance. It keeps track of available memory, memory
allocation, and unallocated memory, and moves processes between primary and
secondary memory.
Role of Memory management
Following are the important roles of memory management in a computer system:
• The memory manager keeps track of the status of memory locations,
whether they are free or allocated. It addresses primary memory by providing
abstractions so that software perceives a large memory as allocated to it.
• Memory manager permits computers with a small amount of main memory
to execute programs larger than the size or amount of available memory. It
does this by moving information back and forth between primary memory
and secondary memory by using the concept of swapping.
• The memory manager is responsible for protecting the memory allocated
to each process from being corrupted by another process. If this is not
ensured, then the system may exhibit unpredictable behavior.
• Memory managers should enable sharing of memory space between
processes. Thus, two programs can share the same region of memory, for
example a common code segment.
Memory Management Techniques:
The memory management techniques can be classified into following main
categories:
• Contiguous memory management schemes
• Non-Contiguous memory management schemes
Contiguous memory management schemes:
In a Contiguous memory management scheme, each program occupies a single
contiguous block of storage locations, i.e., a set of memory locations with
consecutive addresses.
• Single contiguous memory management schemes:
The Single contiguous memory management scheme is the simplest memory
management scheme used in the earliest generation of computer systems. In this
scheme, the main memory is divided into two contiguous areas or partitions. The
operating system resides permanently in one partition, generally in lower
memory, and the user process is loaded into the other partition.
Non-Contiguous memory management schemes:
In a Non-Contiguous memory management scheme, the program is divided into
different blocks and loaded at different portions of the memory that need not
necessarily be adjacent to one another. This scheme can be classified depending
upon the size of blocks and whether the blocks reside in the main memory or not.

PAGING
Paging is a technique that eliminates the requirement of contiguous allocation
of main memory. In this scheme, the main memory is divided into fixed-size
blocks of physical memory called frames. The size of a frame is kept the same
as that of a page, so that main memory is used fully and external
fragmentation is avoided.
Paging is a storage mechanism used in an OS to retrieve processes from
secondary storage into main memory as pages. The primary concept behind
paging is to break each process into individual pages; correspondingly, main
memory is divided into frames.
Advantages of Paging
• It is one of the simplest memory management algorithms to implement.
• Pages can be stored at non-contiguous locations in main memory.
• Paging removes the problem of external fragmentation.
• Because all frames are the same size, swapping becomes very easy.
• It allows faster access to data.
Disadvantages of Paging
• It may cause Internal Fragmentation.
• More Memory is consumed by the Page Tables.
• Address translation requires an extra memory lookup, increasing access time.
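The address translation that paging performs can be sketched in a few lines of Python. This is only an illustration, not how an OS implements it: the page size, page-table entries, and frame numbers below are invented for the example.

```python
# A minimal sketch of paging address translation, assuming a 4 KB page size
# and a small in-memory page table (frame numbers are made up for illustration).

PAGE_SIZE = 4096  # bytes per page and per frame

# Hypothetical page table: page number -> frame number
page_table = {0: 5, 1: 2, 2: 7}

def translate(logical_address):
    """Split a logical address into (page, offset) and map it to a physical address."""
    page_number = logical_address // PAGE_SIZE
    offset = logical_address % PAGE_SIZE
    if page_number not in page_table:
        raise KeyError(f"page fault: page {page_number} not in memory")
    frame_number = page_table[page_number]
    return frame_number * PAGE_SIZE + offset

print(translate(4100))  # page 1, offset 4 -> frame 2 -> 8196
```

Because every frame is exactly one page in size, the offset carries over unchanged, which is why internal (not external) fragmentation is paging's remaining waste.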

SEGMENTATION
Segmentation is a memory administration approach used in operating systems
that divides memory into variable-sized segments, each of which can be
assigned to a part of a process. Paging does not convey the user's view of a
process; segmentation does.
Segmentation can improve an operating system's performance. It presents the
programmer's view of a process: the size and contents of each segment are
determined by the function of the corresponding section of the user program.
Segmentation is used in many real-world applications, including:
• Healthcare: Safeguards patient data
• Gaming: Optimizes resource utilization
• Server farms: Manages memory
• Automotive systems: Enhances onboard system performance
• Embedded systems: Optimizes memory for IoT devices
There are two types of segmentation:
• Simple segmentation
Each process is divided into segments, and all segments are loaded into memory
at run time.
• Virtual memory segmentation
Each process is divided into segments, but the segmentation may not happen all
at once.
Segmentation can also:
• Prevent one program from accessing the segments of another program
• Allow multiple programs to share the same segment.
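Segmented address translation can be sketched with a segment table of (base, limit) pairs. The table contents below are invented for illustration; a real OS keeps this information in hardware-assisted segment registers or descriptor tables.

```python
# Illustrative sketch of segmentation address translation, assuming a small
# segment table of (base, limit) pairs; the numbers are made up for the example.

segment_table = {
    0: (1400, 1000),  # segment 0 starts at 1400 and is 1000 bytes long
    1: (6300, 400),
    2: (4300, 1100),
}

def translate(segment, offset):
    """Map a (segment, offset) pair to a physical address, checking the limit."""
    base, limit = segment_table[segment]
    if offset >= limit:
        raise MemoryError("segmentation fault: offset beyond segment limit")
    return base + offset

print(translate(2, 53))  # -> 4300 + 53 = 4353
```

The limit check is also how segmentation prevents one program from reaching into another program's segments.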

VIRTUAL MEMORY
Virtual memory is a memory management technique that uses hardware and
software to create the illusion of a large memory on a computer. It's a common
technique used in operating systems (OS) to compensate for physical memory
shortages. For example, when an application is in use, data from that program is
stored in a physical address using RAM. If the RAM space is needed for
something more urgent, data can be swapped out of RAM and into virtual
memory.
Demand paging is a popular method of virtual memory management. In demand
paging, a page is copied into main memory only when a demand for it is made
or a page fault occurs; rarely used pages remain in secondary memory. Various
page replacement algorithms are used to determine which resident pages will
be replaced.

Advantages of Virtual Memory


1. The degree of multiprogramming is increased.
2. Users can run large applications with less physical RAM.
3. There is less need to buy additional RAM.

Disadvantages of Virtual Memory


1. The system becomes slower, since swapping takes time.
2. Switching between applications takes more time.
3. The user has less hard disk space available for other use.

SHARED PAGES AND MEMORY-MAPPED FILES
Shared pages are memory pages that can be used by multiple processes
simultaneously in an operating system (OS). Their main advantage is that only
one copy of shared code or a shared file needs to exist in memory, which
reduces the number of resident pages and allows for more efficient use of
RAM.
A memory-mapped file is a file that stores its contents in virtual memory, which
allows an application to modify the file by reading and writing directly to the
memory. The operating system transparently loads parts of the file into physical
memory as the application accesses them, and releases them again when they
are no longer needed.
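Python's standard mmap module makes this concrete: writes to the mapped region change the file itself. The file path below is a temporary file created just for the demonstration.

```python
import mmap
import os
import tempfile

# A minimal sketch of a memory-mapped file: modifying the mapped bytes
# modifies the underlying file, with the OS paging the contents in and out.

path = os.path.join(tempfile.gettempdir(), "mmap_demo.bin")
with open(path, "wb") as f:
    f.write(b"hello world")

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mm:  # map the whole file
        mm[0:5] = b"HELLO"                # write through memory, not write()

with open(path, "rb") as f:
    print(f.read())  # b'HELLO world'
```

The application never calls write() for the update; the OS flushes the dirty pages back to the file, just as the paragraph above describes.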
Virtual memory is a memory management technique that allows a computer to
use secondary memory like the hard disk as if it were part of the main
memory. This helps when the physical memory is insufficient to handle the
tasks. Virtual memory uses both hardware and software to enable this.
In Unix, virtual memory managers create a virtual address space in secondary
memory, which determines which part of address space to load into physical
memory at any given time. This allows processes to access a large amount of
memory even if the physical memory is limited.
Virtual memory segmentation is a memory management technique that divides a
process into multiple segments (say, n of them) without requiring all of them
to be in main memory simultaneously; segments may be loaded on demand at run
time.
There are three different ways to implement virtual memory:
• Paging: Uses fixed-size pages to move between main memory and
secondary storage
• Segmentation: Uses varying-sized segments
• Segmented paging: Combines paging and segmentation

PROTECTION AND SECURITY


Protection in an operating system (OS) is a set of mechanisms that ensure the
system's security and integrity. It prevents unauthorized access, misuse, or
modification of the OS and its resources.
Security protects the integrity, confidentiality, and availability of an operating
system from threats, viruses, worms, malware, and remote hacker intrusions. It
also safeguards computer assets from being stolen, edited, or deleted.
Security measures you can take at the operating system level:
• Update your OS regularly: Regular OS updates can protect your system
from security threats.
• Use strong passwords and encryption: Strong passwords make your device far
harder to compromise.
• Install antivirus software: Antivirus software can help protect your system.
• Enable firewall and network security: A firewall is a software program that
monitors and controls incoming and outgoing network traffic based on
predefined security rules.

ACCESS CONTROL MECHANISMS:


An access control mechanism is a data security process that allows organizations
to manage who can access their resources and data. Access control is important
for preventing data breaches, monetary losses, and privacy concerns. It also helps
organizations fight attack vectors like phishing attacks, on-path attacks, KRACK
attacks, and buffer overflow attacks.
AUTHENTICATION:
Authentication is the process of verifying a user's identity by comparing their
credentials to a database of authorized users. This database can be located on the
local operating system server or an authentication server. If the credentials match
and the authenticated entity is authorized to use the resource, the user gains
access.
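A hedged sketch of that credential check is shown below. The user database, names, and password are invented for illustration; real systems store only salted password hashes and compare them in constant time, which is what this sketch does with the standard library.

```python
import hashlib
import hmac
import os

# Illustrative authentication against a stored salted hash (the "database"
# here is a plain dict; the user and password are hypothetical).

def hash_password(password, salt):
    """Derive a salted hash of the password with PBKDF2-HMAC-SHA256."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

salt = os.urandom(16)
users = {"alice": (salt, hash_password("s3cret", salt))}  # hypothetical entry

def authenticate(username, password):
    """Return True only if the credentials match the stored record."""
    record = users.get(username)
    if record is None:
        return False
    salt, stored = record
    return hmac.compare_digest(stored, hash_password(password, salt))

print(authenticate("alice", "s3cret"))  # True
print(authenticate("alice", "wrong"))   # False
```

Storing hashes rather than passwords means that even if the database leaks, the original credentials are not directly exposed.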
DISK SPACE ALLOCATION
Disk space allocation is the process of assigning disk blocks to files, which is
used to effectively use disk space and allow for quick file access.
There are three main allocation methods:
• Contiguous allocation: Files are assigned to contiguous areas of secondary
storage, with the user specifying the size of the area needed to hold a
file. Each file is contained in a single section of the disk memory, which
can be divided into fixed-sized or variable-sized partitions. Fixed-sized
partitions have sizes that cannot be changed, which can lead to internal
fragmentation.
• Linked allocation: Each file is a linked list of disk blocks that may be
scattered anywhere on the disk; each block contains a pointer to the next
block, and the directory holds the addresses of the file's first and last
blocks.
• Indexed allocation: All of a file's block pointers are brought together
into a dedicated index block, so any block of the file can be reached
directly through the index.
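Linked allocation can be modeled in a few lines: each block stores its data and the index of the next block. The disk contents and block numbers below are invented for the example.

```python
# An illustrative model of linked allocation: each disk block holds its data
# and a pointer (block index) to the next block; -1 marks the end of the file.

disk = {
    9:  ("block A", 16),
    16: ("block B", 1),
    1:  ("block C", -1),
}

def read_file(start_block):
    """Follow the chain of block pointers and collect the file's contents."""
    contents, block = [], start_block
    while block != -1:
        data, block = disk[block]
        contents.append(data)
    return contents

print(read_file(9))  # ['block A', 'block B', 'block C']
```

Note that reading block C requires first visiting A and B, which is why linked allocation supports sequential access well but direct access poorly.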

IMPLEMENTING FILE ACCESS:


An operating system (OS) controls file access by setting permissions for files and
directories. These permissions can grant or deny access to specific files and
directories. When permission is granted, you can access and perform any function
on the file or directory.
FILE ACCESS METHODS IN AN OS:
• Sequential access: Allows access to the file records in sequential order, one
after the other.
• Direct access: Allows random access to the file blocks.
• Indexed sequential access: A modification of sequential access that
contains an index that holds pointers to various other file blocks.
• Relative record access: Accesses fixed-length records by their position
relative to the start of the file; not ideal for files with frequent updates
or variable-length records.
• Content-Addressable Access (CAA): Ideal for searching large databases or
file systems because it allows for efficient searching based on the content
of the records or blocks.
• Random access: Commonly used in devices such as hard drives, solid-state
drives, and USB drives.
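The contrast between sequential and direct access can be shown with ordinary file operations: sequential access reads records in order, while direct access seeks straight to a record by its offset. The file and record layout below are invented for the demonstration.

```python
import os
import tempfile

# A short sketch contrasting sequential and direct (random) access on the
# same file of fixed-size records.

path = os.path.join(tempfile.gettempdir(), "access_demo.bin")
with open(path, "wb") as f:
    f.write(b"record0record1record2")  # three fixed-size 7-byte records

RECORD_SIZE = 7

with open(path, "rb") as f:
    first = f.read(RECORD_SIZE)  # sequential: read records one after another
    f.seek(2 * RECORD_SIZE)      # direct: jump straight to record 2
    third = f.read(RECORD_SIZE)

print(first, third)  # b'record0' b'record2'
```

Fixed-length records are what make the seek arithmetic possible; with variable-length records an index would be needed, which is the idea behind indexed sequential access.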

LIMITATIONS OF DISTRIBUTED SYSTEM

A distributed system is a group of autonomous computer systems that are
physically distinct but linked by a computer network and coordinated by
distributed system software. The autonomous computers interact with each
other by exchanging files and resources and completing the tasks assigned to
them.
Each node in a distributed system functions independently and can make local
decisions based on its own resources and state. Through shared memory or
message passing, nodes can communicate with one another, coordinate their
actions, and exchange information. Depending on the needs of the system, this
communication may be synchronous or asynchronous.
Distributed systems have inherent limitations such as the absence of shared
memory, global clock synchronization issues, high setup cost and security risks,
and communication latency and network congestion.
VIRTUAL FILE SYSTEM
A virtual file system (VFS) is a layer in an operating system that provides a
uniform interface to access different types of file storage frameworks. For
example, a VFS can be used to access local and network storage devices
transparently.
VFSs are memory-based and provide access to special kernel information and
facilities. Most VFSs do not use file system disk space, but some use a file system
on the disk to contain the cache, or use the swap space on a disk.
A Linux file system is a structured collection of files on a partition or disk drive
that manages file size, name, and creation date, among other information. The
Linux file system has a hierarchical file structure with a root directory and
subdirectories, and all other directories can be accessed from the root directory.
The Linux file system has the following sections:
• The root directory (/)
• A specific data storage format (EXT3, EXT4, BTRFS, XFS, and so on)
• A partition or logical volume having a particular file system

UNIX FILE SYSTEM

The Unix file system is a hierarchical file system that consists of the root file
system and all the file systems that are added to it. Files are members of a
directory, and each directory is in turn a member of another directory at a higher
level.
The root directory is the top-level directory in the Unix file system. It is
represented by a forward slash (/). All other directories and files are located below
the root directory.

KERNEL ORGANIZATION & DESIGN


The kernel is the core of an operating system and provides the basic
architectural model for resource and process scheduling, memory
management, networking, and device driver interfaces and
organization.
Kernels fall into three architectures:
• Monolithic: Executes all of its code in the same address space (kernel
space).
• Microkernel: Tries to run most of its services in user space, aiming to
improve maintainability and modularity of the codebase.
• Hybrid: A combination of both Monolithic and Microkernels.
MONOLITHIC VS MICROKERNEL
The main difference between monolithic and microkernel architectures is how they organize
components within kernel and user spaces.
Differences between monolithic and microkernel architectures:
• System services
Monolithic kernels run all system services in kernel space, while microkernels
only run the most basic services in kernel space.
• Performance
Monolithic kernels are generally faster and more efficient than microkernels.
• Security
Microkernels can reduce the attack surface and privilege level of system
components.
• Modularity
Microkernels separate system functions into independent modules that can be
added or removed without affecting the kernel.
KERNEL DESIGN:
The kernel is the core of an operating system (OS) that manages the computer's
operations and hardware. It's responsible for providing basic services for the OS,
including:
• Managing resources
• Managing processes
• Providing interfaces
• Managing hardware

NETWORK AND BANDWIDTH


Bandwidth and latency are two different components of internet speed that affect
how fast your internet feels. Bandwidth is the maximum amount of data that can
be transferred in a given amount of time, typically measured in bits, kilobits,
megabits, or gigabits per second. Latency is the time it takes for data to travel
from one point to another across a network. Networks with high latency have
slower response times, while networks with low latency have faster response
times. High bandwidth and low latency can lead to greater throughput.
Networked systems also face several limitations:
• Scalability
The system's resources are limited and may become saturated under increased
load.
• Security problems due to sharing
Shared resources and open communication channels expose data to unauthorized
access and tampering.
• Network issues
Network stability and bandwidth problems can occur, including network delays
and packet loss.
• Memory leaks
The operating system allocates memory to processes, and if those processes don't
return unneeded memory, the system will slow down and eventually fail
• Encryption
Encryption requires additional processing power, which can slow down the
system and reduce its overall efficiency
• Concurrency
When more than one user is accessing a single system simultaneously, this can
cause data inconsistency and even system crashes
DATA SERIALIZATION:
Data serialization is the process of converting data objects into a byte stream for
storage, distribution, and transfer on physical devices. The reverse process,
deserialization, is the process of constructing a data structure or object from a
series of bytes. Serialization and deserialization work together to allow data to be
stored and transferred.
Serialization arises in many data transfer contexts, including input/output,
concurrency, and application programming interfaces, as well as in moving
data between processors such as CPUs, GPUs, and DSPs (for example, when
launching OpenCL kernels for execution).
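A minimal round trip makes the serialize/deserialize pairing concrete, here using JSON, one common text-based serialization format; the example object is invented.

```python
import json

# Serialization turns an in-memory object into a transferable stream;
# deserialization reconstructs an equivalent object from that stream.

process = {"pid": 42, "state": "ready", "pages": [0, 1, 2]}

serialized = json.dumps(process)   # object -> character stream
restored = json.loads(serialized)  # stream -> object

print(restored == process)  # True
```

Binary formats (such as pickle or protocol buffers) follow the same pattern but produce byte streams rather than text.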
LOGICAL CLOCK:
Lamport's logical clocks are mathematical functions that assign numbers to
events to create a partial or total ordering of events. They are needed because
there is no global clock in a distributed operating system.
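Lamport's two rules, increment the clock on each local event or send, and on receive set it to the maximum of the local and received timestamps plus one, fit in a short sketch. The two-process scenario below is invented for illustration.

```python
# A minimal sketch of Lamport's logical clock rules.

class LamportClock:
    def __init__(self):
        self.time = 0

    def tick(self):
        """Advance the clock for a local event or a message send."""
        self.time += 1
        return self.time

    def receive(self, sender_time):
        """On receiving a message, jump past the sender's timestamp."""
        self.time = max(self.time, sender_time) + 1
        return self.time

p1, p2 = LamportClock(), LamportClock()
t = p1.tick()         # p1 sends a message stamped 1
print(p2.receive(t))  # p2's clock becomes max(0, 1) + 1 = 2
```

These rules guarantee that if event a happened before event b, then a's timestamp is smaller than b's, though not the converse, which is the gap vector clocks close.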
VECTOR CLOCK:
A vector clock is an algorithm that generates a partial ordering of events
and detects causality violations in a distributed system. Vector clocks
extend scalar (Lamport) time to provide a causally consistent view of the
distributed system: they can detect whether one event causally preceded
another, capturing all the causal relationships. The algorithm labels every
process with a vector (a list of integers) holding one entry for the local
clock of each process in the system, so for N processes each timestamp is a
vector of size N.
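The vector clock rules, increment your own entry on local events and sends, and on receive take the element-wise maximum before incrementing, can be sketched as follows; the two-process exchange is invented for illustration.

```python
# An illustrative vector clock for N processes: one integer entry per process.

class VectorClock:
    def __init__(self, n, pid):
        self.clock = [0] * n  # one entry per process in the system
        self.pid = pid        # which entry belongs to this process

    def tick(self):
        """Advance this process's own entry for a local event or send."""
        self.clock[self.pid] += 1
        return list(self.clock)

    def receive(self, other):
        """Merge an incoming timestamp element-wise, then tick locally."""
        self.clock = [max(a, b) for a, b in zip(self.clock, other)]
        self.clock[self.pid] += 1
        return list(self.clock)

p0, p1 = VectorClock(2, 0), VectorClock(2, 1)
msg = p0.tick()         # p0's clock becomes [1, 0] and is sent with the message
print(p1.receive(msg))  # p1 merges and ticks: [1, 1]
```

Comparing two vectors element-wise reveals whether one event causally precedes the other or whether they are concurrent, which a single Lamport scalar cannot express.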
LAMPORT TIMESTAMPS:
Lamport timestamps, also known as the logical clock algorithm, is a simple
algorithm that determines the order of events in a distributed computer
system. Leslie Lamport proposed the algorithm in the 1970s and it has been used
in almost all distributed systems since then.
