
Q.1 a) Explain the Unix file system. What are the different file types available with UNIX?
The Unix file system is a hierarchical structure that organizes files and
directories in a tree-like structure. The root directory is denoted by a
forward slash (/) and all other directories and files are located
underneath it.

In Unix, a file is a collection of data stored on a storage device such as a hard disk or SSD. A directory, also known as a folder, is a container for files and other directories.

Unix supports several different file types, including the following (see the example after the list):

1. Regular files - These are the most common type of file in Unix
and contain data such as text, images, and executable code.
2. Directories - These are containers for files and other directories.
3. Symbolic links - These are files that point to another file or
directory.
4. Device files - These represent hardware devices such as printers
or disk drives.
5. Named pipes - These are used for inter-process communication
between programs.
6. Sockets - These are used for network communication between
programs.
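For example, the first character of ls -l output identifies the file type, and symbolic links and named pipes can be created directly from the shell. This is only a rough sketch: link_to_passwd and mypipe are made-up names, and the listings shown in the comments are illustrative rather than exact.

    ls -ld /etc/passwd /tmp /dev/null
    # -rw-r--r--  ... /etc/passwd    (- = regular file)
    # drwxrwxrwt  ... /tmp           (d = directory)
    # crw-rw-rw-  ... /dev/null      (c = character device file)

    ln -s /etc/passwd link_to_passwd   # create a symbolic link
    mkfifo mypipe                      # create a named pipe
    file link_to_passwd mypipe         # report the type of each file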

Each file in Unix has a set of permissions that determines who can
read, write, or execute the file. The permissions are divided into three
categories: user, group, and other, and can be set using the chmod
command.
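As a small illustrative sketch (notes.txt and the listing shown are made up):

    ls -l notes.txt
    # -rw-r--r-- 1 alice staff 1024 Jan 10 09:30 notes.txt
    #  rw-  r--  r--   -> user / group / other permission triplets

    chmod u+x notes.txt     # add execute permission for the owning user
    chmod 644 notes.txt     # octal form: rw- for user, r-- for group and other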
b) How are file systems organized in UNIX? Explain with an example.
Unix file systems are organized hierarchically, with the root directory at
the top of the hierarchy, and all other directories and files located
beneath it. The root directory is denoted by a forward slash (/), and the directory names in a path are separated by forward slashes.

For example, consider a Unix file system that contains a directory named "home" that contains two subdirectories named "user1" and "user2". The file system hierarchy would look like this:

/
|--home
   |--user1
   |--user2

In this example, the "home" directory is a subdirectory of the root directory, and "user1" and "user2" are subdirectories of the "home" directory.

Each directory in the file system can contain files and other
directories. For example, the "user1" directory might contain a file
named "file1.txt" and a subdirectory named "documents", which
contains several other files and subdirectories. The file system
hierarchy would then look like this:

/
|--home
   |--user1
   |  |--file1.txt
   |  |--documents
   |     |--document1.doc
   |     |--document2.doc
   |     |--subdirectory1
   |     |  |--subdocument1.doc
   |     |  |--subdocument2.doc
   |     |--subdirectory2
   |        |--subdocument3.doc
   |        |--subdocument4.doc
   |--user2

In this example, the "documents" subdirectory contains several files and subdirectories, which can themselves contain additional files and subdirectories. This hierarchical organization allows for efficient organization and retrieval of files and directories within the file system.
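As a rough sketch, commands like the following would create and display the layout above (creating directories under /home normally requires root privileges, so a scratch directory could be substituted):

    mkdir -p /home/user1/documents/subdirectory1 \
             /home/user1/documents/subdirectory2 \
             /home/user2
    touch /home/user1/file1.txt /home/user1/documents/document1.doc

    ls -R /home          # list the hierarchy recursively
    find /home -type d   # print every directory under /home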

Q.2 a) Explain process management along with relevant commands used with it in Linux.
Process management in Linux involves creating, monitoring, and
controlling the execution of processes in the operating system. A
process is an instance of a running program, and Linux provides
several commands to manage processes.

1. ps - This command is used to display information about the currently running processes on the system, such as their process ID (PID), CPU usage, and memory usage.
2. top - This command provides real-time information about the
system's processes, including their CPU and memory usage. It is
useful for monitoring system performance.
3. kill - This command is used to terminate a running process. It
sends a signal to the process, which can be used to gracefully
shut down the process or force it to terminate immediately.
4. nice - This command is used to adjust the priority of a process.
Processes with a higher priority are given more CPU time than
processes with a lower priority.
5. renice - This command is used to change the priority of a running
process.
6. bg - This command is used to resume a suspended process in the background, freeing the terminal for other foreground work.
7. fg - This command is used to bring a backgrounded process
back to the foreground.
8. nohup - This command is used to run a process in the
background, even if the terminal session is closed. It is often
used for long-running processes that should continue running
even if the user logs out.
9. jobs - This command is used to display a list of the currently
running or suspended jobs.
10. ps aux | grep processname - This command is used to find a
running process by name, where "processname" is the name of
the process.

Overall, process management in Linux provides a flexible and powerful way to manage the execution of programs on the system, and these commands can be used to monitor and control the system's resources effectively.
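A short illustrative session tying several of these commands together (the program names and the PID 12345 are hypothetical):

    ps aux | grep firefox         # find a process by name
    nice -n 10 ./long_build.sh &  # start a job in the background at lower priority
    jobs                          # list background and suspended jobs
    fg %1                         # bring job 1 back to the foreground
    # Ctrl+Z suspends it again; bg %1 resumes it in the background

    renice 5 -p 12345             # change the priority of a running process
    kill -TERM 12345              # ask a process to terminate gracefully
    nohup ./backup.sh &           # keep a job running after the terminal closes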

b) Explain the grep command using the c, l & v options. Explain with examples.
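A brief sketch of these options (assuming the intended flags are -c, -l, and -v; the file names are hypothetical):

    grep -c "error" logfile.txt   # -c: print only the count of matching lines
    grep -l "main" *.c            # -l: list the names of files that contain a match
    grep -v "DEBUG" logfile.txt   # -v: print only the lines that do NOT match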
Q.3 a) Explain the advantage of executing processes in the
background.
Executing processes in the background means running a process
without keeping it in the foreground or active on the terminal. Instead,
it is sent to the background, allowing the user to continue using the
terminal for other tasks while the process runs in the background.
There are several advantages to executing processes in the
background:

1. Multitasking: Running processes in the background allows users to perform multiple tasks simultaneously, without waiting for a process to complete before moving on to another task.
2. Efficiency: Running processes in the background can improve
efficiency because it enables users to work on multiple tasks at
once. For example, a user can compile code while continuing to
work on another task in the foreground.
3. Flexibility: Running processes in the background provides greater
flexibility and control over how a system's resources are used.
Users can prioritize and control the execution of various
processes on the system, allowing them to optimize
performance and reduce wait times.
4. Convenience: Running processes in the background is
convenient, as it allows users to continue using the terminal for
other tasks without being interrupted by the output of the
running process.

Overall, executing processes in the background provides a significant advantage in terms of multitasking, efficiency, flexibility, and convenience, making it an essential feature of modern operating systems.
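For example (the command and file names are illustrative), a long compilation can be pushed into the background while other work continues in the foreground:

    gcc -O2 big_project.c -o big_project &   # '&' runs the build in the background
    jobs                                     # confirm it is running as job %1
    vim notes.txt                            # keep working in the foreground
    fg %1                                    # reattach to the build when ready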

b) Explain Demand Paging with a suitable example. What are the advantages & disadvantages of demand paging?
Demand paging is a memory management technique used in modern
operating systems, where pages of data are loaded into memory on an
as-needed basis. The idea behind demand paging is that not all of a
program's data needs to be loaded into memory at once, as only a
portion of the program's code is likely to be executed at any given
time.

In demand paging, a program's pages are initially kept in secondary storage (such as a hard disk) and only loaded into physical memory when they are needed. When a program attempts to access a page that is not currently in physical memory, a page fault occurs, triggering the operating system to load the necessary page into memory from secondary storage. This process is called demand paging because pages are loaded into memory only when they are demanded by the program.

Here's an example of how demand paging works:

Suppose a program is running on a computer with 4GB of memory, but the program requires 8GB of memory to store all its data. In this case, the program's pages would be stored in secondary storage (e.g., a hard disk) and loaded into memory only when needed. As the program executes, the operating system will load pages into memory on an as-needed basis. If the program attempts to access a page that is not currently in memory, a page fault occurs, and the operating system loads the necessary page from secondary storage into memory.
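On Linux, page-fault activity can be observed from the shell; a rough sketch, assuming procps ps and GNU time installed at /usr/bin/time:

    ps -o pid,min_flt,maj_flt,cmd -p $$                     # minor/major faults of the current shell
    /usr/bin/time -v sleep 1 2>&1 | grep -i 'page faults'   # faults reported for a whole command run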

Advantages of demand paging:

1. Efficient use of memory: Demand paging allows programs to use memory more efficiently by loading only the pages that are needed, reducing memory waste.
2. Faster program loading: Demand paging enables programs to
load faster because they don't need to load all pages into
memory initially. Only the pages that are needed are loaded,
reducing startup times.
3. More efficient use of secondary storage: Demand paging
reduces the amount of memory required to store programs,
which, in turn, reduces the amount of secondary storage required
to store those programs.

Disadvantages of demand paging:

1. Overhead: Demand paging introduces overhead into the memory management process, as the operating system must continually monitor page usage and load pages into memory when needed.
2. Increased I/O: Demand paging requires frequent I/O operations
to load pages into memory, which can slow down program
execution times.
3. Page faults: Demand paging can lead to page faults, which occur
when a program attempts to access a page that is not currently
in memory. Page faults can significantly reduce program
performance and cause delays in execution.

Overall, demand paging is a useful technique for managing memory in modern operating systems, but it also has its limitations and can introduce additional overhead and delays into program execution.

Q.4 a) Write short notes on various page replacement strategies.


Page replacement strategies are techniques used by operating
systems to manage virtual memory when the physical memory is full.
When a page fault occurs, and the physical memory is full, the
operating system must choose which page to remove from memory to
make room for the new page. Here are some common page
replacement strategies:
1. First-In-First-Out (FIFO): In this strategy, the oldest page in
memory is selected for replacement. This method is simple to
implement but can result in poor performance because it does
not consider the usefulness of the page.
2. Least Recently Used (LRU): In this strategy, the page that has not
been accessed for the longest time is selected for replacement.
This method is more effective than FIFO because it considers
the usefulness of the page, but it can be computationally
expensive to implement.
3. Optimal: In this strategy, the page that will not be used for the
longest time in the future is selected for replacement. This
strategy is optimal in that it will always result in the fewest
number of page faults, but it is not practical to implement
because the future use of a page is unknown.
4. Clock (also known as Second-Chance): In this strategy, each page has a reference ("use") bit and the resident pages are kept in a circular list. When a replacement is needed, the clock hand sweeps the list: a page whose use bit is 1 is given a second chance (its bit is cleared and the hand moves on), and the first page found with a use bit of 0 is replaced. This method is simple to implement and strikes a balance between LRU and FIFO.
5. Random: In this strategy, a random page is selected for
replacement. This method is simple to implement, but it can
result in poor performance because it does not consider the
usefulness of the page.

Overall, the choice of a page replacement strategy depends on the specific requirements of the system and the tradeoffs between performance and computational overhead.

b) What is Belady's Anomaly? In which algorithm does it occur?


Belady's Anomaly is a phenomenon that can occur in page
replacement algorithms. It is characterized by an increase in the
number of page faults when the number of available frames in
memory is increased.

Belady's Anomaly occurs in some page replacement algorithms, such as the First-In-First-Out (FIFO) algorithm. In the FIFO algorithm, the pages that were loaded into memory first are the first ones to be removed when a page fault occurs. It is possible for increasing the number of frames allocated to a process to lead to more page faults, even though having more frames available in memory should reduce the number of page faults. The classic demonstration uses the reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5, which under FIFO causes 9 page faults with three frames but 10 page faults with four frames.

Belady's Anomaly happens when a page that was removed from memory would have been needed again in the future had it remained in memory. Adding more frames can increase the likelihood of this happening. The anomaly arises because the removal of a page can change the sequence in which pages are removed in the future, and the new sequence may be less favorable.

Belady's Anomaly is a theoretical concept that highlights the limitations of some page replacement algorithms. In practice, it rarely causes serious problems, and modern operating systems favour algorithms such as LRU and its Clock approximation, which are far less prone to this anomaly (LRU, like OPT, is a stack algorithm and does not exhibit it at all).

Q.5 a) Compare FIFO & LRU page replacement algorithms.


FIFO (First-In-First-Out) and LRU (Least Recently Used) are two
commonly used page replacement algorithms in operating systems.
Here are some differences between them:
1. Page Replacement Criteria: In FIFO, the page that was loaded
into memory first is the first one to be removed when a page
fault occurs. In LRU, the page that has not been accessed for the
longest time is selected for replacement.
2. Performance: LRU usually outperforms FIFO in terms of reducing the number of page faults. This is because LRU takes into account how recently each page was used and tries to keep the most recently used pages in memory.
3. Implementation: FIFO is a simple and easy-to-implement
algorithm, whereas LRU is more complex and computationally
expensive. LRU requires keeping track of the time when each
page was last accessed, which requires extra memory and
processing time.
4. Overhead: FIFO has low overhead because it does not require
keeping track of page usage or access times. LRU has higher
overhead because it requires updating the access time of each
page each time it is accessed.
5. Limitations: FIFO can suffer from Belady's Anomaly, which is
when increasing the number of frames allocated to a process
can result in more page faults. LRU is less prone to this problem
but can still have issues if the access pattern of the pages does
not follow a "least recently used" pattern.

Overall, LRU is a more sophisticated and effective page replacement algorithm than FIFO. However, it is also more complex to implement and has higher overhead. In practice, modern operating systems use a combination of different page replacement algorithms, including LRU and FIFO, to balance performance and complexity.

b) Explain the LRU, FIFO, and OPT page replacement policies for the given page sequence. Page frame size is 4.
Page sequence - 2, 3, 4, 2, 1, 3, 7, 5, 4, 3, 2, 3, 1
Calculate page hits & misses.
LRU, FIFO, and OPT are three common page replacement algorithms
used in operating systems to manage virtual memory.

● LRU (Least Recently Used) - In this algorithm, the page that has
not been used for the longest period of time is replaced when a
page fault occurs.
● FIFO (First-In, First-Out) - In this algorithm, the page that has
been in memory the longest is replaced when a page fault
occurs.
● OPT (Optimal) - In this algorithm, the page that will not be used
for the longest period of time in the future is replaced when a
page fault occurs.

Now let's apply these algorithms to the given page sequence with a
page frame size of 4.

Page sequence: 2, 3, 4, 2, 1, 3, 7, 5, 4, 3, 2, 3, 1

Page frame size: 4

We will show the contents of the four page frames (F1, F2, F3, F4) after each reference, using a dash (-) for an empty frame, and mark each reference as a hit or a miss.
1. LRU:

Initially, all page frames are empty.

Ref   F1   F2   F3   F4   Result
2     2    -    -    -    Miss
3     2    3    -    -    Miss
4     2    3    4    -    Miss
2     2    3    4    -    Hit
1     2    3    4    1    Miss
3     2    3    4    1    Hit
7     2    3    7    1    Miss  (4 is the least recently used page)
5     5    3    7    1    Miss  (2 is the least recently used page)
4     5    3    7    4    Miss  (1 is the least recently used page)
3     5    3    7    4    Hit
2     5    3    2    4    Miss  (7 is the least recently used page)
3     5    3    2    4    Hit
1     1    3    2    4    Miss  (5 is the least recently used page)

Page hits: 4

Page misses: 9

2. FIFO:

Initially, all page frames are empty.

Ref   F1   F2   F3   F4   Result
2     2    -    -    -    Miss
3     2    3    -    -    Miss
4     2    3    4    -    Miss
2     2    3    4    -    Hit
1     2    3    4    1    Miss
3     2    3    4    1    Hit
7     7    3    4    1    Miss  (2 was loaded earliest)
5     7    5    4    1    Miss  (3 was loaded earliest)
4     7    5    4    1    Hit
3     7    5    3    1    Miss  (4 was loaded earliest)
2     7    5    3    2    Miss  (1 was loaded earliest)
3     7    5    3    2    Hit
1     1    5    3    2    Miss  (7 was loaded earliest)

Page hits: 4

Page misses: 9

3. OPT:

Initially, all page frames are empty.

Ref   F1   F2   F3   F4   Result
2     2    -    -    -    Miss
3     2    3    -    -    Miss
4     2    3    4    -    Miss
2     2    3    4    -    Hit
1     2    3    4    1    Miss
3     2    3    4    1    Hit
7     2    3    4    7    Miss  (1 is not needed for the longest time)
5     2    3    4    5    Miss  (7 is never referenced again)
4     2    3    4    5    Hit
3     2    3    4    5    Hit
2     2    3    4    5    Hit
3     2    3    4    5    Hit
1     2    3    4    1    Miss  (5 is never referenced again)

Page hits: 6

Page misses: 7

As the tables show, the OPT algorithm produces the fewest page faults and is therefore optimal (for this particular reference string, LRU and FIFO happen to give identical counts). However, it is not always possible to use OPT, as it requires future knowledge of the page references; LRU and FIFO are the page replacement algorithms most commonly used in practice.
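As a quick cross-check, a small shell/awk sketch (with the reference string and frame count from this question hard-coded) that counts FIFO page faults:

    echo "2 3 4 2 1 3 7 5 4 3 2 3 1" | awk -v FRAMES=4 '
    {
      faults = 0; n = 0                 # n = number of frames currently in use
      for (i = 1; i <= NF; i++) {
        p = $i
        if (!(p in resident)) {         # page fault: page not in memory
          faults++
          if (n == FRAMES) {            # memory full: evict the oldest page (FIFO)
            delete resident[queue[head]]; head++
          } else n++
          queue[tail++] = p; resident[p] = 1
        }
      }
      print "page faults:", faults, " hits:", NF - faults   # prints 9 faults, 4 hits
    }'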
