TRIBHUVAN UNIVERSITY

EVEREST INNOVATIVE COLLEGE
SOALTEEMODE, KATHMANDU
NEPAL

LAB REPORTS ON

OPERATING SYSTEM

Date:- ……………………..   Total No. of Experiments:- 08

Submitted By:                                       Submitted To:

Name: Apeksha Kafle                                 Department of BCA

Roll No: 1                                          (Er. Saurabh Karn; Lecturer/Supervisor)

Faculty: Humanities

Year/Part: II/II
TRIBHUVAN UNIVERSITY
EVEREST INNOVATIVE COLLEGE
SOALTEEMODE, KATHMANDU

NEPAL

Subject: Operating System


LAB REPORT ON
“INSTALLING RHEL SERVER WITH GUI USING VMWARE”

Date:- ……….…………     Experiment No:- 01


Submitted By:                Submitted To:
Name: Apeksha Kafle             Department of BCA
Roll No: 1          (Er. Saurabh Karn; Lecturer/Supervisor)
Faculty: Humanities
Year/Part: II/II
LAB NO: 1
TITLE: “INSTALLING RHEL SERVER WITH GUI USING VMWARE”
OBJECTIVES:
To install the RHEL server with GUI using VMware

BACKGROUND THEORY:
VMware: VMware provides cloud computing and virtualization software and services. It
presents a completely virtualized set of hardware to the guest operating system: VMware
software virtualizes the hardware for a video adapter, a network adapter, and hard disk
adapters, while the host provides pass-through drivers for guest USB, serial, and parallel
devices.
Linux Kernel: It is a free, open-source, monolithic, modular and multitasking operating
system kernel that resembles the UNIX system.
RHEL: RHEL is an acronym for Red Hat Enterprise Linux, a Linux distribution developed
by Red Hat for the commercial market. RHEL is open source: we can view its source code,
download it and build our own customized versions. Notable Linux distros derived from
RHEL include CentOS, Oracle Enterprise Linux, Scientific Linux and Pie Box Enterprise
Linux.

OBSERVATION & FINDINGS:


DISCUSSION: In this lab session, we became familiar with VMware and RHEL Linux. We
installed VMware and then installed RHEL on top of it with a GUI, enabling the development
tools that set up gcc in Linux. We also studied how to specify memory during installation and
how to enable the GUI.

CONCLUSION: Hence, we installed the RHEL server with GUI using VMware.


TRIBHUVAN UNIVERSITY
EVEREST INNOVATIVE COLLEGE
SOALTEEMODE, KATHMANDU

NEPAL

Subject: Operating System


LAB REPORT ON
“PROGRAM TO CREATE MULTI-THREADED PROCESS”
Date:- …………………       Experiment No:- 02
Submitted By:                Submitted To:
Name: Apeksha Kafle             Department of BCA
Roll No: 1          (Er. Saurabh Karn; Lecturer/Supervisor)
Faculty: Humanities
Year/Part: II/II
LAB NO: 2

TITLE: “PROGRAM TO CREATE MULTI-THREADED PROCESS”


OBJECTIVE:
i. To learn to work with threads
ii. To create a multi-threaded process.

BACKGROUND THEORY:
Thread: A thread is a single sequential stream within a process. Because threads have some
of the properties of processes, they are sometimes called lightweight processes.
Multithreading: Threads are a popular way to improve applications through parallelism. For
example, in a browser, multiple tabs can be different threads; MS Word uses multiple
threads, one thread to format the text, another thread to process inputs, and so on.
Threads operate faster than processes for the following reasons:
1) Thread creation is much faster.
2) Context switching between threads is much faster.
3) Threads can be terminated easily.
4) Communication between threads is faster.
Pthread: The POSIX thread libraries are a standards-based thread API for C/C++ that allows
one to spawn a new concurrent flow of execution. It is most effective on multi-processor or
multi-core systems, where the flow can be scheduled to run on another processor, gaining
speed through parallel or distributed processing. Threads require less overhead than forking
or spawning a new process because the system does not initialize a new virtual memory
space and environment for the process.

OBSERVATIONS AND FINDINGS


Source Code
Output
DISCUSSION
We created a multi-threaded process using Pthreads. We observed how threads allow the
execution of multiple parts of a program at the same time.

CONCLUSION
Hence, we created the multithreaded process.
TRIBHUVAN UNIVERSITY
EVEREST INNOVATIVE COLLEGE
SOALTEEMODE, KATHMANDU

NEPAL

Subject: Operating System


LAB REPORT ON
“PROGRAM TO IMPLEMENT IPC MECHANISM USING SHARED
MEMORY & MESSAGE PASSING”

Date:- ………………….    Experiment No:- 03


Submitted By:                Submitted To:
Name: Apeksha Kafle             Department of BCA
Roll No: 1          (Er. Saurabh Karn; Lecturer/Supervisor)
Faculty: Humanities
Year/Part: II/II

LAB NO: 3
TITLE: “PROGRAM TO IMPLEMENT IPC MECHANISM USING
SHARED MEMORY & MESSAGE PASSING”

OBJECTIVE: To implement IPC mechanism using shared memory and message passing
BACKGROUND THEORY:
Inter-process communication (IPC) is a mechanism that allows processes to communicate with
each other and synchronize their actions. The communication between these processes can be
seen as a method of co-operation between them. Processes can communicate with each other
through both:

 Shared Memory
 Message passing

The shared memory in the shared-memory model is memory that can be simultaneously
accessed by multiple processes. This is done so that the processes can communicate with each
other.
The message-passing model allows multiple processes to read and write data to a message
queue without being connected to each other. Messages are stored in the queue until their
recipient retrieves them. Message queues are quite useful for interprocess communication and
are used by most operating systems.

OBSERVATIONS AND FINDINGS


Shared memory
Source code
Output
Message passing
Source code
Output
DISCUSSION
We implemented an interprocess communication mechanism using two methods: the shared
memory method, where the processes share a common memory region, and the message
passing method, where processes exchange data through a queue without a direct connection.

CONCLUSION
Hence, we implemented IPC mechanism using shared memory and message passing.

TRIBHUVAN UNIVERSITY
EVEREST INNOVATIVE COLLEGE
SOALTEEMODE, KATHMANDU

NEPAL
Subject: Operating System
LAB REPORT ON
“PROGRAM TO SIMULATE PRODUCER CONSUMER PROBLEM
USING SEMAPHORE”
Date:- …………………    Experiment No:- 04
Submitted By:                Submitted To:
Name: Apeksha Kafle             Department of BCA
Roll No: 1          (Er. Saurabh Karn; Lecturer/Supervisor)
Faculty: Humanities
Year/Part: II/II

LAB NO: 4

TITLE: “PROGRAM TO SIMULATE PRODUCER CONSUMER
PROBLEM USING SEMAPHORE”
OBJECTIVE: To simulate Producer Consumer problem using Semaphore

BACKGROUND THEORY:
The producer-consumer problem is an example of a multi-process synchronization problem.
The problem describes two processes, the producer and the consumer, that share a common
fixed-size buffer used as a queue.
The producer’s job is to generate data, put it into the buffer, and start again. At the same time,
the consumer is consuming the data (i.e., removing it from the buffer), one piece at a time.
Problem: Given the common fixed-size buffer, the task is to make sure that the producer can’t
add data into the buffer when it is full and the consumer can’t remove data from an empty
buffer.
Solution: The producer is to either go to sleep or discard data if the buffer is full. The next
time the consumer removes an item from the buffer, it notifies the producer, who starts to fill
the buffer again. In the same manner, the consumer can go to sleep if it finds the buffer to be
empty. The next time the producer puts data into the buffer, it wakes up the sleeping
consumer.

OBSERVATIONS AND FINDINGS


Output

DISCUSSION
We performed the producer consumer problem and found out that it is a multi-process
synchronization problem. We also came to know about the fact that producer produces the item
and enters them into the buffer and consumer removes the item from the buffer and consumes
them. We used semaphore to solve the problem of producer consumer.

CONCLUSION
Hence, we simulated producer consumer problem using semaphore.
TRIBHUVAN UNIVERSITY
EVEREST INNOVATIVE COLLEGE
SOALTEEMODE, KATHMANDU

NEPAL

Subject: Operating System


LAB REPORT ON
“PROGRAM TO SIMULATE AND FIND AVERAGE TAT & WAITING
TIME FOR PREEMPTIVE & NON PREEMPTIVE SCHEDULING
ALGORITHMS: FCFS, SJF, PRIORITY & ROUND-ROBIN”

Date:- ……………….. Experiment No:- 05


Submitted By:                Submitted To:
Name: Apeksha Kafle             Department of BCA
Roll No: 1          (Er. Saurabh Karn; Lecturer/Supervisor)
Faculty: Humanities
Year/Part: II/II
LAB NO: 5

TITLE: “PROGRAM TO SIMULATE AND FIND AVERAGE TAT &
WAITING TIME FOR PREEMPTIVE & NON PREEMPTIVE
SCHEDULING ALGORITHMS: FCFS, SJF, PRIORITY & ROUND-
ROBIN”

OBJECTIVE: To simulate and find the average TAT and waiting time for preemptive and
non-preemptive scheduling algorithms: FCFS, SJF, Priority and Round Robin

BACKGROUND THEORY:
Turnaround Time: It is the time interval between the submission of a process and its
completion.
Waiting Time: It is the difference between turnaround time and burst time.

Preemptive Scheduling: It is a CPU scheduling technique that works by dividing CPU time
into slots assigned to a given process. When the burst time of the process is greater than the
CPU cycle, it is placed back into the ready queue and will execute in the next chance.
Non-preemptive Scheduling: It is a CPU scheduling technique in which the process takes the
resource (CPU time) and holds it till the process gets terminated or is pushed to the waiting
state. No process is interrupted until it is completed, and after that the processor switches to
another process.

First Come, First Served (FCFS): It is also known as First In, First Out (FIFO); it is the
CPU scheduling algorithm in which the CPU is allocated to the processes in the order they are
queued in the ready queue.
Shortest Job First (SJF): It is an algorithm in which the process having the smallest
execution time is chosen for the next execution.
Priority Scheduling: It is a method of scheduling processes that is based on priority. In this
algorithm, the scheduler selects the tasks to work on as per the priority. The processes with
higher priority are carried out first, whereas jobs with equal priorities are carried out on a
round-robin or FCFS basis.
Round Robin: It is a CPU scheduling algorithm that is designed especially for time-sharing
systems. It is similar to the FCFS scheduling algorithm with one change: in Round Robin,
processes are bounded by a quantum time slice.
OBSERVATIONS AND FINDINGS
FCFS
Source code
Output
SJF
Source code
Output

Round Robin
Source code
Output

Priority
Source code
Output

DISCUSSION
We computed the Turnaround Time and Waiting Time. We also discussed the FCFS, SJF,
Priority and Round Robin methods for both preemptive and non-preemptive scheduling.

CONCLUSION
Hence, we simulated and found the average TAT and waiting time for preemptive and non-
preemptive scheduling algorithms: FCFS, SJF, Priority and Round Robin.
TRIBHUVAN UNIVERSITY
EVEREST INNOVATIVE COLLEGE
SOALTEEMODE, KATHMANDU

NEPAL

Subject: Operating System


LAB REPORT ON
“PROGRAM TO SIMULATE CONTIGUOUS MEMORY ALLOCATION
TECHNIQUES: WORST FIT, BEST FIT, & FIRST FIT”

Date:-………………….    Experiment No:- 06


Submitted By:                Submitted To:
Name: Apeksha Kafle             Department of BCA
Roll No: 1          (Er. Saurabh Karn; Lecturer/Supervisor)
Faculty: Humanities
Year/Part: II/II
LAB NO: 6
TITLE: “PROGRAM TO SIMULATE CONTIGUOUS MEMORY ALLOCATION
TECHNIQUES: WORST FIT, BEST FIT, & FIRST FIT”

OBJECTIVES:
To simulate contiguous memory allocation techniques: Worst Fit, Best Fit and First Fit

BACKGROUND THEORY:
Contiguous memory allocation: It is a method in which a single contiguous section of
memory is allocated to a process or file needing it. We can implement contiguous memory
allocation by dividing the memory into fixed-size partitions.
Worst fit: It allocates a process to the largest sufficient partition among the freely available
partitions in the main memory. If a large process arrives at a later stage, memory may not
have space to accommodate it.

Best fit: It allocates the process to the smallest sufficient partition among the free available
partitions.

First fit: It allocates the first sufficient partition found from the top of main memory.

OBSERVATION & FINDINGS:


Best fit
Source code
Output
First fit
Source code
Output

Worst fit
Source code
Output

DISCUSSION: We performed various memory allocation techniques: worst fit, best fit and
first fit. We found that first fit is faster than the others. Best fit conserves available space for
large allocations. Worst fit locates the largest available free portion so that the portion left
over will be big enough to be useful; it is the reverse of best fit.

CONCLUSION: Hence, we simulated contiguous memory allocation techniques: Worst Fit,
Best Fit and First Fit.
TRIBHUVAN UNIVERSITY
EVEREST INNOVATIVE COLLEGE
SOALTEEMODE, KATHMANDU

NEPAL

Subject: Operating System


LAB REPORT ON
“PROGRAM TO SIMULATE PAGE REPLACEMENT ALGORITHMS: FIFO, LRU, &
LFU”

Date:-………………….     Experiment No:- 07


Submitted By:                Submitted To:
Name: Apeksha Kafle             Department of BCA
Roll No: 1          (Er. Saurabh Karn; Lecturer/Supervisor)
Faculty: Humanities
Year/Part: II/II

LAB NO: 7
TITLE: “PROGRAM TO SIMULATE PAGE REPLACEMENT ALGORITHMS: FIFO,
LRU, & LFU”

OBJECTIVES:
To simulate page replacement algorithms: FIFO, LRU and LFU
BACKGROUND THEORY:
Page replacement algorithm: It decides which page needs to be replaced when a new page
comes in.
FIFO: In this algorithm, the operating system keeps track of all pages in memory in a queue,
with the oldest page at the front of the queue. When a page needs to be replaced, the page at
the front of the queue is selected for removal.
LRU: In LRU, whenever page replacement happens, the page which has not been used for the
longest amount of time is replaced.
LFU: It is a caching algorithm in which the least frequently used cache block is removed
whenever the cache overflows.

OBSERVATION & FINDINGS:


Page replacement
Source code
Output
DISCUSSION: We discussed page replacement algorithms. First In First Out, Least Recently
Used and Least Frequently Used were the algorithms we used for page replacement. LRU is
much more likely to keep the frequently-used items in memory. FIFO is simple and easy to
understand and implement.

CONCLUSION: Hence, we simulated page replacement algorithms: FIFO, LRU and LFU.
TRIBHUVAN UNIVERSITY
EVEREST INNOVATIVE COLLEGE
SOALTEEMODE, KATHMANDU

NEPAL

Subject: Operating System


LAB REPORT ON
“PROGRAM TO SIMULATE DISK SCHEDULING ALGORITHMS: FCFS, SCAN, & C-
SCAN”

Date:-………………..       Experiment No:- 08


Submitted By:                Submitted To:
Name: Apeksha Kafle             Department of BCA
Roll No: 1          (Er. Saurabh Karn; Lecturer/Supervisor)
Faculty: Humanities
Year/Part: II/II
LAB NO: 8
TITLE: “PROGRAM TO SIMULATE DISK SCHEDULING ALGORITHMS: FCFS,
SCAN, & C-SCAN”

OBJECTIVES:
To simulate disk scheduling algorithms: FCFS, SCAN and C-SCAN

BACKGROUND THEORY:
Disk Scheduling Algorithms: Disk scheduling is done by operating systems to schedule I/O
requests arriving for the disk. Disk scheduling is also known as I/O scheduling.
FCFS: The simplest form of disk scheduling is, of course, the first-come, first-served (FCFS)
algorithm. This algorithm is intrinsically fair, but it generally does not provide the fastest
service.

SCAN: In the SCAN algorithm, the disk arm starts at one end of the disk and moves toward
the other end, servicing requests as it reaches each cylinder, until it gets to the other end of the
disk. At the other end, the direction of head movement is reversed, and servicing continues.
The head continuously scans back and forth across the disk. The SCAN algorithm is
sometimes called the elevator algorithm, since the disk arm behaves just like an elevator in a
building, first servicing all the requests going up and then reversing to service requests the
other way.

C-SCAN: Circular SCAN (C-SCAN) scheduling is a variant of SCAN designed to provide a
more uniform wait time. Like SCAN, C-SCAN moves the head from one end of the disk to
the other, servicing requests along the way. When the head reaches the other end, however, it
immediately returns to the beginning of the disk without servicing any requests on the return
trip. The C-SCAN scheduling algorithm essentially treats the cylinders as a circular list that
wraps around from the final cylinder to the first one.

OBSERVATION & FINDINGS:

FCFS
Source code
Output
SCAN
Source code
Output
C-SCAN
Source code
Output
DISCUSSION: We performed various disk scheduling algorithms. FCFS is the simplest disk
scheduling algorithm. In FCFS, every process eventually gets a chance to execute, so no
starvation occurs, and there is low variance in waiting time and response time. Compared to
the SCAN algorithm, C-SCAN reduces the waiting time for the cylinders just visited by the
head and provides a more uniform waiting time.

CONCLUSION: Hence, we simulated disk scheduling algorithms: FCFS, SCAN and C-
SCAN.
