1 - UNIT - I - Operating System Overview and Process Management


OPERATING SYSTEM

(16BT31501)

1. Operating Systems overview and Process Management

2. Synchronization and Deadlocks


3. Memory Management
4. Storage Management

5. I/O Systems and Protection

Dr. ASADI SRINIVASULU


Professor of IT
Unit - I
1. Operating systems, Operations, Distributed systems, Special purpose systems, Operating system services, System calls
2. Operating system structure.
3. Process Management: Process scheduling
4. Process Control Block
5. Inter process communication
6. Signals
7. Forks
8. Multithreading models, Threading issues
9. Scheduling criteria, Scheduling algorithms
10. Multilevel queue, Multilevel feedback queue

1. Operating systems, Operations, Distributed systems, Special purpose systems, Operating system services, System calls: An operating system is system software that manages computer hardware and software resources and provides common services for computer programs.

 An operating system is a program that acts as an interface between the software and the computer hardware.

 It is an integrated set of specialized programs used to manage overall resources and operations of the computer.

 It is a specialized software that controls and monitors the execution of all other programs that reside in the computer, including application programs and other system software.
Fig: OS lies between Hardware and User
 Examples of operating systems: Common desktop operating systems include:

1. Windows is Microsoft’s flagship operating system, the de facto standard for home and business computers. Introduced in 1985, the GUI-based OS has been released in many versions since then.

2. Mac OS is the operating system for Apple's Macintosh line of personal computers and workstations.

3. Unix is a multi-user operating system designed for flexibility and adaptability. Originally developed in the 1970s, Unix was one of the first operating systems to be written in the C language.

4. Linux is a Unix-like operating system that was designed to provide personal computer users a free or very low-cost alternative.
 Characteristics of Operating System: Here is a list of some
of the most prominent characteristic features of Operating
Systems
1. Memory Management
2. Processor Management
3. Device Management
4. File Management
5. Security
6. Job Accounting
7. Control Over System Performance
8. Interaction with the Operators
9. Error-detecting Aids
10. Coordination Between Other Software and Users
 Types of operating systems: A mobile OS allows smartphones, tablet PCs and other mobile devices to run applications and programs. Ex: Apple iOS, Google Android, BlackBerry OS and Windows 10 Mobile.

 An embedded operating system is specialized for use in the computers built into larger systems, such as cars, traffic lights, digital televisions, ATMs, airplane controls, point of sale (POS) terminals, digital cameras, GPS navigation systems, elevators, digital media receivers and smart meters.

 A network operating system (NOS) is a computer operating system that is designed primarily to support workstations, personal computers and, in some instances, older terminals that are connected on a local area network (LAN).

 A real-time operating system (RTOS) is an operating system that guarantees a certain capability within a specified time constraint. For example, an operating system might be designed to ensure that a certain object was available for a robot on an assembly line.
 Operating System services for applications:

1. Multitasking operating system
2. Sharing of internal memory
3. It handles input and output
4. Interactive user system
5. Batch jobs
6. Parallel processing
7. System calls service
8. Hardware and Software service
Fig: Operating System Services
 Distributed computing is a field of computer science that studies
distributed systems. A distributed system is a system whose
components are located on different networked computers, which
communicate and coordinate their actions by passing messages to
one another.

 System calls: A system call is a way for programs to interact with the operating system.
 A computer program makes a system call when it makes a request to the operating system's kernel. A system call provides the services of the operating system to user programs via the Application Program Interface (API).
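As an illustration (not from the original slides), the sketch below shows a user program requesting kernel services: directly through the POSIX write() system-call wrapper, and indirectly through the C library call printf(), which itself ends up issuing write(). It assumes a POSIX/Linux environment.

    /* syscall_demo.c - a minimal system-call sketch (assumes a POSIX/Linux system).
     * Compile: gcc syscall_demo.c -o syscall_demo
     */
    #include <stdio.h>      /* printf(): library API that uses write() internally  */
    #include <string.h>
    #include <unistd.h>     /* write(), getpid(): thin wrappers over system calls  */

    int main(void)
    {
        const char *msg = "hello via the write() system call\n";

        /* Direct system-call wrapper: ask the kernel to write to stdout (fd 1). */
        write(STDOUT_FILENO, msg, strlen(msg));

        /* Library API: printf() eventually makes the same write() system call. */
        printf("my process id (from the getpid() system call) is %d\n", (int)getpid());
        return 0;
    }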

Fig: System Calls
2. Operating System Structure: An operating system is a construct
that allows the user application programs to interact with
the system hardware.
 An easy way to do this is to create the operating system in parts.
Each of these parts should be well defined with clear inputs,
outputs and functions.

3) Process Management: A process is a program under execution.

 The OS must allocate resources to processes, enable processes to share and exchange information, protect the resources of each process from other processes and enable synchronization among processes.

 A process is a program in execution. For example, when we write a program in C and run it, the running instance is a process.

 A process is an ‘active’ entity, as opposed to a program, which is considered to be a ‘passive’ entity.

 A single program can create many processes when run multiple times; for example, when we open a .exe or binary file multiple times, multiple instances begin (multiple processes are created).
States of Process: A process is in one of the following states:

1. New: Newly created (or being-created) process.
2. Ready: After creation the process moves to the Ready state, i.e. the process is ready for execution.
3. Run: Currently running process in the CPU (only one process at a time can be under execution on a single processor).
4. Wait (or Block): When a process requests I/O access.
5. Complete (or Terminated): The process has completed its execution.
6. Suspended Ready: When the ready queue becomes full, some processes are moved to the suspended ready state.
7. Suspended Block: When the waiting queue becomes full.

 A Process Scheduler schedules different processes to be assigned to the CPU based on particular scheduling algorithms.

 There are six popular process scheduling algorithms; the first four are discussed here (multilevel queue and multilevel feedback queue scheduling are covered later in this unit):

1. First-Come, First-Served (FCFS) Scheduling
2. Shortest-Job-First (SJF) Scheduling
3. Priority Based Scheduling
4. Round Robin (RR) Scheduling

 These algorithms are either non-preemptive or preemptive.

 Non-preemptive algorithms are designed so that once a process enters the running state, it cannot be preempted until it completes its allotted time, whereas preemptive scheduling is based on priority, where the scheduler may preempt a low-priority running process at any time when a high-priority process enters the ready state.
1. First-Come, First-Served (FCFS) Scheduling: FCFS is an operating system process scheduling algorithm and a network routing management mechanism that automatically executes queued requests and processes in the order of their arrival.
 Jobs are executed on a first come, first served basis.
 It is a non-preemptive scheduling algorithm.
 Easy to understand and implement.
 Its implementation is based on a FIFO queue.
 Poor in performance, as the average wait time is high.

 Wait time of each process is as follows (Wait Time = Service Time - Arrival Time):

Process   Wait Time
P0        0 - 0 = 0
P1        5 - 1 = 4
P2        8 - 2 = 6
P3        16 - 3 = 13

 Average Wait Time: (0 + 4 + 6 + 13) / 4 = 5.75
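A small sketch (not part of the slides) that reproduces the FCFS waiting-time calculation above; the arrival and burst times are assumed from the worked example and the processes are taken to be already listed in arrival order.

    /* fcfs_wait.c - FCFS waiting-time sketch for the example above.
     * Assumes the processes are listed in order of arrival.
     */
    #include <stdio.h>

    int main(void)
    {
        int arrival[] = {0, 1, 2, 3};       /* P0..P3 arrival times */
        int burst[]   = {5, 3, 8, 6};       /* P0..P3 burst times   */
        int n = 4, time = 0, total_wait = 0;

        for (int i = 0; i < n; i++) {
            if (time < arrival[i])          /* CPU idles until the process arrives */
                time = arrival[i];
            int wait = time - arrival[i];   /* service time - arrival time         */
            printf("P%d wait = %d\n", i, wait);
            total_wait += wait;
            time += burst[i];               /* run the process to completion       */
        }
        printf("Average wait = %.2f\n", (double)total_wait / n);
        return 0;
    }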

2. Shortest Job First (SJF): Shortest Job First (SJF), also known as Shortest Job Next (SJN) or Shortest Process Next (SPN), is a greedy algorithm.

 SJF is a scheduling policy that selects for execution the waiting process with the smallest execution time.

 SJF has non-preemptive and preemptive variants (the preemptive variant is known as Shortest Remaining Time First).

 SJF is the best approach to minimize waiting time.

 SJF is easy to implement in batch systems where the required CPU time is known in advance.

 SJF is impossible to implement in interactive systems where the required CPU time is not known.

 In SJF, the processor should know in advance how much time a process will take.
 Shortest Job First has the advantage of having the minimum average waiting time among all scheduling algorithms.

 It may cause starvation if shorter processes keep coming; this problem can be solved using the concept of aging.

 SJF can be used in specialized environments where accurate estimates of running time are available.

 Algorithm:
 1. Sort all the processes in increasing order of burst time.
 2. Then simply apply FCFS (a minimal sketch follows below).
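The sketch below (illustrative, not from the slides) follows the two steps above for non-preemptive SJF; to keep it minimal it assumes all processes are available at time 0, so a plain sort by burst time is sufficient.

    /* sjf_wait.c - non-preemptive SJF sketch: sort by burst time, then apply FCFS.
     * Simplifying assumption: all processes have arrived at time 0.
     */
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct { int pid; int burst; } Proc;

    static int by_burst(const void *a, const void *b)
    {
        return ((const Proc *)a)->burst - ((const Proc *)b)->burst;
    }

    int main(void)
    {
        Proc p[] = { {0, 5}, {1, 3}, {2, 8}, {3, 6} };   /* pid, burst time */
        int n = 4, time = 0, total_wait = 0;

        qsort(p, n, sizeof(Proc), by_burst);   /* step 1: shortest burst first      */

        for (int i = 0; i < n; i++) {          /* step 2: FCFS over the sorted list */
            printf("P%d wait = %d\n", p[i].pid, time);
            total_wait += time;
            time += p[i].burst;
        }
        printf("Average wait = %.2f\n", (double)total_wait / n);
        return 0;
    }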

 Given: Table of processes, and their Arrival time and Execution time

Process   Arrival Time   Execution Time   Service Time
P0        0              5                0
P1        1              3                5
P2        2              8                14
P3        3              6                8

Waiting time of each process is as follows:

Process   Waiting Time
P0        0 - 0 = 0
P1        5 - 1 = 4
P2        14 - 2 = 12
P3        8 - 3 = 5

Average Wait Time: (0 + 4 + 12 + 5) / 4 = 21 / 4 = 5.25

3. Priority Based Scheduling: Priority scheduling is a non-preemptive algorithm and one of the most common scheduling algorithms in batch systems.
 Each process is assigned a priority. The process with the highest priority is executed first, and so on.
 Processes with the same priority are executed on a first come, first served basis.
 Priority can be decided based on memory requirements, time requirements or any other resource requirement.
 Given: Table of processes, and their Arrival time, Execution time, and Priority. Here we are considering 1 as the lowest priority.

Process   Arrival Time   Execution Time   Priority   Service Time
P0        0              5                1          0
P1        1              3                2          11
P2        2              8                1          14
P3        3              6                3          5

 Waiting time of each process is as follows:

Process   Waiting Time
P0        0 - 0 = 0
P1        11 - 1 = 10
P2        14 - 2 = 12
P3        5 - 3 = 2

 Average Wait Time: (0 + 10 + 12 + 2) / 4 = 24 / 4 = 6


4. Round Robin Scheduling: Round Robin is a preemptive process scheduling algorithm.
 Each process is provided a fixed time to execute, called a quantum.
 Once a process has executed for the given time period, it is preempted and another process executes for its time period.
 Context switching is used to save the states of preempted processes.
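A minimal Round Robin sketch (illustrative, not part of the slides): it assumes all processes arrive at time 0 and uses a quantum of 3 to show how the remaining burst time is carried over between turns.

    /* rr_sketch.c - Round Robin sketch with a fixed quantum.
     * Simplifying assumption: all processes arrive at time 0.
     */
    #include <stdio.h>

    int main(void)
    {
        int burst[]     = {5, 3, 8, 6};      /* total burst times for P0..P3 */
        int remaining[] = {5, 3, 8, 6};      /* CPU time still needed        */
        int n = 4, quantum = 3, time = 0, finished = 0;

        while (finished < n) {
            for (int i = 0; i < n; i++) {
                if (remaining[i] == 0)
                    continue;                 /* this process has already finished */
                int slice = remaining[i] < quantum ? remaining[i] : quantum;
                time += slice;                /* run for one quantum (or less)     */
                remaining[i] -= slice;
                if (remaining[i] == 0) {
                    finished++;
                    /* with arrival time 0: wait = completion time - burst time */
                    printf("P%d finished at t=%d, wait=%d\n", i, time, time - burst[i]);
                }
            }
        }
        return 0;
    }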

 Wait time of each process is as follows:

Process   Wait Time : Service Time - Arrival Time
P0        (0 - 0) + (12 - 3) = 9
P1        (3 - 1) = 2
P2        (6 - 2) + (14 - 9) + (20 - 17) = 12
P3        (9 - 3) + (17 - 12) = 11

 Average Wait Time: (9 + 2 + 12 + 11) / 4 = 8.5

4) Process Control Block: Process Control Block (PCB, also called
Task Controlling Block, Entry of the Process Table, Task Struct, or
Switchframe) is a data structure in the operating system kernel
containing the information needed to manage the scheduling of a
particular process.
 A Process Control Block is a data structure that contains information related to a process. The process control block is also known as a task control block, an entry of the process table, etc.
 It is very important for process management, as the data structuring for processes is done in terms of the PCB. It also defines the current state of the operating system.
 Structure of the Process Control Block: the PCB stores many data items that are needed for efficient process management. Some of these data items are explained with the help of the given diagram:
The following are the data items:
1. Process State: This specifies the process state i.e. new, ready,
running, waiting or terminated.
2. Process Number: This shows the number of the particular process.
3. Program Counter: This contains the address of the next instruction that needs to be executed in the process.
4. Registers: This specifies the registers that are used by the process. They may include accumulators, index registers, stack pointers, general purpose registers etc.
5. List of Open Files: These are the different files that are associated with the process.

6. CPU Scheduling Information: The process priority, pointers to
scheduling queues etc. is the CPU scheduling information that is
contained in the PCB. This may also include any other scheduling
parameters.
7. Memory Management Information: The memory management
information includes the page tables or the segment tables
depending on the memory system used. It also contains the value
of the base registers, limit registers etc.
8. I/O Status Information: This information includes the list of I/O
devices used by the process, the list of files etc.
9. Accounting information: The time limits, account numbers,
amount of CPU used, process numbers etc. are all a part of the
PCB accounting information.
10. Location of the Process Control Block: The process control block
is kept in a memory area that is protected from the normal user
access. This is done because it contains important process
information.
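A hypothetical C sketch (not the actual layout of any real kernel; Linux's struct task_struct, for example, is far larger) showing how the PCB fields listed above might be grouped into a structure.

    /* pcb_sketch.c - a simplified, hypothetical Process Control Block layout.
     * Field names and sizes are illustrative only.
     */
    #include <stdint.h>

    typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

    struct pcb {
        proc_state_t state;            /* 1. process state                     */
        int          pid;              /* 2. process number                    */
        uintptr_t    program_counter;  /* 3. address of the next instruction   */
        uintptr_t    registers[16];    /* 4. saved CPU registers               */
        int          open_files[32];   /* 5. descriptors of open files         */
        int          priority;         /* 6. CPU scheduling information        */
        uintptr_t    page_table_base;  /* 7. memory management information     */
        int          io_devices[8];    /* 8. I/O status information            */
        long         cpu_time_used;    /* 9. accounting information            */
    };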
5) Inter Process Communication (IPC) is a mechanism which allows processes to communicate with each other and synchronize their actions. The communication between these processes can be seen as a method of co-operation between them. Processes can communicate with each other in two ways: Shared Memory and Message Passing.

 Inter Process Communication (IPC) is a mechanism that involves communication of one process with another process; this usually occurs only within one system.

 Communication can be of two types:

i. Between related processes initiating from only one process, such as parent and child processes.
ii. Between unrelated processes, or two or more different processes.
Inter Process Communication consists of:

Pipes − Communication between two related processes. The mechanism is half duplex, meaning the first process communicates with the second process. To achieve full duplex, i.e., for the second process to communicate with the first process, another pipe is required. (A minimal pipe example is sketched after this list.)

FIFO − Communication between two unrelated processes. A FIFO is full duplex, meaning the first process can communicate with the second process and vice versa at the same time.

Message Queues − Communication between two or more processes with full duplex capacity. The processes communicate with each other by posting a message to the queue and retrieving it from the queue. Once retrieved, the message is no longer available in the queue.

Shared Memory − Communication between two or more processes is achieved through a piece of memory shared among all the processes. The shared memory needs to be protected by synchronizing access across all the processes.

Semaphores − Semaphores are meant for synchronizing access to a shared resource among multiple processes. When one process wants to access the memory (for reading or writing), it needs to be locked (or protected) and released when the access is finished. This needs to be repeated by all the processes to secure the data.

Signals − A signal is a mechanism for communication between multiple processes by way of signaling. A source process sends a signal (recognized by number) and the destination process handles it accordingly.
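As referenced in the Pipes item above, here is a minimal half-duplex pipe sketch between a parent and a child process (assumes a POSIX system; error handling is omitted for brevity).

    /* pipe_demo.c - half-duplex pipe between two related processes (POSIX). */
    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fd[2];                       /* fd[0]: read end, fd[1]: write end */
        char buf[64];

        pipe(fd);                        /* create the pipe before forking    */

        if (fork() == 0) {               /* child: reads from the pipe        */
            close(fd[1]);
            ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
            buf[n] = '\0';
            printf("child received: %s\n", buf);
            return 0;
        }

        close(fd[0]);                    /* parent: writes into the pipe      */
        const char *msg = "hello from parent";
        write(fd[1], msg, strlen(msg));
        close(fd[1]);
        wait(NULL);                      /* reap the child                    */
        return 0;
    }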
6) Signals: Signals are a limited form of inter-process communication (IPC), typically used in Unix, Unix-like, and other POSIX-compliant operating systems.

A signal is an asynchronous notification sent to a process, or to a specific thread within the same process, in order to notify it of an event that occurred.

Signals are software interrupts sent to a program to indicate that an important event has occurred.

The events can vary from user requests to illegal memory access errors. Some signals, such as the interrupt signal, indicate that a user has asked the program to do something that is not in the usual flow of control.
Signal Name   Signal Number   Description
SIGHUP        1               Hang up detected on controlling terminal or death of controlling process
SIGINT        2               Issued if the user sends an interrupt signal (Ctrl + C)
SIGQUIT       3               Issued if the user sends a quit signal (Ctrl + D)
SIGFPE        8               Issued if an illegal mathematical operation is attempted
SIGKILL       9               If a process gets this signal it must quit immediately and will not perform any clean-up operations
SIGALRM       14              Alarm clock signal (used for timers)
SIGTERM       15              Software termination signal (sent by kill by default)


 Default Actions: Every signal has a default action
associated with it.
 The default action for a signal is the action that a script or
program performs when it receives a signal.
 There is an easy way to list down all the signals supported
by your system. Just issue the kill -l command and it
would display all the supported signals.
 Some of the possible default actions are
 Terminate the process.
 Ignore the signal.
 Dump core. This creates a file called core containing the
memory image of the process when it received the signal.
 Stop the process.
 Continue a stopped process.
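A minimal sketch (assumes a POSIX system) showing how a process can replace the default action for SIGINT (Ctrl + C) with its own handler instead of terminating.

    /* signal_demo.c - catch SIGINT instead of taking the default action (POSIX). */
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static volatile sig_atomic_t got_sigint = 0;

    static void handle_sigint(int signo)
    {
        (void)signo;
        got_sigint = 1;                  /* only set a flag inside the handler */
    }

    int main(void)
    {
        signal(SIGINT, handle_sigint);   /* install our handler for signal 2   */

        while (!got_sigint) {
            puts("working... press Ctrl+C to send SIGINT");
            sleep(1);
        }
        puts("caught SIGINT, cleaning up and exiting");
        return 0;
    }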
7) Forks: Fork is an operation whereby a process creates a copy of itself. It is usually a system call, implemented in the kernel. Fork is the primary (and historically, only) method of process creation on Unix-like operating systems.
 The fork system call is used to create a new process, called the child process, which runs concurrently with the process that made the fork() call (the parent process). After the new child process is created, both processes execute the next instruction following the fork() system call. The child process gets the same program counter, the same CPU registers and the same open files as are in use in the parent process.

 It takes no parameters and returns an integer value.

 Below are the different values returned by fork():
i. Negative value: creation of a child process was unsuccessful.
ii. Zero: returned to the newly created child process.
iii. Positive value: returned to the parent or caller; the value contains the process ID of the newly created child process.
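A minimal sketch of fork() usage (POSIX), showing how the three return values listed above are typically handled.

    /* fork_demo.c - create a child process with fork() (POSIX). */
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();              /* both processes continue from here */

        if (pid < 0) {                   /* negative: child creation failed   */
            perror("fork");
            return 1;
        } else if (pid == 0) {           /* zero: we are in the child process */
            printf("child:  pid=%d, parent=%d\n", (int)getpid(), (int)getppid());
        } else {                         /* positive: parent gets child's pid */
            printf("parent: pid=%d, child=%d\n", (int)getpid(), (int)pid);
            wait(NULL);                  /* wait for the child to terminate   */
        }
        return 0;
    }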
Fig: fork() function process
8) Multithreading models, Threading issues: Multithreading allows the execution of multiple parts of a program at the same time. These parts are known as threads and are lightweight processes available within the process. Therefore, multithreading leads to maximum utilization of the CPU by multitasking. The main models for multithreading are the one to one model, the many to one model and the many to many model.
1. One to One Model: The one to one model maps each of the user threads to a kernel thread. This means that many threads can run in parallel on multiprocessors, and other threads can run when one thread makes a blocking system call. A disadvantage of the one to one model is that the creation of a user thread requires a corresponding kernel thread. Since a lot of kernel threads burden the system, there is a restriction on the number of threads in the system.
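For illustration, a minimal POSIX threads sketch (not from the slides): on Linux, pthreads follows the one to one model, so each pthread created below maps to a kernel thread. Compile with -pthread.

    /* threads_demo.c - two user threads created with pthreads.
     * Compile: gcc threads_demo.c -o threads_demo -pthread
     */
    #include <pthread.h>
    #include <stdio.h>

    static void *worker(void *arg)
    {
        int id = *(int *)arg;
        printf("thread %d running\n", id);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        int id1 = 1, id2 = 2;

        /* each pthread_create() starts a separate thread of execution */
        pthread_create(&t1, NULL, worker, &id1);
        pthread_create(&t2, NULL, worker, &id2);

        pthread_join(t1, NULL);          /* wait for both threads to finish */
        pthread_join(t2, NULL);
        return 0;
    }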

2. Many to Many Model: In this model, multiple user threads are multiplexed onto the same or a smaller number of kernel-level threads. The number of kernel-level threads is specific to the machine. The advantage of this model is that if a user thread is blocked, another user thread can be scheduled onto a different kernel thread; thus, the system does not block if a particular thread is blocked.

3. Many to One Model: In this model, multiple user threads are mapped to one kernel thread. When a user thread makes a blocking system call, the entire process blocks. As there is only one kernel thread and only one user thread can access the kernel at a time, multiple threads cannot access a multiprocessor at the same time.

Model Questions
1. State and Explain different operating system services. An
operating system is system software that manages computer
hardware, software resources, and provides common services for
computer programs.

2. Differentiate between sequential and batch processing

3. Bootstrap Program: A bootstrap is the program that initializes the operating system (OS) during startup. The term bootstrap or bootstrapping originated in the early 1950s. It referred to a bootstrap load button that was used to initiate a hardwired bootstrap program, a smaller program that executed a larger program such as the OS.

4. What is meant by CPU Scheduling? Explain different scheduling algorithms with examples: CPU scheduling is a process which allows one process to use the CPU while the execution of another process is on hold (in the waiting state) due to the unavailability of a resource such as I/O, thereby making full use of the CPU. The aim of CPU scheduling is to make the system efficient, fast and fair.

5. Illustrate Peterson's solution to the critical section problem: Peterson's Solution is a classical software-based solution to the critical section problem. In Peterson's solution, we have two shared variables: boolean flag[i], initialized to FALSE, meaning initially no one is interested in entering the critical section; and int turn, which indicates the process whose turn it is to enter the critical section.
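A sketch of Peterson's solution for two processes (illustrative only: on modern hardware and compilers these plain loads and stores may be reordered, so production code would need atomics or memory barriers).

    /* peterson_sketch.c - Peterson's solution for two processes (i = 0 or 1). */
    #include <stdbool.h>

    static volatile bool flag[2] = { false, false };  /* who wants to enter   */
    static volatile int  turn    = 0;                 /* whose turn it is     */

    void enter_critical_section(int i)
    {
        int other = 1 - i;
        flag[i] = true;                  /* I am interested                    */
        turn = other;                    /* but politely give the other a turn */
        while (flag[other] && turn == other)
            ;                            /* busy-wait until it is safe         */
    }

    void leave_critical_section(int i)
    {
        flag[i] = false;                 /* I am no longer interested          */
    }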

6. What is Belady’s Anomaly? Explain with an example:
Bélády's anomaly is the phenomenon in which increasing the
number of page frames results in an increase in the number of page
faults for certain memory access patterns. This phenomenon is
commonly experienced when using the first-in first-out (FIFO)
page replacement algorithm.

7. Write about the performance of demand paging: Demand
paging (as opposed to anticipatory paging) is a method of virtual
memory management. ... It follows that a process begins execution
with none of its pages in physical memory, and many page
faults will occur until most of a process's working set of
pages are located in physical memory.

8. What are access matrices? Explain its implementation: Access
Matrix is a security model of protection state in computer system.
It is represented as a matrix. Access matrix is used to define the
rights of each process executing in the domain with respect to each
object. The rows of matrix represent domains and columns
represent objects.

9. Mention some classical problems of synchronization. Explain any two of them in detail: Synchronization is the operation or activity of two or more things at the same time or rate.

 Classical problems of synchronization with semaphore solutions:

1. Bounded-Buffer (or Producer-Consumer) Problem: the bounded buffer problem is also called the producer-consumer problem
2. Dining-Philosophers Problem
3. Readers and Writers Problem
4. Sleeping Barber Problem

10. Explain the goals and principles of protection: Protection refers to a mechanism which controls the access of programs, processes, or users to the resources defined by a computer system. We can view protection as a helper to a multiprogramming operating system, so that many users might safely share a common logical name space such as a directory or files.

11. Differentiate between preemptive and non-preemptive
scheduling

12. Illustrate the purpose of fork system call with an example:
fork is an operation whereby a process creates a copy of itself. It is
usually a system call, implemented in the kernel. Fork is the
primary (and historically, only) method of process creation on
Unix-like operating systems.

13. Explain briefly about Inter Process Communication: IPC is a mechanism which allows processes to communicate with each other and synchronize their actions. The communication between these processes can be seen as a method of co-operation between them. Processes can communicate with each other in two ways: Shared Memory and Message Passing.

14. Discuss how Reader’s Writer’s problem can be solved using
semaphores: Semaphores can be used to restrict access to the
database under certain conditions. In this
example, semaphores are used to prevent any writing processes
from changing information in the database while other processes
are reading from the database.

15. What are semaphores? Explain Binary and Counting
semaphores with an example: Semaphores are integer variables
that are used to solve the critical section problem by using two
atomic operations, wait and signal that are used for process
synchronization.
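A minimal sketch using POSIX counting semaphores (not from the slides): sem_wait() and sem_post() play the roles of the wait and signal operations described above, and a binary semaphore is simply a counting semaphore initialized to 1. Compile with -pthread on Linux.

    /* sem_demo.c - wait/signal on a POSIX semaphore protecting a shared counter.
     * Compile: gcc sem_demo.c -o sem_demo -pthread
     */
    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    static sem_t mutex;                  /* binary semaphore: initial value 1 */
    static int shared_counter = 0;

    static void *increment(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            sem_wait(&mutex);            /* wait (P): enter the critical section   */
            shared_counter++;
            sem_post(&mutex);            /* signal (V): leave the critical section */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        sem_init(&mutex, 0, 1);          /* 0 = shared between threads of this process */

        pthread_create(&t1, NULL, increment, NULL);
        pthread_create(&t2, NULL, increment, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);

        printf("counter = %d (expected 200000)\n", shared_counter);
        sem_destroy(&mutex);
        return 0;
    }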

16. Elucidate the concept of RACE condition? Explain Reader’s-
Writer’s problem with semaphore in detail: A race condition is
an undesirable situation that occurs when a device or system
attempts to perform two or more operations at the same time, but
because of the nature of the device or system, the operations must
be done in the proper sequence to be done correctly.

17. Explain the benefit of using role-based access control: There are a number of benefits to using RBAC to restrict unnecessary network access based on people's roles within an organization, including:
1. Improving operational efficiency
2. Enhancing compliance
3. Giving administrators increased visibility
4. Reducing costs
5. Decreasing risk of breaches and data leakage

