
Deadlock:-

1)A deadlock in an operating system occurs when two or more processes are unable to
proceed because each is waiting for the other to release a resource, creating a
circular dependency.
2)This results in a standstill where none of the processes can progress.

Necessary Conditions:-
1)Mutual Exclusion:
At least one resource must be held in a non-shareable mode, meaning only one
process can use it at a time.

2)Hold and Wait:
A process must hold at least one resource while waiting for additional
resources held by other processes.

3)No Preemption:
Resources cannot be forcibly taken from a process; the process must release
them voluntarily.

4)Circular Wait:
There must exist a circular chain of two or more processes, each waiting for a
resource held by the next one in the chain.

Safe Sequence:-
1]A safe sequence in operating systems is an order in which processes can complete
their execution without causing a deadlock or resource contention.
2]It ensures that each process gets the resources it needs without waiting
indefinitely, maintaining system stability and avoiding deadlock situations.

Claim Edge:-
1]A claim edge indicates that a process may request a particular resource at some
point in the future; it is drawn as a dashed edge in a resource allocation graph.

Request Edge:-
1]A request edge represents a pending request by a process for a specific
resource that has not yet been granted.

Resource Allocation Graph (RAG):-
1]A Resource Allocation Graph (RAG) is a visual representation used to track
resource allocation and detect potential deadlocks.
2]By analyzing this graph, the operating system can determine if a deadlock may
occur and take appropriate actions to prevent it.

Banker's Algorithm for Deadlock Avoidance:-
1]The Banker's Algorithm in operating systems is a method to avoid deadlock by
ensuring that processes only request resources when there is a safe sequence to
satisfy them.
2]It works by keeping track of available resources, the maximum resources each
process can claim, and the resources currently allocated to each process.
3]By analyzing this information, the system grants resources to a process only if
doing so cannot cause a deadlock, which keeps the system stable and running
smoothly.
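The bookkeeping described above can be sketched as a safety check. This is a
minimal illustration under the usual textbook formulation, not a full
resource-request algorithm; the function name `is_safe` and the sample matrices
are assumptions for the example:

```python
def is_safe(available, max_claim, allocation):
    """Banker's safety check: return a safe sequence of process
    indices if one exists, else None."""
    n = len(allocation)                      # number of processes
    m = len(available)                       # number of resource types
    # Need[i][j] = Max[i][j] - Allocation[i][j]
    need = [[max_claim[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]
    work = list(available)                   # resources currently free
    finished = [False] * n
    sequence = []
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(m)):
                # process i can run to completion and release its allocation
                for j in range(m):
                    work[j] += allocation[i][j]
                finished[i] = True
                sequence.append(i)
                progress = True
    return sequence if all(finished) else None

# A common textbook instance: 5 processes, 3 resource types
available = [3, 3, 2]
max_claim = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
print(is_safe(available, max_claim, allocation))  # [1, 3, 4, 0, 2]
```

A return of None means the state is unsafe, so the pending request that would
lead to it should be denied.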

Deadlock Detection Algorithm:-
1]A deadlock detection algorithm is a method used to identify whether a deadlock
has occurred in the system.
2]It works by examining the allocation of resources among processes and checking
for circular wait conditions.
3]Once a deadlock is detected, appropriate actions can be taken to resolve it, such
as terminating processes or releasing resources.
4]This helps ensure the system can continue functioning without being stuck in a
deadlock state.

Deadlock Characterization:-
1]Deadlock characterization involves understanding the conditions that must be
present for a deadlock to occur in a system.
2]The four necessary conditions (mutual exclusion, hold and wait, no preemption,
and circular wait) are described above; the main strategies for handling
deadlock are:-

[1]Deadlock Prevention:-
1]Prevent deadlock by ensuring that at least one of the four necessary conditions
can never hold, for example by allowing resources to be shared where possible.
2]Require processes to request all needed resources at once, or release held
resources before requesting new ones, so that hold and wait cannot occur.

[2]Deadlock Avoidance:-
1]Use algorithms to dynamically analyze resource requests and releases to ensure
that the system remains in a safe state, avoiding deadlock.

[3]Deadlock Detection:-
1]Deadlock detection lets deadlocks occur and then identifies them by examining
the system's current resource allocation state.
2]The allocation relationships between processes and resources are represented as
a graph, and detection involves finding cycles in that graph.
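The cycle check can be sketched with a depth-first search over a wait-for graph.
The dictionary representation and the sample process names are assumptions for
this illustration:

```python
def has_cycle(wait_for):
    """Detect a cycle in a wait-for graph given as
    {process: [processes it is waiting on]}."""
    WHITE, GRAY, BLACK = 0, 1, 2              # unvisited / on DFS stack / done
    color = {p: WHITE for p in wait_for}

    def dfs(p):
        color[p] = GRAY
        for q in wait_for.get(p, []):
            if color.get(q, WHITE) == GRAY:   # back edge: circular wait found
                return True
            if color.get(q, WHITE) == WHITE and dfs(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and dfs(p) for p in wait_for)

# P1 waits on P2, P2 waits on P3, P3 waits on P1: deadlock
print(has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}))  # True
print(has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": []}))      # False
```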

[4]Recovery From Deadlock:-
1]Deadlock recovery in operating systems involves detecting deadlocks, breaking
them by preempting resources or terminating processes, and restoring system
functionality to resume normal operation.

Resource Preemption:-
1]Resource preemption involves forcibly retrieving resources from a process that
currently holds them.
2]It interrupts the ongoing usage of resources by a process to reallocate them to
another process that needs them urgently.
3]Preemption helps prevent deadlock situations by ensuring that resources can be
repossessed and reallocated dynamically, thereby avoiding resource contention.

What is the role of the MAX and NEED arrays in the Banker's Algorithm:-
1]In the Banker's Algorithm, the MAX array represents the maximum number of
resources each process may need to complete its execution.
2]The NEED array shows the remaining resources a process still requires, computed
as NEED = MAX - Allocation.
3]These arrays help determine whether resources can be granted in a way that
cannot lead to deadlock.
4]By comparing the maximum each process may request (MAX) with what it still
needs (NEED), the system can decide which requests to grant without risking a
deadlock.
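The relationship NEED = MAX - Allocation is just an element-wise subtraction;
the sample matrices below are assumptions for illustration:

```python
# Need[i][j] = Max[i][j] - Allocation[i][j]: what process i may still request
max_claim  = [[7, 5, 3], [3, 2, 2], [9, 0, 2]]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2]]

need = [[m - a for m, a in zip(max_row, alloc_row)]
        for max_row, alloc_row in zip(max_claim, allocation)]
print(need)  # [[7, 4, 3], [1, 2, 2], [6, 0, 0]]
```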

Process Synchronization:-
1]Process synchronization in operating systems is about coordinating the activities
of multiple processes to ensure they work together smoothly and without conflicts.
2]It involves implementing mechanisms to control access to shared resources, so
that processes can communicate, cooperate, and avoid interfering with each other's
execution.

Memory Management:-
1]Memory management in operating systems involves organizing and allocating
computer memory effectively to ensure that programs and processes can run smoothly.
2]It includes tasks such as keeping track of which parts of memory are in use and
which are available, allocating memory to processes when they need it, and
deallocating memory when processes no longer require it.
3]This helps optimize the use of available memory resources and prevents issues
like memory leaks or fragmentation, ensuring efficient operation of the system.

[1]FIFO:-
1)FIFO stands for First-In-First-Out.
2)As a page replacement algorithm, FIFO maintains the pages in memory as a queue.
3)New pages are added at the rear of the queue, and on a fault with memory full,
the page at the front (the one loaded earliest) is evicted.
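A minimal sketch of FIFO replacement that counts page faults; the function name
and the reference string (a classic one that exhibits Belady's anomaly) are
assumptions for the example:

```python
from collections import deque

def fifo_page_faults(references, frames):
    """Count page faults under FIFO replacement: the page that has
    been in memory longest is evicted first."""
    memory = deque()               # front = oldest page
    faults = 0
    for page in references:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.popleft()   # evict the oldest page
            memory.append(page)
    return faults

print(fifo_page_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # 9
print(fifo_page_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 4))  # 10
```

Note that adding a frame here *increases* the fault count (9 to 10), which is
Belady's anomaly, a known quirk of FIFO.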

[2]Optimal:-
1)The optimal algorithm in operating systems, also known as OPT or MIN, is a
theoretical page replacement algorithm.
2)It selects for removal the page that will not be used for the longest time in
the future.
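OPT is theoretical because it needs the full future reference string, but it can
be simulated offline as a lower bound for comparing other policies. The function
name and sample reference string are assumptions:

```python
def optimal_page_faults(references, frames):
    """OPT/MIN: on a fault with full memory, evict the page whose
    next use is farthest in the future (or that is never used again)."""
    memory = []
    faults = 0
    for i, page in enumerate(references):
        if page in memory:
            continue
        faults += 1
        if len(memory) == frames:
            # distance to next use for each resident page (inf if unused)
            def next_use(p):
                future = references[i + 1:]
                return future.index(p) if p in future else float("inf")
            memory.remove(max(memory, key=next_use))
        memory.append(page)
    return faults

print(optimal_page_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # 7
```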

LRU:-
1)LRU stands for Least Recently Used.
2)The LRU (Least Recently Used) algorithm in operating systems is a page
replacement strategy that removes the page that has not been used for the longest
time.
3)It works on the principle that pages that have been least recently accessed are
less likely to be used in the near future.
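LRU can be sketched with a list ordered by recency (real kernels use cheaper
approximations such as reference bits); the function name and reference string
are assumptions for the example:

```python
def lru_page_faults(references, frames):
    """LRU: on a fault with full memory, evict the page that was
    used least recently."""
    memory = []                    # ordered: front = least recently used
    faults = 0
    for page in references:
        if page in memory:
            memory.remove(page)    # refresh recency on a hit
        else:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)      # evict the least recently used page
        memory.append(page)        # most recently used goes to the back
    return faults

print(lru_page_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # 10
```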

LFU:-
1)LFU stands for Least Frequently Used.
2)The LFU (Least Frequently Used) algorithm in operating systems is a page
replacement strategy that removes the page that has been accessed the least
number of times.
3)It works on the principle that pages that have been used least frequently are
less likely to be used again in the future.
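A small sketch of LFU using access counts; the tie-breaking rule (first page in
memory order) and the sample inputs are assumptions, since real implementations
vary on ties:

```python
from collections import Counter

def lfu_page_faults(references, frames):
    """LFU: on a fault with full memory, evict the page with the
    smallest access count (ties broken by memory order here)."""
    memory = []
    counts = Counter()
    faults = 0
    for page in references:
        counts[page] += 1          # every access is counted, hit or miss
        if page in memory:
            continue
        faults += 1
        if len(memory) == frames:
            victim = min(memory, key=lambda p: counts[p])
            memory.remove(victim)
        memory.append(page)
    return faults

print(lfu_page_faults([1, 2, 1, 3, 1, 4], 3))  # 4
```

MFU (below) is the mirror image: replace `min` with `max` to evict the page
with the largest count.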

MFU:-
1)MFU stands for Most Frequently Used.
2)The MFU (Most Frequently Used) algorithm in operating systems is a page
replacement strategy that removes the page that has been accessed the most
number of times.
3)It works on the principle that pages that have been used most frequently are
less likely to be used again in the future.

Page Fault:-
1]In operating systems, a page fault occurs when a program or process requests data
from a page that is not currently in RAM (Random Access Memory).
2]This triggers the operating system to fetch the required page from secondary
storage, such as a hard disk, into RAM so that the program can access it.

CPU Scheduling:-
1]CPU scheduling in an operating system is like managing a queue of tasks waiting
to use the processor.
2]It decides which process gets to use the CPU and for how long.
3]CPU scheduling helps optimize CPU usage, minimize response times, and ensure
efficient task execution.

CPU Scheduler:-
1]The CPU scheduler in an operating system is the component responsible for
selecting and allocating CPU time to different processes running on the system.
2]It determines the order in which processes are executed on the CPU, balancing
performance and fairness among competing tasks.

Preemptive Scheduling:-
1]Preemptive scheduling in operating systems is a method where the operating system
can interrupt the execution of a process to allow another process to run.
2]This interruption typically occurs when a higher-priority task becomes ready to
execute or when the time slice allocated to a process expires.

Non-Preemptive Scheduling:-
1]Non-preemptive scheduling in operating systems is a method where a process keeps
running until it completes its task or voluntarily gives up the CPU.
2]Unlike preemptive scheduling, the operating system does not interrupt the
currently running process to let another process run until it finishes its
execution.

FCFS Algorithm:-
1]The FCFS(First-Come-First-Served) algorithm in operating systems is like waiting
in line at a grocery store.
2]It processes tasks based on the order they arrive.
3]The first task to arrive is the first one to be processed by the CPU.
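FCFS can be sketched by walking the arrival-ordered queue and accumulating a
clock; the function name and the sample (name, arrival, burst) triples are
assumptions for the example:

```python
def fcfs(processes):
    """processes: list of (name, arrival, burst), served in arrival order.
    Returns {name: (waiting, turnaround)}."""
    result = {}
    clock = 0
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        start = max(clock, arrival)      # CPU may sit idle until arrival
        clock = start + burst
        waiting = start - arrival        # time spent in the ready queue
        turnaround = clock - arrival     # waiting + burst
        result[name] = (waiting, turnaround)
    return result

times = fcfs([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 1)])
print(times)  # {'P1': (0, 5), 'P2': (4, 7), 'P3': (6, 7)}
```

This also illustrates the turnaround and waiting time definitions given later:
turnaround = completion - arrival, and waiting = turnaround - burst.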

SJF:-
1]SJF stands for Shortest Job First.
2]It schedules tasks based on their burst time, which is the CPU time a task
needs to complete its execution.
3]The shortest task is served first, followed by the next shortest, and so on.
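The non-preemptive variant can be sketched as follows: among the processes that
have already arrived, always pick the one with the shortest burst. The function
name and the sample process set are assumptions for illustration:

```python
def sjf(processes):
    """Non-preemptive SJF: among the arrived processes, run the one with
    the shortest burst time. processes: list of (name, arrival, burst).
    Returns the order of execution."""
    pending = sorted(processes, key=lambda p: p[1])   # by arrival time
    clock = 0
    order = []
    while pending:
        ready = [p for p in pending if p[1] <= clock]
        if not ready:                         # CPU idle: jump to next arrival
            clock = pending[0][1]
            ready = [p for p in pending if p[1] <= clock]
        job = min(ready, key=lambda p: p[2])  # shortest burst first
        pending.remove(job)
        clock += job[2]
        order.append(job[0])
    return order

print(sjf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]))
# ['P1', 'P3', 'P2', 'P4']
```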

Burst Time:-
1]Burst time in operating systems refers to the amount of CPU time a process or
task requires to execute from start to finish.
2]Burst time helps the operating system schedule processes efficiently by
estimating how long each process will need to occupy the CPU before completing its
execution.

Turnaround Time:-
1]Turnaround time in operating systems is the total time taken for a process to
complete its execution, from the moment it is submitted to the system until it
finishes executing and exits.
2]Turnaround time includes both the waiting time (time spent waiting in the ready
queue) and the execution time (time spent running on the CPU).

Waiting Time:-
1]Waiting time in operating systems refers to the amount of time a process spends
waiting in the ready queue before it gets a chance to run on the CPU.
2]Waiting time is important because it shows how long a process waits before it can
start running.
3]Minimizing waiting time makes the system work faster and more smoothly.

Average Waiting Time:-
1]The average waiting time in operating systems is the average amount of time that
processes spend waiting in the ready queue before getting a chance to run on the
CPU.
2]Average waiting time is a key measure of the efficiency of a scheduling
algorithm, with lower values indicating better performance and faster task
execution.

Average Turnaround Time:-
1]The average turnaround time in operating systems is the average time taken for
processes to complete their execution, from submission to termination.
2]Average turnaround time includes both the time spent waiting in the ready queue
and the time spent executing on the CPU. Lower average turnaround time values
indicate more efficient scheduling and faster task completion.

File:-
1]In operating systems, a file is a collection of related data or information that
is stored together and given a name.
2]It is like a digital document or a container for organizing data such as text,
images, or programs.
3]Files can be created, opened, read, written, and closed by programs, allowing
users to store and access information in a structured manner.
