
Always Asked Operating System Interview Questions with Answers

CONNECT: https://rsinghal26.github.io/?contact
Data tructure Algorithm Intervie Preparation Topic-ie Practice C++ Java Pthon

Commonl Aked Operating tem

Intervie Quetion

Difficult Level : Medium ● Lat Updated : 24 Nov, 2021

1. What are a process and process table?

A process is an instance of a program in execution. For example, a Web browser is a process, and a shell (or command prompt) is a process. The operating system is responsible for managing all the processes that are running on a computer and allocates each process a certain amount of time to use the processor. In addition, the operating system also allocates various other resources that processes will need, such as computer memory or disks. To keep track of the state of all the processes, the operating system maintains a table known as the process table. Inside this table, every process is listed along with the resources the process is using and the current state of the process.

2. What are the di몭erent tate of the proce?

Procee can e in one of three tate: running, read, or aiting. The

running tate mean that the proce ha all the reource it need for

execution and it ha een given permiion  the operating tem to

ue the proceor. Onl one proce can e in the running tate at an

given time. The remaining procee are either in a aiting tate (i.e.,

aiting for ome external event to occur uch a uer input or dik

acce) or a read tate (i.e., aiting for permiion to ue the

proceor). In a real operating tem, the aiting and read tate are
implemented a queue that hold the procee in thee tate.

3. What i a Thread?

A thread i a ingle equence tream ithin a proce. ecaue thread

have ome of the proper tie of procee, the are ometime called

lighteight procee. Thread are a popular a to improve the

application through parallelim. For example, in a roer, multiple

ta can e di몭erent thread. M ord ue multiple thread, one

thread to format the text, another thread to proce input, etc.
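The idea of several threads working concurrently inside one process, sharing its address space, can be sketched with Python's threading module (the word-count task and names here are illustrative stand-ins, not from the text):

```python
import threading

# Two threads in the same process share the same memory,
# so both can fill in the shared `results` dictionary.
results = {}

def word_count(name, text):
    results[name] = len(text.split())   # each thread writes its own slot

t1 = threading.Thread(target=word_count, args=("a", "format the text"))
t2 = threading.Thread(target=word_count, args=("b", "process the inputs now"))
t1.start(); t2.start()
t1.join(); t2.join()
print(results["a"], results["b"])  # → 3 4
```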

4. What are the di몭erence eteen proce and thread?

A thread ha it on program counter (PC), a regiter et, and a tack

pace. Thread are not independent of one another, like procee. A  a

reult, thread hare ith other thread their code ection, data ection,

and O reource like open 몭le and ignal.

5.What are the ene몭t of multithreaded programming?

It make the tem more reponive and enale reource haring. It

lead to the ue of multiproce architecture. It i more economical and

preferred.

6. What i Thrahing?

Thrahing i a ituation hen the per formance of a computer degrade

or collape. Thrahing occur hen a tem pend more time


proceing page fault than executing tranaction. While proceing

page fault i necear  in order to appreciate the ene몭t of vir tual

memor , thrahing ha a negative e몭ect on the tem. A  the page

fault rate increae, more tranaction need proceing from the

paging device. The queue at the paging device increae, reulting in

increaed er vice time for a page fault.

7. What i elad ’ Anomal?

élád ’ anomal i an anomal ith ome page replacement policie

ere increaing the numer of page frame reult in an increae in the

numer of page fault. It occur hen Firt in Firt Out page

replacement i ued.

8. What happen if a non-recurive mutex i locked more than once.

Deadlock. If a thread that had alread locked a mutex, trie to lock the

mutex again, it ill enter into the aiting lit of that mutex, hich

reult in a deadlock. It i ecaue no other thread can unlock the

mutex. An operating tem implementer can exercie care in

identif ing the oner of the mutex and return it if it i alread locked 

the ame thread to prevent deadlock.

9. Explain the main purpose of an operating system?

An operating system acts as an intermediary between the user of a computer and the computer hardware. The purpose of an operating system is to provide an environment in which a user can execute programs conveniently and efficiently.

An operating system is software that manages computer hardware. The hardware must provide appropriate mechanisms to ensure the correct operation of the computer system and to prevent user programs from interfering with the proper operation of the system.

10. What i demand paging?

The proce of loading the page into memor  on demand (henever

page fault occur) i knon a demand paging.

11. What are the advantages of a multiprocessor system?

These are the main advantages of a multiprocessor system:

Enhanced performance.
Multiple applications.
Multi-tasking inside an application.
High throughput and responsiveness.
Hardware sharing among CPUs.

12. What i kernel?

A kernel i the central component of an operating tem that manage

the operation of computer and hardare. It aicall manage

operation of memor  and CPU time. It i a core component of an

operating tem. Kernel act a a ridge eteen application and

data proceing per formed at the hardare level uing inter-proce

communication and tem call.

13. What are real-time systems?

A real-time system means that the system is subjected to real-time constraints, i.e., the response should be guaranteed within a specified timing constraint, or the system should meet the specified deadline.

14.What are the di몭erent cheduling algorithm?

Firt-Come, Firt-er ved (FCF) cheduling.

hor tet-Jo-Next ( JN) cheduling.

Priorit cheduling.

hor tet Remaining Time.

Round Roin(RR) cheduling.


Multiple-Level Queue cheduling.

15. What i a deadlock?

Deadlock i a ituation hen to or more procee ait for each other

to 몭nih and none of them ever 몭nih. Conider an example hen to

train are coming toard each other on the ame track and there i onl

one track, none of the train can move once the are in front of each

other. A imilar ituation occur in operating tem hen there are

to or more procee that hold ome reource and ait for reource

held  other().

16. What i vir tual memor ?

Vir tual memor  create an illuion that each uer ha one or more

contiguou addre pace, each eginning at addre zero. The ize

of uch vir tual addre pace are generall ver  high. The idea of

vir tual memor  i to ue dik pace to extend the R AM. Running

procee don’t need to care hether the memor  i from R AM or dik.

The illuion of uch a large amount of memor  i created  udividing

the vir tual memor  into maller piece, hich can e loaded into

phical memor  henever the are needed  a proce.

17. Decrie the ojective of multi-programming.

Multi-programming increae CPU utilization  organizing jo (code

and data) o that the CPU ala ha one to execute. The main ojective

of multi-programming i to keep multiple jo in the main memor . If

one jo get occupied ith IO, CPU can e aigned to other jo.
18. What i the time-haring tem?

Time-haring i a logical extenion of multiprogramming. The CPU

per form man tak  itche that are o frequent that the uer can

interact ith each program hile it i running. A time-hared operating

tem allo multiple uer to hare computer imultaneoul.

19. What i a thread?

A thread i a path of execution ithin a proce. A proce can contain

multiple thread.

20. Give ome ene몭t of multithreaded programming?

A thread i alo knon a lighteight proce. The idea i to achieve

parallelim  dividing a proce into multiple thread. Thread ithin

the ame proce run in hared memor  pace,

21. rie몭 explain FCF?

FCF tand for Firt Come Firt er ve. In the FCF cheduling

algorithm, the jo that arrived 몭rt in the read queue i allocated to the

CPU and then the jo that came econd and o on. FCF i a non-

preemptive cheduling algorithm a a proce that hold the CPU until it

either terminate or per form I/O. Thu, if a longer jo ha een

aigned to the CPU then man hor ter jo after it ill have to ait.

22. What i the RR cheduling algorithm?

A round-roin cheduling algorithm i ued to chedule the proce

fairl for each jo a time lot or quantum and interrupting the jo if it i

not completed  then the jo come after the other jo hich i arrived

in the quantum time that make thee cheduling fairl.


Round-roin i cclic in nature, o tar vation doen’t occur

Round-roin i a variant of 몭rt come, 몭rt er ved cheduling

No priorit, pecial impor tance i given to an proce or tak

RR cheduling i alo knon a Time licing cheduling

23. What are the necear  condition hich can lead to a deadlock in

a tem?

Mutual xcluion: There i a reource that cannot e hared.

Hold and Wait: A proce i holding at leat one reource and aiting for

another reource, hich i ith ome other proce.

No Preemption: The operating tem i not alloed to take a reource

ack from a proce until the proce give it ack.

Circular Wait: A et of procee are aiting for each other in circular

form.

24. numerate the di몭erent R AID level?

A redundant arra of independent dik i a et of everal phical dik

drive that the operating tem ee a a ingle logical unit. It plaed a

igni몭cant role in narroing the gap eteen increaingl fat

proceor and lo dik drive. R AID ha di몭erent level:

Level-0

Level-1

Level-2

Level-3

Level-4

Level-5

Level-6
25. What i anker ’ algorithm?

The anker ’ algorithm i a reource allocation and deadlock avoidance

algorithm that tet for afet  imulating the allocation for

predetermined maximum poile amount of all reource, then

make an “-tate” check to tet for poile activitie, efore deciding

hether allocation hould e alloed to continue.
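The safety check at the heart of the Banker's algorithm can be sketched as follows (the matrices are illustrative textbook values, not taken from the text above):

```python
def is_safe(available, max_need, allocated):
    # Safe iff some order exists in which every process can be granted
    # its remaining need and run to completion.
    n = len(max_need)
    need = [[m - a for m, a in zip(max_need[i], allocated[i])]
            for i in range(n)]
    work, finished = list(available), [False] * n
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finished[i] and all(nj <= wj for nj, wj in zip(need[i], work)):
                # Pretend process i runs to completion and releases everything.
                work = [w + a for w, a in zip(work, allocated[i])]
                finished[i] = True
                progress = True
    return all(finished)

available = [3, 3, 2]
max_need  = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
allocated = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
print(is_safe(available, max_need, allocated))  # → True
```

Here the safe sequence found by the simulation is P1, P3, P4, P0, P2.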

26. What factor determine hether a detection algorithm mut e

utilized in a deadlock avoidance tem?

One i that it depend on ho often a deadlock i likel to occur under

the implementation of thi algorithm. The other ha to do ith ho

man procee ill e a몭ected  deadlock hen thi algorithm i

applied.

27. State the main difference between logical and physical address space?

Basic: A logical address is generated by the CPU; a physical address is a location in a memory unit.
Address space: The logical address space is the set of all logical addresses generated by the CPU in reference to a program; the physical address space is the set of all physical addresses mapped to the corresponding logical addresses.
Visibility: Users can view the logical address of a program, but can never view its physical address.
Generation: The logical address is generated by the CPU; the physical address is computed by the MMU.
Access: The user can use the logical address to access the physical address; the physical address can only be accessed indirectly, not directly.


28. Ho doe dnamic loading aid in etter memor  pace utilization?

With dnamic loading, a routine i not loaded until it i called. Thi

method i epeciall ueful hen large amount of code are needed in

order to handle infrequentl occurring cae uch a error routine.

29. What are overla?

The concept of overla i that henever a proce i running it ill not

ue the complete program at the ame time, it ill ue onl ome par t of

it. Then overla concept a that hatever par t ou required, ou load

it and once the par t i done, then ou jut unload it, hich mean jut

pull it ack and get the ne par t ou required and run it. Formall, “ The

proce of tranferring a lock of program code or other data into

internal memor , replacing hat i alread tored”.

30. What i fragmentation?

Procee are tored and remove from memor , hich make free

memor  pace, hich i too little to even conider utilizing  di몭erent

procee. uppoe, that proce i not read to dipene to memor 

lock ince it little ize and memor  hinder conitentl ta unued

i called fragmentation. Thi kind of iue occur during a dnamic

memor  allotment frameork hen free lock are mall, o it can’t

atif  an requet.

31. What i the aic function of paging?

Paging i a method or technique hich i ued for non-contiguou

memor  allocation. It i a 몭xed-ize par titioning theme (cheme). In

paging, oth main memor  and econdar  memor  are divided into

equal 몭xed-ize par tition. The par tition of the econdar  memor 
area unit and the main memor  area unit are knon a page and

frame repectivel.

Paging i a memor  management method accutomed fetch procee

from the econdar  memor  into the main memor  in the form of page.

in paging, each proce i plit into par t herever the ize of ever  par t

i the ame a the page ize. The ize of the lat half could alo e ut

the page ize. The page of the proce area unit hold on ithin the

frame of main memor  reling upon their acceiilit

32. Ho doe apping reult in etter memor  management?

apping i a imple memor /proce management technique ued 

the operating tem(o) to increae the utilization of the proceor 

moving ome locked procee from the main memor  to the

econdar  memor  thu forming a queue of the temporaril upended

procee and the execution continue ith the nel arrived proce.

During regular inter val that are et  the operating tem, procee

can e copied from the main memor  to a acking tore, and then

copied ack later. apping allo more procee to e run that can 몭t

into memor  at one time

33. Write a name of claic nchronization prolem?

ounded-u몭er

Reader-riter

Dining philoopher

leeping arer

34. What i the Direct Acce Method?

The direct Acce method i aed on a dik model of a 몭le, uch that it

i vieed a a numered equence of lock or record. It allo

aritrar  lock to e read or ritten. Direct acce i advantageou

hen acceing large amount of information. Direct memor  acce

(DMA) i a method that allo an input/output (I/O) device to end or

receive data directl to or from the main memor , paing the CPU to
peed up memor  operation. The proce i managed  a chip knon

a a DMA controller (DMAC).

35. When doe thrahing occur?

Thrahing occur hen procee on the tem frequentl acce

page not availale memor .

36. What i the et page ize hen deigning an operating tem?

The et paging ize varie from tem to tem, o there i no ingle

et hen it come to page ize. There are di몭erent factor to conider

in order to come up ith a uitale page ize, uch a page tale, paging

time, and it e몭ect on the overall e몭cienc of the operating tem.

37. What i multitaking?

Multitaking i a logical extenion of a multiprogramming tem that

uppor t multiple program to run concurrentl. In multitaking, more

than one tak i executed at the ame time. In thi technique, the

multiple tak, alo knon a procee, hare common proceing

reource uch a a CPU.

38. What i caching?

The cache i a maller and fater memor  that tore copie of the data

from frequentl ued main memor  location. There are variou

di몭erent independent cache in a CPU, hich tore intruction and

data. Cache memor  i ued to reduce the average time to acce data

from the Main memor .

39. What i pooling?

pooling refer to imultaneou peripheral operation online, pooling

refer to putting jo in a u몭er, a pecial area in memor , or on a dik


here a device can acce them hen it i read. pooling i ueful

ecaue device acce data at di몭erent rate.

40. What i the functionalit of an A emler?

The A emler i ued to tranlate the program ritten in A eml

language into machine code. The ource program i an input of an

aemler that contain aeml language intruction. The output

generated  the aemler i the oject code or machine code

undertandale  the computer.

41. What are interrupt?

The interrupt are a ignal emitted  hardare or oftare hen a

proce or an event need immediate attention. It aler t the proceor

to a high-priorit proce requiring interruption of the current orking

proce. In I/O device one of the u control line i dedicated for thi

purpoe and i called the Interrupt er vice Routine (IR).

42. What i GUI?

GUI i hor t for Graphical Uer Inter face. It provide uer ith an

inter face herein action can e per formed  interacting ith icon

and graphical mol.

43. What i preemptive multitaking?

Preemptive multitaking i a tpe of multitaking that allo computer

program to hare operating tem (O) and underling hardare

reource. It divide the overall operating and computing time eteen

procee, and the itching of reource eteen di몭erent procee

occur through prede몭ned criteria.

44. What i a pipe and hen i it ued?


A Pipe i a technique ued for inter-proce communication. A pipe i a

mechanim  hich the output of one proce i directed into the input

of another proce. Thu it provide a one-a 몭o of data eteen

to related procee.

45. What are the advantages of semaphores?

They are machine-independent.
Easy to implement.
Correctness is easy to determine.
Can have many different critical sections with different semaphores.
Semaphores can acquire many resources simultaneously.
No waste of resources due to busy waiting.

46. What i a oottrap program in the O?

oottrapping i the proce of loading a et of intruction hen a

computer i 몭rt turned on or ooted. During the tar tup proce,

diagnotic tet are per formed, uch a the poer-on elf-tet (POT),

that et or check con몭guration for device and implement routine

teting for the connection of peripheral, hardare, and external

memor  device. The ootloader or oottrap program i then loaded to

initialize the O.

47. What i IPC?

Inter-proce communication (IPC) i a mechanim that allo

procee to communicate ith each other and nchronize their

action. The communication eteen thee procee can e een a a

method of co-operation eteen them.

48. What are the di몭erent IPC mechanim?

Thee are the method in IPC:


Pipe (ame Proce) –

Thi allo a 몭o of data in one direction onl. Analogou to implex

tem (Keoard). Data from the output i uuall u몭ered until the

input proce receive it hich mut have a common origin.

Named Pipe (Di몭erent Procee) –

Thi i a pipe ith a peci몭c name it can e ued in procee that don’t

have a hared common proce origin. .g. i FIFO here the detail

ritten to a pipe are 몭rt named.

Meage Queuing –

Thi allo meage to e paed eteen procee uing either a

ingle queue or everal meage queue. Thi i managed  the

tem kernel thee meage are coordinated uing an API.

emaphore –

Thi i ued in olving prolem aociated ith nchronization and to

avoid race condition. Thee are integer value that are greater than or

equal to 0.

hared memor  –

Thi allo the interchange of data through a de몭ned area of memor .

emaphore value have to e otained efore data can get acce to

hared memor .

ocket –

Thi method i motl ued to communicate over a netork eteen a

client and a er ver. It allo for a tandard connection hich i

computer and O independent

49. What i the di몭erence eteen preemptive and non-preemptive

cheduling?
In preemptive cheduling, the CPU i allocated to the procee for a

limited time herea, in Non-preemptive cheduling, the CPU i

allocated to the proce till it terminate or itche to aiting for

tate.

The executing proce in preemptive cheduling i interrupted in the

middle of execution hen higher priorit one come herea, the

executing proce in non-preemptive cheduling i not interrupted in

the middle of execution and ait till it execution.

In Preemptive cheduling, there i the overhead of itching the

proce from the read tate to running tate, vie-vere, and

maintaining the read queue. Wherea the cae of non-preemptive

cheduling ha no overhead of itching the proce from running

tate to read tate.

In preemptive cheduling, if a high-priorit proce frequentl

arrive in the read queue then the proce ith lo priorit ha to

ait for a long, and it ma have to tar ve. On the other hand, in the

non-preemptive cheduling, if CPU i allocated to the proce having

a larger urt time then the procee ith mall urt time ma

have to tar ve.

Preemptive cheduling attain 몭exiilit  alloing the critical

procee to acce the CPU a the arrive into the read queue, no

matter hat proce i executing currentl. Non-preemptive

cheduling i called rigid a even if a critical proce enter the read

queue the proce running CPU i not ditured.

Preemptive cheduling ha to maintain the integrit of hared data

that ’ h it i cot aociative it hich i not the cae ith Non-

preemptive cheduling.

50. What i the zomie proce?


A proce that ha 몭nihed the execution ut till ha an entr  in the

proce tale to repor t to it parent proce i knon a a zomie

proce. A child proce ala 몭rt ecome a zomie efore eing

removed from the proce tale. The parent proce read the exit

tatu of the child proce hich reap o몭 the child proce entr  from

the proce tale.

51. What are orphan processes?

A process whose parent process no longer exists, i.e. it either finished or terminated without waiting for its child process to terminate, is called an orphan process.

52. What are tar vation and aging in O?

tar vation: tar vation i a reource management prolem here a

proce doe not get the reource it need for a long time ecaue the

reource are eing allocated to other procee.

Aging : Aging i a technique to avoid tar vation in a cheduling tem. It

ork  adding an aging factor to the priorit of each requet. The

aging factor mut increae the priorit of the requet a time pae

and mut enure that a requet ill eventuall e the highet priorit

requet

53. Write aout monolithic kernel?

Apar t from microkernel, Monolithic Kernel i another clai몭cation of

Kernel. Like microkernel, thi one alo manage tem reource

eteen application and hardare, ut uer er vice and kernel

er vice are implemented under the ame addre pace. It increae

the ize of the kernel, thu increae the ize of an operating tem a
ell. Thi kernel provide CPU cheduling, memor  management, 몭le

management, and other operating tem function through tem

call. A  oth er vice are implemented under the ame addre pace,

thi make operating tem execution fater.

54. What i Context itching?

itching of CPU to another proce mean aving the tate of the old

proce and loading the aved tate for the ne proce. In Context

itching the proce i tored in the Proce Control lock to er ve

the ne proce, o that the old proce can e reumed from the ame

par t it a left.

55. What i the di몭erence eteen the Operating tem and kernel?

Operating tem Kernel

Operating tem i tem The kernel i tem oftare that

oftare. i par t of the Microkerneloperating

tem.

Operating tem provide The kernel provide inter face /

inter face / uer and hardare. application and hardare.

It alo provide protection and It main purpoe i memor 

ecurit. management, dik management,

proce management and tak

management.

All tem need a real-time All operating tem need kernel

operating real-time,Microkernel to run.

tem to run.

Tpe of operating tem include Tpe of kernel include Monolithic

ingle and multiuer O, and Micro kernel.

multiproceor O, realtime O,

Ditriuted O.

It i the 몭rt program to load hen It i the 몭rt program to load hen

the computer oot up. the operating tem load


56. What i the di몭erence eteen proce and thread?

.NOProce Thread

1. Proce mean an program Thread mean a egment of a

i in execution. proce.

2. The proce i le e몭cient Thread i more e몭cient in term of

in term of communication. communication.

3. The proce i iolated. Thread hare memor .

4. The proce i called Thread i called lighteight proce.

heav eight the proce.

5. Proce itching ue, Thread itching doe not require to

another proce inter face in call an operating tem and caue

operating tem. an interrupt to the kernel.

6. If one proce i locked The econd, thread in the ame tak

then it ill not a몭ect the could not run, hile one er ver

execution of other proce thread i locked.

7. The proce ha it on Thread ha Parent’ PC, it on

Proce Control lock, tack Thread Control lock and tack and

and Addre pace. common Addre pace.

57. What i PC?

the proce control lock (PC) i a lock that i ued to track the

proce’ execution tatu. A proce control lock (PC) contain

information aout the proce, i.e. regiter, quantum, priorit, etc. The

proce tale i an arra of PC, that mean logicall contain a PC

for all of the current procee in the tem.

58. When i a tem in a afe tate?


The et of dipatchale procee i in a afe tate if there exit at

leat one temporal order in hich all procee can e run to

completion ithout reulting in a deadlock.

59. What i Ccle tealing?

ccle tealing i a method of acceing computer memor  (R AM) or u

ithout inter fering ith the CPU. It i imilar to direct memor  acce

(DMA) for alloing I/O controller to read or rite R AM ithout CPU

inter vention.

60. What i a Trap and Trapdoor?

A trap i a oftare interrupt, uuall the reult of an error condition,

and i alo a non-makale interrupt and ha the highet priorit

Trapdoor i a ecret undocumented entr  point into a program ued to

grant acce ithout normal method of acce authentication.

61.Write a di몭erence eteen proce and program?

r.No. Program Proce

1. Program contain a et of Proce i an intance of an

intruction deigned to executing program.

complete a peci몭c tak.

2. Program i a paive entit a it Proce i anThe proce active

reide in the econdar  entit a it i created during

memor . execution and loaded into the

main memor .

3. The program exit in a ingle Proce exit for a limited pan

place and continue to exit of time a it get terminated

until it i deleted. af ter the completion of a tak.

4. A program i a tatic entit. The proce i a dnamic entit.


r.No. Program Proce

5. Program doe not have an Proce ha a high reource

reource requirement, it onl requirement, it need reource

require memor  pace for like CPU, memor  addre, I/O

toring the intruction. during it lifetime.

6. The program doe not have an The proce ha it on control

control lock. lock called Proce Control

lock.

62.What i a dipatcher?

The dipatcher i the module that give proce control over the CPU

after it ha een elected  the hor t-term cheduler. Thi function

involve the folloing:

itching context

itching to uer mode

Jumping to the proper location in the uer program to retar t that

program

63. De몭ne the term dipatch latenc?

A Dipatch latenc can e decried a the amount of time it take for a

tem to repond to a requet for a proce to egin operation. With a

cheduler ritten peci몭call to honor application prioritie, real-time

application can e developed ith a ounded dipatch latenc.

64. What are the goals of CPU scheduling?

Max CPU utilization [keep the CPU as busy as possible]
Fair allocation of CPU
Max throughput [number of processes that complete their execution per time unit]
Min turnaround time [time taken by a process to finish execution]
Min waiting time [time a process waits in the ready queue]
Min response time [time when a process produces its first response]

65.What i a critical- ection?

When more than one procee acce the ame code egment that

egment i knon a the critical ection. The critical ection contain

hared variale or reource hich are needed to e nchronized to

maintain conitenc of data variale. In imple term, a critical

ection i a group of intruction/tatement or region of code that

need to e executed atomicall uch a acceing a reource (몭le, input

or output por t, gloal data, etc.).

66. Write the names of synchronization techniques?

Mutexes
Condition variables
Semaphores
File locks

67. Write a di몭erence eteen a uer-level thread and a kernel-level

thread?

Uer-level thread Kernel level thread

Uer thread are implemented  kernel thread are implemented 

uer. O.

O doen’t recognize uer-level Kernel thread are recognized  O.

thread.

Implementation of Uer thread i Implementation of the per form

ea. kernel thread i complicated.

Context itch time i le. Context itch time i more.

Context itch require no Hardare uppor t i needed.

hardare uppor t.
If one uer-level thread per form If one kernel thread per form a the

a locking operation then entire locking operation then another

proce ill e locked. thread can continue execution.

Uer-level thread are deigned Kernel level thread are deigned a

a dependent thread. independent thread.

68.Write don the advantage of multithreading?

ome of the mot impor tant ene몭t of MT are:

Improved throughput. Man concurrent compute operation and I/O

requet ithin a ingle proce.

imultaneou and full mmetric ue of multiple proceor for

computation and I/O.

uperior application reponivene. If a requet can e launched on

it on thread, application do not freeze or ho the “hourgla”.

An entire application ill not lock or other ie ait, pending the

completion of another requet.

Improved er ver reponivene. L arge or complex requet or lo

client don’t lock other requet for er vice. The overall throughput

of the er ver i much greater.

Minimized tem reource uage. Thread impoe minimal impact

on tem reource. Thread require le overhead to create,

maintain, and manage than a traditional proce.

Program tructure impli몭cation. Thread can e ued to implif  the

tructure of complex application, uch a er ver-cla and

multimedia application. imple routine can e ritten for each

activit, making complex program eaier to deign and code, and

more adaptive to a ide variation in uer demand.

etter communication. Thread nchronization function can e

ued to provide enhanced proce-to-proce communication. In

addition, haring large amount of data through eparate thread of

execution ithin the ame addre pace provide extremel high-

andidth, lo-latenc communication eteen eparate tak

ithin an application
69.Di몭erence eteen Multithreading and Multitaking?

.No. Multi-threading Multi-taking

1. Multiple thread are executing at everal program are executed

the ame time at the ame or concurrentl.

di몭erent par t of the program.

2. CPU itche eteen multiple CPU itche eteen multiple

thread. tak and procee.

3. It i lighteight par t proce. It i a heav eight proce.

4. It i a feature of the proce. It i a feature of O.

5. Multi-threading i haring of Multitaking i haring of

computing reource among computing reource(CPU,

thread of a ingle proce. memor , device, etc.) among

procee.

70. What are the drawbacks of semaphores?

- Priority inversion is a big limitation of semaphores.
- Their use is not enforced but is by convention only.
- The programmer has to keep track of all calls to wait and to signal the semaphore.
- With improper use, a process may block indefinitely. Such a situation is called Deadlock.

71. What is Peterson's approach?

Peterson's approach (Peterson's algorithm) is a classic software-based solution to the critical section problem for two processes. It uses two shared variables: a boolean array flag[2], where flag[i] indicates that process i wants to enter its critical section, and an integer turn, which records whose turn it is to enter. A process enters only when the other process either does not want to enter or it is this process's turn. The algorithm satisfies mutual exclusion, progress, and bounded waiting.
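Peterson's protocol for two threads can be sketched as below. This is an illustrative simulation only: on real hardware the shared variables would need sequentially consistent memory or fences, and in CPython the global interpreter lock happens to make the demonstration behave.

```python
import threading

# Shared state of Peterson's algorithm for threads 0 and 1.
flag = [False, False]   # flag[i]: thread i wants to enter
turn = 0                # whose turn it is to enter
counter = 0             # shared data protected by the critical section

def process(i):
    global turn, counter
    other = 1 - i
    for _ in range(10000):
        flag[i] = True                      # announce intent
        turn = other                        # politely yield the turn
        while flag[other] and turn == other:
            pass                            # busy-wait (entry section)
        counter += 1                        # critical section
        flag[i] = False                     # exit section

t0 = threading.Thread(target=process, args=(0,))
t1 = threading.Thread(target=process, args=(1,))
t0.start(); t1.start()
t0.join(); t1.join()
print(counter)  # 20000: no increments were lost
```

Because each thread sets its own flag and then gives the turn away, both threads can never be inside the critical section at once, and neither can starve the other.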
72. Define the term bounded waiting?

A system is said to satisfy the bounded waiting condition if a process that wants to enter its critical section will be able to enter within some finite time; that is, there is a bound on the number of times other processes may enter their critical sections after the process has made its request.

73. What are the solutions to the critical section problem?

There are three kinds of solutions to the critical section problem:

- Software solutions
- Hardware solutions
- Semaphores

74. What is the Banker's algorithm?

The Banker's algorithm is a resource allocation and deadlock avoidance algorithm that tests for safety by simulating allocation up to the predetermined maximum possible amounts of all resources, then performs a safe-state check to test for possible activities, before deciding whether the allocation should be allowed to continue.
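The safe-state check at the heart of the algorithm can be sketched in a few lines. This is a minimal illustration; the Allocation/Max matrices below are the classic textbook example, assumed here as input rather than taken from the original answer.

```python
# Banker's safety check: can every process finish in SOME order if each
# is granted up to its declared maximum?
def is_safe(available, allocation, maximum):
    n = len(allocation)
    m = len(available)
    # Need = Max - Allocation, per process and resource type.
    need = [[maximum[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]
    work = list(available)          # resources currently free
    finished = [False] * n
    order = []                      # a safe completion sequence, if any
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(m)):
                # Process i can run to completion and release everything.
                for j in range(m):
                    work[j] += allocation[i][j]
                finished[i] = True
                order.append(i)
                progressed = True
    return all(finished), order

available  = [3, 3, 2]
allocation = [[0,1,0],[2,0,0],[3,0,2],[2,1,1],[0,0,2]]
maximum    = [[7,5,3],[3,2,2],[9,0,2],[2,2,2],[4,3,3]]
safe, order = is_safe(available, allocation, maximum)
print(safe, order)  # True [1, 3, 4, 0, 2]
```

If no such completion order exists, the requested allocation would leave the system in an unsafe state and must be denied.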

75. What is concurrency?

Concurrency is a state in which two or more processes exist and make progress at the same time; processes whose executions overlap in time are said to be concurrent.

76. Write a drawback of concurrency?

- It is required to protect multiple applications from one another.
- It is required to coordinate multiple applications through additional mechanisms.
- Additional performance overheads and complexities in the operating system are required for switching among applications.
- Sometimes running too many applications concurrently leads to severely degraded performance.

77. What are the issues related to concurrency?

Non-atomic operations –
Operations that are non-atomic but interruptible by multiple processes can cause problems.

Race conditions –
A race condition occurs when the outcome depends on which of several processes gets to a point first.

Blocking –
Processes can block waiting for resources. A process could be blocked for a long period of time waiting for input from a terminal. If the process is required to periodically update some data, this would be very undesirable.

Starvation –
It occurs when a process does not obtain the service it needs to progress.

Deadlock –
It occurs when two processes are blocked on each other and hence neither can proceed to execute.
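The race condition issue above can be demonstrated with a deliberately non-atomic read-modify-write on a shared counter, followed by the lock-based fix. The function names are illustrative.

```python
import threading

counter = 0

def unsafe_increment(n):
    # The three steps below are NOT atomic: another thread can run
    # between them and its update can be silently overwritten.
    global counter
    for _ in range(n):
        tmp = counter       # read
        tmp += 1            # modify
        counter = tmp       # write (may clobber a concurrent update)

threads = [threading.Thread(target=unsafe_increment, args=(100000,))
           for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # frequently less than 200000: lost updates

# The fix: make the read-modify-write a critical section.
counter = 0
lock = threading.Lock()

def safe_increment(n):
    global counter
    for _ in range(n):
        with lock:
            counter += 1

threads = [threading.Thread(target=safe_increment, args=(100000,))
           for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # always 200000
```

The unsafe version's result depends on the interleaving (the definition of a race condition); the locked version is deterministic.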

78. Why do we use a precedence graph?

A precedence graph is a directed acyclic graph that is used to show the execution order of several processes in the operating system. It has the following properties:

- Nodes of the graph correspond to individual statements of program code.
- An edge between two nodes represents the execution order.
- A directed edge from node A to node B shows that statement A executes first, and then statement B executes.

79. Explain the resource allocation graph?

A resource allocation graph (RAG) describes the state of the system in terms of processes and resources: which resources are allocated to which processes, and which processes are waiting for which resources. One of the advantages of having such a diagram is that it is sometimes possible to see a deadlock directly by inspecting the RAG.

80. How to recover from a deadlock?

We can recover from a deadlock by the following methods:

Process termination
- Abort all the deadlocked processes
- Abort one process at a time until the deadlock is eliminated

Resource preemption
- Rollback
- Selecting a victim

81. What are the goals and functionality of memory management?

The goals and functionality of memory management are as follows:

- Relocation
- Protection
- Sharing
- Logical organization
- Physical organization

82. Write a difference between physical address and logical address?

S.No. | Parameter | Logical address | Physical address
1. | Basic | It is the virtual address generated by the CPU. | The physical address is a location in a memory unit.
2. | Address space | The set of all logical addresses generated by the CPU in reference to a program is referred to as the Logical Address Space. | The set of all physical addresses mapped to the corresponding logical addresses is referred to as the Physical Address Space.
3. | Visibility | The user can view the logical address of a program. | The user can never view the physical address of a program.
4. | Access | The user uses the logical address to access the physical address. | The user cannot directly access the physical address.
5. | Generation | The logical address is generated by the CPU. | The physical address is computed by the MMU.

83. Explain address binding?

The association of program instructions and data with actual physical memory locations is called address binding.

84. Write different types of address binding?

Address binding is divided into three types as follows:

- Compile-time address binding
- Load-time address binding
- Execution-time address binding

85. Write an advantage of dynamic allocation algorithms?

- When we do not know beforehand how much memory the program will need.
- When we want data structures without an upper limit on memory space.
- When you want to use your memory space more efficiently.
- In a dynamically created list, insertions and deletions can be done very easily just by manipulating addresses, whereas in the case of statically allocated memory, insertions and deletions lead to more movement and wastage of memory.
- When you want to use the concepts of structures and linked lists in programming, dynamic memory allocation is a must.

86. Write a difference between internal fragmentation and external fragmentation?

S.No. | Internal fragmentation | External fragmentation
1. | In internal fragmentation, fixed-sized memory blocks are assigned to processes. | In external fragmentation, variable-sized memory blocks are assigned to processes.
2. | Internal fragmentation happens when the process is smaller than the fixed-size partition allocated to it. | External fragmentation happens when a process is removed, leaving scattered holes in memory.
3. | The solution to internal fragmentation is the best-fit block. | The solutions to external fragmentation are compaction, paging and segmentation.
4. | Internal fragmentation occurs when memory is divided into fixed-sized partitions. | External fragmentation occurs when memory is divided into variable-size partitions based on the sizes of processes.
5. | The difference between the memory allocated and the space actually required is called internal fragmentation. | The unused spaces formed between non-contiguous memory fragments that are too small to serve a new process are called external fragmentation.

87. Define compaction?

Compaction is the process of collecting fragments of available memory space into contiguous blocks by moving programs and data in a computer's memory or disk.

88. Write about the advantages and disadvantages of a hashed page table?

Advantages

- The main advantage is synchronization.
- In many situations, hash tables turn out to be more efficient than search trees or any other table lookup structure. For this reason, they are widely used in many kinds of computer software, particularly for associative arrays, database indexing, caches, and sets.

Disadvantages

- Hash collisions are practically unavoidable when hashing a random subset of a large set of possible keys.
- Hash tables become quite inefficient when there are many collisions.
- A hash table does not allow null values.

89. Write a difference between paging and segmentation?

S.No. | Paging | Segmentation
1. | In paging, the program is divided into fixed or mounted size pages. | In segmentation, the program is divided into variable size sections.
2. | For paging, the operating system is accountable. | For segmentation, the compiler is accountable.
3. | Page size is determined by the hardware. | Here, the section size is given by the user.
4. | It is faster in comparison with segmentation. | Segmentation is slow.
5. | Paging could result in internal fragmentation. | Segmentation could result in external fragmentation.
6. | In paging, the logical address is split into a page number and page offset. | Here, the logical address is split into a section number and section offset.
7. | Paging comprises a page table which encloses the base address of every page. | Segmentation comprises the segment table which encloses segment number and segment offset.
8. | A page table is employed to keep up the page data. | A section table maintains the section data.
9. | In paging, the operating system must maintain a free frame list. | In segmentation, the operating system maintains a list of holes in the main memory.
10. | Paging is invisible to the user. | Segmentation is visible to the user.
11. | In paging, the processor needs the page number and offset to calculate the absolute address. | In segmentation, the processor uses the segment number and offset to calculate the full address.
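Row 6 and row 11 of the table can be made concrete with a small sketch of address translation under paging. The page size, page table contents, and the `translate` helper are illustrative assumptions (4 KiB pages, i.e. a 12-bit offset).

```python
PAGE_SIZE = 4096  # 2**12: low 12 bits of the logical address are the offset

def translate(logical_address, page_table):
    # Split the logical address into page number and offset.
    page_number = logical_address // PAGE_SIZE
    offset = logical_address % PAGE_SIZE
    # Look up the frame in the page table, then rebuild the physical address.
    frame = page_table[page_number]
    return frame * PAGE_SIZE + offset

# Hypothetical page table: page 0 -> frame 5, page 1 -> frame 2.
page_table = {0: 5, 1: 2}
print(translate(4100, page_table))  # page 1, offset 4 -> 2*4096 + 4 = 8196
```

Note that the offset passes through unchanged; only the page number is remapped, which is why paging produces no external fragmentation but can waste space inside the last page (internal fragmentation).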

90. Write a definition of associative memory and cache memory?

S.No. | Associative Memory | Cache Memory
1 | A memory unit accessed by content is called associative memory. | A fast and small memory is called cache memory.
2 | It reduces the time required to find an item stored in memory. | It reduces the average memory access time.
3 | Here, data is accessed by its content. | Here, data is accessed by its address.
4 | It is used where search time needs to be very short. | It is used when a particular group of data is accessed repeatedly.
5 | Its basic characteristic is its logic circuit for matching its content. | Its basic characteristic is its fast access.

91. What is "locality of reference"?

Locality of reference refers to a phenomenon in which a computer program tends to access the same set of memory locations over a particular period of time. In other words, locality of reference refers to the tendency of a computer program to access instructions whose addresses are near one another.

92. Write down the advantages of virtual memory?

- A higher degree of multiprogramming.
- Allocating memory is easy and cheap.
- Eliminates external fragmentation.
- Data (page frames) can be scattered all over physical memory.
- Pages are mapped appropriately anyway.
- Large programs can be written, as the virtual space available is huge compared to physical memory.
- Less I/O is required, leading to faster and easier swapping of processes.
- More physical memory is available, as programs are stored on virtual memory, so they occupy very little space in actual physical memory.
- More efficient swapping.

93. How to calculate performance in virtual memory?

The performance of a virtual memory management system depends on the total number of page faults, which in turn depends on the "paging policies" and "frame allocation". With p denoting the page fault rate:

Effective access time = (1 - p) x Memory access time + p x Page fault service time

94. Write down the basic concept of the file system?

A file is a collection of related information that is recorded on secondary storage; or, a file is a collection of logically related entities. From the user's perspective, a file is the smallest allotment of logical secondary storage.

95. Write the names of different operations on a file?

Operations on a file:

- Create
- Open
- Read
- Write
- Rename
- Delete
- Append
- Truncate
- Close

96. Define the term bit-vector?

A bitmap or bit vector is a series or collection of bits where each bit corresponds to a disk block. A bit can take two values, 0 and 1: 0 indicates that the block is allocated and 1 indicates a free block.
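Free-space management with a bit vector can be sketched as follows, using the convention stated above (0 = allocated, 1 = free). The class and method names are illustrative.

```python
class BitVector:
    """Free-space map: one bit per disk block; 1 = free, 0 = allocated."""

    def __init__(self, n_blocks):
        self.bits = [1] * n_blocks      # all blocks start out free

    def allocate(self):
        # First-fit scan: find the first free block and mark it allocated.
        for i, bit in enumerate(self.bits):
            if bit == 1:
                self.bits[i] = 0
                return i
        raise MemoryError("no free blocks")

    def free(self, i):
        self.bits[i] = 1                # mark the block free again

bv = BitVector(8)
a = bv.allocate()   # block 0
b = bv.allocate()   # block 1
bv.free(a)          # block 0 becomes free again
c = bv.allocate()   # first-fit reuses block 0
print(a, b, c)      # 0 1 0
```

Real file systems pack the bits into machine words so that a whole word equal to zero ("all allocated") can be skipped in one comparison; the list of ints here is only for clarity.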

97. What is a file allocation table?

FAT stands for File Allocation Table, and it is called so because it allocates different files and folders using tables. It was originally designed to handle small file systems and disks. A file allocation table (FAT) is a table that an operating system maintains on a hard disk that provides a map of the clusters (the basic units of logical storage on a hard disk) that a file has been stored in.

98. What is rotational latency?

Rotational latency is the time taken by the desired sector of the disk to rotate into a position where it can be accessed by the read/write heads. So a disk scheduling algorithm that gives minimum rotational latency is better.

99. What is seek time?

Seek time is the time taken to move the disk arm to the specified track where the data is to be read or written. So a disk scheduling algorithm that gives minimum average seek time is better.

100. What is a buffer?

A buffer is a memory area that stores data being transferred between two devices or between a device and an application.

Lat Minute Note – Operating tem

We ill oon e covering more Operating tem quetion. Pleae

rite comment if ou 몭nd anthing incorrect, or ou ant to hare

more information aout the topic dicued aove.
