CT3 Answer Key (Set C)


SRM Institute of Science and Technology

College of Engineering and Technology


School of Computing
SET - C
Department of Computing Technologies
Academic Year: 2022-23 (EVEN)
Answer Key

Test: CLA-T3 Date: 09.11.23


Course Code & Title: 18CSE356T - Distributed Operating Systems    Duration: 110 minutes
Year & Sem: III & IV Year / V & VII Sem    Max. Marks: 50

Course Articulation Matrix: (to be placed)


S.No  COs  PO1  PO2  PO3  PO4  PO5  PO6  PO7  PO8  PO9  PO10  PO11  PO12  PSO1  PSO2  PSO3
 1    CO1   3    -    -    3    -    -    -    -    -    -     -     -     -     -     -
 2    CO2   3    3    -    2    -    -    -    -    -    -     -     -     -     -     -
 3    CO3   3    2    2    3    -    -    -    -    -    -     -     -     -     2     -
 4    CO4   3    3    -    3    -    -    -    -    -    -     -     -     -     2     -
 5    CO5   3    3    -    2    -    -    -    -    -    -     -     -     -     -     -

Part – A
(10 x 1 = 10 Marks)
Instructions: Answer all
Q.No  Question  [Marks | BL | CO | PO | PI Code]
1. Mutexes are like ________. [1 mark | BL: L1 | CO: 4 | PO: 1 | PI: 1.6.1]
   a) Status code
   b) Binary semaphores
   c) Threads
   d) Address space
2. What are the characteristics of process migration? [1 mark | BL: L1 | CO: 4 | PO: 1 | PI: 1.6.1]
   a) Transfer data by entire file or the immediate portion required
   b) Transfer the computation rather than the data
   c) Execute an entire process or parts of it at different sites
   d) None of the mentioned
3. When a process wants to start up a child process, it goes around and checks who is currently offering the service that it needs. This describes the ________. [1 mark | BL: L2 | CO: 4 | PO: 1 | PI: 1.6.1]
   a) Heuristic algorithm
   b) Bidding algorithm
   c) Hierarchical algorithm
   d) Centralized algorithm
4. Transient faults occur ________. [1 mark | BL: L1 | CO: 4 | PO: 1 | PI: 1.6.1]
   a) once and then disappear
   b) twice and then disappear
   c) thrice and then disappear
   d) not at all
5. A ________ represents a solution to the static scheduling problem that requires a reasonable amount of time and other resources to perform its function. [1 mark | BL: L2 | CO: 4 | PO: 1 | PI: 1.6.1]
   a) Approximate solution
   b) Heuristic solution
   c) Optimal solution
   d) Suboptimal solution
6. To connect n CPUs and n memory modules, ________. [1 mark | BL: L2 | CO: 5 | PO: 1 | PI: 1.6.1]
   a) n² crosspoint switches are required
   b) n crosspoint switches are required
   c) 2n crosspoint switches are required
   d) 2n² crosspoint switches are required
7. An example of a NUMA machine is ________. [1 mark | BL: L1 | CO: 5 | PO: 1 | PI: 1.6.1]
   a) Linda and Orca
   b) Ivy and Mirage
   c) DEC Firefly
   d) BBN Butterfly
8. Splitting the cache into separate instruction and data caches, or using a set of buffers, usually called a(n) ________. [1 mark | BL: L2 | CO: 5 | PO: 1 | PI: 1.6.1]
   a) Cache buffer
   b) Data buffer
   c) Instruction buffer
   d) Register buffer
9. ________ also need to invalidate pages when a new writer suddenly appears. [1 mark | BL: L1 | CO: 5 | PO: 1 | PI: 1.6.1]
   a) DSM and LRU
   b) Dash and Memnet
   c) Page table
   d) Page manager
10. The capability of a system to adapt to an increased service load is called ________. [1 mark | BL: L1 | CO: 5 | PO: 1 | PI: 1.6.1]
    a) scalability
    b) tolerance
    c) capacity
    d) none of the mentioned

Part – B
(5 x 2 = 10 Marks)
11. Explain the several states of threads. [2 marks | BL: L2 | CO: 4 | PO: 1 | PI: 1.6.1]
Ans:
A thread can be in one of several states: running, blocked, ready, or terminated.
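As an illustration, a minimal Python sketch of these states and the transitions between them; the transition table is an assumption for teaching purposes, not part of the key.

    from enum import Enum, auto

    class ThreadState(Enum):
        READY = auto()
        RUNNING = auto()
        BLOCKED = auto()
        TERMINATED = auto()

    # Assumed legal transitions between the states named above.
    TRANSITIONS = {
        ThreadState.READY: {ThreadState.RUNNING},          # dispatched by the scheduler
        ThreadState.RUNNING: {ThreadState.READY,           # preempted
                              ThreadState.BLOCKED,         # waits for I/O or a lock
                              ThreadState.TERMINATED},     # finishes or is killed
        ThreadState.BLOCKED: {ThreadState.READY},          # the awaited event occurs
        ThreadState.TERMINATED: set(),                     # final state
    }

    def move(current: ThreadState, nxt: ThreadState) -> ThreadState:
        """Return the new state, or raise if the transition is not allowed."""
        if nxt not in TRANSITIONS[current]:
            raise ValueError(f"illegal transition {current.name} -> {nxt.name}")
        return nxt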
12. State and explain why we need diskless workstations. [2 marks | BL: L4 | CO: 4 | PO: 2 | PI: 2.6.2]
Ans:
• If the workstations are diskless, the file system must be implemented by one or more remote file servers. Diskless workstations are cheaper.
• Ease of maintenance: installing a new release of a program on several servers is easier than installing it on hundreds of machines. Backup and hardware maintenance are also simpler.
• Diskless workstations have no disk fans and are therefore quieter.
13. List the drawbacks of a centralized algorithm. [2 marks | BL: L4 | CO: 4 | PO: 2 | PI: 2.6.2]
Ans:
• It does not scale well to large systems.
• The central node soon becomes a bottleneck, not to mention a single point of failure.
14. State the desirable properties of a scheduling algorithm. [2 marks | BL: L3 | CO: 5 | PO: 1 | PI: 1.6.1]
Ans:
• CPU utilization
• Throughput
• Response time
• Turnaround time
• Waiting time
• Fairness
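For illustration, a small sketch showing how two of the metrics listed above, turnaround time and waiting time, are typically computed; the process data here is hypothetical.

    # Hypothetical (process, arrival, burst, completion) tuples for illustration.
    processes = [("P1", 0, 5, 5), ("P2", 1, 3, 8), ("P3", 2, 4, 12)]

    for name, arrival, burst, completion in processes:
        turnaround = completion - arrival      # time from arrival to completion
        waiting = turnaround - burst           # time spent ready but not running
        print(f"{name}: turnaround={turnaround}, waiting={waiting}")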
15. Summarize the advantages of object-based distributed shared memory. [2 marks | BL: L5 | CO: 5 | PO: 1 | PI: 1.6.1]
Ans:
• It is more modular than the other techniques.
• The implementation is more flexible because accesses are controlled.
• Synchronization and access can be integrated together cleanly.
Part – C (4 x 5 = 20 Marks)
Answer any 4 out of 6 questions
16. Explain the following: [5 marks | BL: L4 | CO: 4 | PO: 2 | PI: 2.6.4]
    i) Single-threshold policy
    ii) Double-threshold policy
Ans:
Single-threshold policy
• A single-threshold policy may lead to an unstable algorithm, because an under-loaded node can become overloaded right after a process migrates to it.
• To reduce this instability, the double-threshold policy, also known as the high-low policy, has been proposed.
Double-threshold policy
• When a node is in the overloaded region, new local processes are sent to run remotely, and requests to accept remote processes are rejected.
• When a node is in the normal region, new local processes run locally, and requests to accept remote processes are rejected.
• When a node is in the under-loaded region, new local processes run locally, and requests to accept remote processes are accepted.
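As an illustration, a minimal Python sketch of the high-low decision logic described above; the LOW and HIGH thresholds and the use of run-queue length as the load measure are assumptions.

    LOW, HIGH = 2, 6   # hypothetical thresholds on the node's load (run-queue length)

    def place_new_local_process(load: int) -> str:
        """Where should a newly created local process run?"""
        if load > HIGH:              # overloaded region
            return "run remotely"
        return "run locally"         # normal and under-loaded regions

    def accept_remote_process(load: int) -> bool:
        """Should this node accept a process migrated from another node?"""
        return load < LOW            # accepted only in the under-loaded region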
17. Illustrate the steps to run a process remotely. [5 marks | BL: L4 | CO: 4 | PO: 2 | PI: 2.6.2]
Ans:
• To start with, the remote machine needs the same view of the file system, the same working directory, and the same environment variables as the home machine.
• Some system calls can be done remotely, but some cannot.
• For example, reads from the keyboard and writes to the screen can never be carried out on the remote machine (as with all system calls that query the state of the home machine).
• Some calls must be done remotely, such as the UNIX system calls SBRK (adjust the size of the data segment), NICE (set CPU scheduling priority), and PROFIL (enable profiling of the program counter).
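For illustration only, a sketch of the kind of system-call classification described above, deciding where each call is serviced; the set contents and the dispatch function are assumptions, not a real OS interface.

    # Calls that must be serviced on the home machine (they touch the user's
    # terminal or query the home machine's state).
    HOME_ONLY = {"read_keyboard", "write_screen", "query_home_state"}

    # Calls that must run where the process actually executes, e.g. the UNIX
    # calls named in the answer: SBRK, NICE, PROFIL.
    REMOTE_ONLY = {"sbrk", "nice", "profil"}

    def dispatch(syscall: str) -> str:
        if syscall in HOME_ONLY:
            return "forward to home machine"
        if syscall in REMOTE_ONLY:
            return "execute on the remote (execution) machine"
        return "execute wherever convenient"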
18. State the basic idea of the load-sharing policy and explain its disadvantages. [5 marks | BL: L3 | CO: 4 | PO: 2 | PI: 2.6.2]
Ans:
Basic ideas of the load-sharing approach:
• It is necessary and sufficient to prevent nodes from being idle while some other nodes have more than two processes.
• Load sharing is much simpler than load balancing, since it only attempts to ensure that no node is idle while a heavily loaded node exists.
• The priority assignment policy and the migration limiting policy are the same as those of the load-balancing algorithms.
Drawbacks of the load-balancing approach:
• Attempting to equalize the workload on all the nodes is not an appropriate objective, since a large overhead is generated by gathering exact state information.
• Load balancing is not achievable, since the number of processes on a node is always fluctuating and a temporal imbalance among the nodes exists at every moment.
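As an illustration, a sketch of the load-sharing transfer check implied above, assuming a node's load is simply its process count and that a node counts as heavily loaded when it holds more than two processes.

    def should_transfer(loads: dict) -> list:
        """Return (from_node, to_node) pairs the policy would trigger:
        move work only when some node is idle while another has > 2 processes."""
        idle = [n for n, load in loads.items() if load == 0]
        heavy = [n for n, load in loads.items() if load > 2]
        return [(h, i) for h, i in zip(heavy, idle)]

    # Example: node C is idle while node A holds 4 processes -> one transfer.
    print(should_transfer({"A": 4, "B": 2, "C": 0}))   # [('A', 'C')]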

19. Explain briefly the write-through cache consistency protocol. [5 marks | BL: L2 | CO: 5 | PO: 2 | PI: 2.6.2]
Ans:
• One particularly simple and common protocol is called write through.
• When a CPU first reads a word from memory, that word is fetched over the bus and stored in the cache of the CPU making the request.
• If that word is needed again later, the CPU can take it from the cache without making a memory request, thus reducing bus traffic.
• There are two read cases: a read miss (the word is not cached) and a read hit (the word is cached). In simple systems, only the word requested is cached, but in most systems a block of, say, 16 or 32 words is transferred and cached on the initial access and kept for possible future use.
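For illustration, a minimal sketch of write-through behaviour on a single cache, using dictionaries as stand-ins for memory and cache; the write path shown (every write updates main memory immediately, and the cached copy if present) is the usual defining property of write through and is included here as an assumption beyond the text above.

    memory = {}   # hypothetical main memory: address -> word
    cache = {}    # this CPU's cache

    def read(addr):
        if addr in cache:                 # read hit: no bus traffic
            return cache[addr]
        word = memory.get(addr)           # read miss: fetch over the bus
        cache[addr] = word                # ...and keep it for future use
        return word

    def write(addr, word):
        if addr in cache:                 # write hit: update the cached copy
            cache[addr] = word
        memory[addr] = word               # write through: memory is always updated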
20. Illustrate the rules for release consistency. [5 marks | BL: L2 | CO: 5 | PO: 2 | PI: 2.6.2]
Ans:
1. Before an ordinary access to a shared variable is performed, all previous acquires done by the process must have completed successfully.
2. Before a release is allowed to be performed, all previous reads and writes done by the process must have completed.
3. The acquire and release accesses must be processor consistent (sequential consistency is not required).
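As an illustration, a sketch of how a program structured around acquire/release respects rules 1 and 2 above, modelled here with a plain threading.Lock in Python; the DSM-level machinery that propagates writes at release time is not shown.

    import threading

    lock = threading.Lock()
    shared = {"x": 0}

    def critical_update():
        lock.acquire()          # rule 1: the acquire completes before ordinary accesses
        shared["x"] += 1        # ordinary accesses to shared data happen in between
        lock.release()          # rule 2: all prior reads/writes complete before release

    threads = [threading.Thread(target=critical_update) for _ in range(4)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(shared["x"])          # 4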

21. Write briefly about: [5 marks | BL: L5 | CO: 5 | PO: 2 | PI: 2.6.2]
    i) Strict consistency
    ii) Causal consistency
Ans:
Strict consistency
• The most stringent consistency model is called strict consistency. It is defined by the following condition: any read to a memory location x returns the value stored by the most recent write operation to x.
• When memory is strictly consistent, all writes are instantaneously visible to all processes and an absolute global time order is maintained.
Causal consistency
• For a memory to be considered causally consistent, it is necessary that the memory obey the following condition: writes that are potentially causally related must be seen by all processes in the same order; concurrent writes may be seen in a different order on different machines.
• The causal consistency model thus makes a distinction between events that are potentially causally related and those that are not.
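For illustration, a small vector-clock sketch that tests whether two writes are potentially causally related or concurrent, which is exactly the distinction the causal model relies on; the clock values shown are hypothetical.

    def happens_before(vc_a, vc_b):
        """True if write A is causally before write B (A's clock <= B's and they differ)."""
        return all(a <= b for a, b in zip(vc_a, vc_b)) and vc_a != vc_b

    def relation(vc_a, vc_b):
        if happens_before(vc_a, vc_b) or happens_before(vc_b, vc_a):
            return "causally related: all processes must see them in the same order"
        return "concurrent: different machines may see them in different orders"

    # Hypothetical vector clocks for two writes in a 3-process system.
    print(relation([1, 0, 0], [1, 1, 0]))   # causally related
    print(relation([1, 0, 0], [0, 1, 0]))   # concurrent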

Part – D (1 x 10 = 10 Marks)


Answer any 1 question
22. State and explain the design and implementation issues of Distributed Shared Memory. [10 marks | BL: L5 | CO: 5 | PO: 2 | PI: 2.6.2]
Ans:
Issues in the design and implementation of DSM:
• Granularity
• Structure of the shared memory space
• Memory coherence and access synchronization
• Data location and access
• Replacement strategy
• Thrashing
• Heterogeneity
1. Granularity: Granularity refers to the block size of a DSM system, that is, the unit of sharing and the unit of data that moves across the network when a block fault occurs. The block size may range from a few words to a page or more, and the appropriate size may differ for different networks.
2. Structure of the shared memory space: Structure refers to the layout of the shared data in memory. The structure of the shared memory space of a DSM system usually depends on the type of applications that the DSM system is intended to support.
3. Memory coherence and access synchronization: In a DSM system, shared data items can be accessed by several nodes in the network simultaneously. The fundamental issue here is data inconsistency, which may be caused by such simultaneous accesses. To solve this problem, the DSM system must use synchronization primitives such as semaphores, event counts, and so on.
4. Data location and access: To share data in a DSM system, it must be possible to locate and retrieve the data accessed by clients or processes. Therefore, the DSM system must implement some form of data-block locating mechanism in order to serve network requests while meeting the requirements of the memory coherence semantics being used.
5. Replacement strategy: If the local memory of a node is full, a cache miss at that node implies not just a fetch of the accessed data block from a remote node but also a replacement: some data block of the local memory must be replaced by the new data block. Accordingly, a replacement strategy is also essential in the design of a DSM system.
6. Thrashing: In a DSM system, data blocks migrate between nodes on demand. Thus, if two nodes compete for write access to a single data item, the corresponding data block may be moved back and forth at such a high rate that no real work can get done. The DSM system must use a policy to avoid this situation, which is generally known as thrashing.
7. Heterogeneity: A DSM system built for homogeneous machines need not address the heterogeneity issue. However, if the underlying system environment is heterogeneous, the DSM system must be designed to deal with heterogeneity so that it works properly with machines having different architectures.
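As an illustration touching issues 1, 4, and 5 above, a hedged sketch of a page-granularity block table kept by a hypothetical centralized manager: it records each block's owner (data location) and evicts the least-recently-used block when local memory is full (replacement). This is a teaching sketch under those assumptions, not a specific DSM implementation.

    from collections import OrderedDict

    PAGE_SIZE = 4096                      # granularity: the unit of sharing (bytes), assumed
    LOCAL_CAPACITY = 4                    # hypothetical number of blocks in local memory

    owner = {}                            # block id -> owning node (centralized directory)
    local_blocks = OrderedDict()          # this node's cached blocks, oldest first

    def locate(block_id):
        """Data location: ask the directory which node currently holds the block."""
        return owner.get(block_id, "manager")

    def bring_in(block_id, data):
        """Fetch a block locally, evicting the LRU block if local memory is full."""
        if block_id in local_blocks:
            local_blocks.move_to_end(block_id)              # refresh LRU position
            return
        if len(local_blocks) >= LOCAL_CAPACITY:
            evicted, _ = local_blocks.popitem(last=False)   # replacement strategy: LRU
            # (a real system would write the evicted block back to its owner here)
        local_blocks[block_id] = data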

Or

23. Explain real-time distributed systems and real-time communication. List the design issues in real-time distributed systems. [10 marks | BL: L4 | CO: 4 | PO: 2 | PI: 2.6.2]
Ans:
Real-time distributed systems:
• A real-time system interacts with the external world in a way that involves time.
• Such a system is required to complete its work and deliver its services on a timely basis.
• In most cases, a late answer, even if correct, is considered a system failure.
Examples:
• Air traffic control system
• Automobile brake system
• Aircraft autopilot system
• Laser eye surgery system
• Tsunami early-detection system

Real-time communication:
• Real-time connections need to be established between distant machines to provide predictability.
• QoS is negotiated in advance to guarantee maximum delay, maximum jitter, and minimum bandwidth.
• Additional resources are needed: memory buffers, table entries, CPU cycles, and link capacity.
• Fault tolerance: duplicate packets are sent over duplicate links (information and physical redundancy).
• The Time-Triggered Protocol (TTP) is used in the MARS real-time distributed system.
• Redundancy:
  - Each node is comprised of one or more CPUs.
  - Nodes are connected by dual TDMA networks.
  - Every node maintains the global state of the system: mode, time, and membership.
• Clock synchronization is critical: each node uses packet skew to adjust its clock.
• A node gets rejected by all receiving nodes if:
  - a message is missing from its expected slot;
  - an expected acknowledgement is missing;
  - the expected global state does not match (CRC check fails).
• A periodic 'initialization' packet is broadcast with the global state.
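As an illustration, a sketch of the TDMA idea behind TTP described above: each node owns a fixed slot in a round, and a receiver flags any node whose message is missing from its expected slot. The slot length and node list are hypothetical.

    SLOT_MS = 10
    NODES = ["N0", "N1", "N2", "N3"]          # hypothetical TDMA ring of 4 nodes

    def slot_owner(time_ms):
        """Which node is allowed to transmit at this instant?"""
        slot = (time_ms // SLOT_MS) % len(NODES)
        return NODES[slot]

    def check_round(received):
        """received maps node -> True if its message arrived in its slot.
        Nodes with a missing message would be rejected by the receivers."""
        return [n for n in NODES if not received.get(n, False)]

    print(slot_owner(25))                                     # 'N2'
    print(check_round({"N0": True, "N1": True, "N3": True}))  # ['N2']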

Course Outcome (CO) and Bloom’s level (BL) Coverage in Questions

Approved by the Audit Professor/Course Coordinator
