
Q 4.3) Which of the following components of program state are shared across
threads in a multithreaded process?
a. Register values
b. Heap memory
c. Global variables
d. Stack memory

Ans

stack: no (each thread has its own stack)

registers: no (each thread has its own register set)

heap: yes (if you must answer yes or no; strictly, the true answer is "it depends")

globals: yes
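A minimal POSIX-threads sketch of this (illustrative names, not part of the original question): both threads see each other's updates to the global and to the heap, while each thread's stack variable stays private.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

int global_counter = 0;   /* global: shared by all threads */
int *heap_counter;        /* heap: shared once both threads hold the pointer */

void *worker(void *arg) {
    int local = 0;                             /* stack: private to this thread */
    local++;
    __sync_fetch_and_add(&global_counter, 1);  /* shared data, so update atomically */
    __sync_fetch_and_add(heap_counter, 1);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    heap_counter = malloc(sizeof *heap_counter);
    *heap_counter = 0;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* both counters print 2; each thread's "local" was its own copy */
    printf("global=%d heap=%d\n", global_counter, *heap_counter);
    free(heap_counter);
    return 0;
}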

Q 4.6) What are two differences between user-level threads and kernel-level
threads? Under what circumstances is one type better than the other?
Ans

User-level threads:
• Their existence is unknown to the kernel.
• They are managed without kernel support, by a thread library.
• They are faster to create than kernel-level threads.
• They are scheduled by the thread library.

Kernel-level threads:
• Their existence is known to the kernel.
• They are managed by the operating system.
• They are slower to create than user-level threads.
• They are scheduled by the kernel.

Circumstances where kernel-level threads are better than user-level threads:

• If the kernel is single-threaded, then kernel-level threads are better than user-level
threads, because any user-level thread performing a blocking system call will cause
the entire process to block, even if other threads are available to run within the
application.
• For example, suppose process P1 has 2 kernel-level threads and process P2 has 2
user-level threads. If one thread in P1 blocks, its second thread is not affected. But in
the case of P2, if one thread blocks (say, for I/O), the whole process P2, including the
second thread, blocks.
• In a multiprocessor environment, kernel-level threads are better than user-level
threads, because kernel-level threads can run on different processors simultaneously,
while the user-level threads of a process run on only one processor, even if multiple
processors are available.
Circumstances where user-level threads are better than kernel-level threads:

If the kernel is time-shared, then user-level threads are better than kernel-level threads,
because in time-shared systems context switching takes place frequently. Context
switching between kernel-level threads has high overhead, almost the same as a process
switch, whereas context switching between user-level threads has almost no overhead in
comparison.

Q. 4.10) What resources are used when a thread is created? How do they differ
from those used when a process is created?
Ans

When a thread is created, it does not require any new resources to execute; it shares the
resources, such as memory, of the process to which it belongs. The benefit of this sharing
is that it allows an application to have several different threads of activity, all within the
same address space. Process creation, by contrast, is heavyweight because it always
requires a new address space to be created, and even when processes share memory,
inter-process communication is expensive compared to communication between threads.
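A sketch contrasting the two creation paths on a POSIX system (illustrative, assuming pthreads and fork()): pthread_create() starts the new thread inside the caller's existing address space, so it sees the same variables, while fork() builds a new address space, so the child's update is invisible to the parent.

#include <pthread.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static int shared = 42;    /* lives in the single shared address space */

static void *thread_body(void *arg) {
    shared++;              /* same memory as the creating thread */
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, thread_body, NULL);   /* cheap: a stack and a control block */
    pthread_join(t, NULL);
    printf("after thread: shared = %d\n", shared); /* prints 43 */

    pid_t pid = fork();                            /* heavyweight: a whole new address space */
    if (pid == 0) {
        shared++;                                  /* changes only the child's copy */
        _exit(0);
    }
    wait(NULL);
    printf("after fork:   shared = %d\n", shared); /* still 43 in the parent */
    return 0;
}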

Q 4.15) Describe the actions taken by a thread library to context-switch between
user-level threads.
Ans

Context switching between user threads is quite similar to switching between kernel
threads, although it depends on the thread library and how it maps user threads to
kernel threads. In general, context switching between user threads involves taking a user
thread off its LWP (lightweight process) and replacing it with another thread. This act
typically involves saving and restoring the state of the registers.
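As an illustration, the POSIX <ucontext.h> interface exposes exactly this save-and-restore step, and user-level thread libraries can be built on it. A minimal sketch (thread_func and the stack size are illustrative):

#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, thread_ctx;
static char thread_stack[64 * 1024];     /* each user thread needs its own stack */

static void thread_func(void) {
    printf("user thread running\n");
    swapcontext(&thread_ctx, &main_ctx); /* save this thread's registers, restore main's */
}

int main(void) {
    getcontext(&thread_ctx);             /* initialize the new thread's context */
    thread_ctx.uc_stack.ss_sp = thread_stack;
    thread_ctx.uc_stack.ss_size = sizeof thread_stack;
    thread_ctx.uc_link = &main_ctx;      /* where to continue if thread_func returns */
    makecontext(&thread_ctx, thread_func, 0);

    printf("switching to user thread\n");
    swapcontext(&main_ctx, &thread_ctx); /* the context switch: save main, run the thread */
    printf("back in main\n");
    return 0;
}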
6.2) Explain why interrupts are not appropriate for implementing synchronization
primitives in multiprocessor systems.

Ans

Interrupts are not sufficient in multiprocessor systems, since disabling interrupts only
prevents other processes from executing on the processor on which interrupts were disabled;
there are no limitations on what processes may be executing on other processors, and
therefore the process disabling interrupts cannot guarantee mutually exclusive access to
program state.

6.11) Show that, if the wait() and signal() semaphore operations are not
executed atomically, then mutual exclusion may be violated.

Ans

A wait operation atomically decrements the value associated with a semaphore. If two wait
operations are executed on a semaphore when its value is 1, and the two operations are not
performed atomically, then it is possible that both operations proceed to decrement
the semaphore value, thereby violating mutual exclusion.
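A small demonstration of the interleaving (POSIX threads; the usleep() merely widens the race window so the bad schedule occurs reliably):

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static volatile int S = 1;       /* semaphore value, manipulated non-atomically */
static volatile int inside = 0;  /* how many threads entered the critical section */

static void *nonatomic_wait(void *arg) {
    int observed = S;            /* step 1: read the value        */
    usleep(1000);                /* both threads now hold "1"     */
    if (observed > 0) {          /* step 2: test the stale value  */
        S = observed - 1;        /* step 3: write back            */
        __sync_fetch_and_add(&inside, 1);
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, nonatomic_wait, NULL);
    pthread_create(&b, NULL, nonatomic_wait, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    /* prints 2, although mutual exclusion requires at most 1 */
    printf("threads inside critical section: %d\n", inside);
    return 0;
}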

6.12) Show how to implement the wait() and signal() semaphore operations
in multiprocessor environments using the TestAndSet() instruction.
The solution should exhibit minimal busy waiting.
Ans

int guard = 0;
int semaphore_value = 0;

wait()
{
    /* busy waiting is confined to this short guard section */
    while (TestAndSet(&guard) == 1)
        ;
    if (semaphore_value == 0) {
        /* atomically add the calling process to the queue of
           processes waiting for the semaphore, and set guard to 0 */
    } else {
        semaphore_value--;
        guard = 0;
    }
}

signal()
{
    while (TestAndSet(&guard) == 1)
        ;
    if (semaphore_value == 0 && /* there is a process on the wait queue */)
        /* wake up the first process in the queue of waiting processes */
    else
        semaphore_value++;
    guard = 0;
}
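For comparison, the same structure in C11, with atomic_flag_test_and_set() standing in for the TestAndSet() instruction. The queue operations need scheduler support, so block_on(), wakeup_one(), and queue_nonempty() are hypothetical helpers, not real APIs; the point of the sketch is that busy waiting stays confined to the short guard section.

#include <stdatomic.h>

static atomic_flag guard = ATOMIC_FLAG_INIT;
static int semaphore_value = 0;

/* hypothetical scheduler-assisted helpers (not real APIs) */
struct waitqueue;
extern struct waitqueue waitq;
void block_on(struct waitqueue *q, atomic_flag *g); /* enqueue caller, clear *g, sleep */
void wakeup_one(struct waitqueue *q);               /* make one waiter runnable */
int  queue_nonempty(struct waitqueue *q);

void sem_wait_ts(void) {
    while (atomic_flag_test_and_set(&guard))        /* TestAndSet: spin on the guard only */
        ;
    if (semaphore_value == 0) {
        block_on(&waitq, &guard);                   /* sleep; the helper releases the guard */
    } else {
        semaphore_value--;
        atomic_flag_clear(&guard);
    }
}

void sem_signal_ts(void) {
    while (atomic_flag_test_and_set(&guard))
        ;
    if (semaphore_value == 0 && queue_nonempty(&waitq))
        wakeup_one(&waitq);
    else
        semaphore_value++;
    atomic_flag_clear(&guard);
}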

6.19) Describe two kernel data structures in which race conditions are possible.
Be sure to include a description of how a race condition can occur.

Ans

Examples of kernel data structures include the process-ID (pid) management system, the kernel
process table, and the scheduling queues. With pid management, two processes may be created
at the same time, creating a race condition in assigning each process a unique pid. The same
type of race condition can occur in the kernel process table: two processes are created at the
same time and race for a slot in the table. With scheduling queues, one process may have been
waiting for I/O that is now available while another process is being context-switched out; if
both are moved to the runnable queue at the same time, there is a race condition on the
runnable queue.

6.26) Discuss the trade-off between fairness and throughput of operations
in the readers-writers problem. Propose a method for solving the
readers-writers problem without causing starvation.

Ans

Throughput in the readers-writers problem is increased by favouring multiple readers over
allowing a single writer exclusive access to the shared values. On the other hand, favouring
readers could result in starvation for writers. Starvation in the readers-writers problem can
be avoided by keeping timestamps associated with waiting processes. When a writer finishes
its task, it wakes up the process that has been waiting for the longest duration. When a
reader arrives and notices that another reader is accessing the database, it enters the
critical section only if there are no waiting writers. These restrictions guarantee fairness.
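One concrete way to approximate this fairness (a sketch, not from the text) is the classic "service queue" pattern with POSIX semaphores: every arriving reader or writer first passes through a common turnstile, so a steady stream of readers cannot overtake a waiting writer indefinitely. Strict arrival order additionally assumes the semaphore wakes waiters FIFO, which POSIX does not guarantee.

#include <semaphore.h>

/* initialize each semaphore to 1 with sem_init(&s, 0, 1) before use */
static sem_t service;     /* arrival-order turnstile */
static sem_t resource;    /* exclusive access to the shared data */
static sem_t rmutex;      /* protects read_count */
static int read_count = 0;

void reader(void) {
    sem_wait(&service);        /* queue up behind any earlier writer */
    sem_wait(&rmutex);
    if (++read_count == 1)
        sem_wait(&resource);   /* first reader locks out writers */
    sem_post(&rmutex);
    sem_post(&service);        /* let the next arrival proceed */

    /* ... read the shared data ... */

    sem_wait(&rmutex);
    if (--read_count == 0)
        sem_post(&resource);   /* last reader lets writers in */
    sem_post(&rmutex);
}

void writer(void) {
    sem_wait(&service);        /* queue up in arrival order */
    sem_wait(&resource);       /* exclusive access */
    sem_post(&service);

    /* ... write the shared data ... */

    sem_post(&resource);
}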

6.28) What is the meaning of the term busy waiting? What other kinds of
waiting are there in an operating system? Can busy waiting be avoided
altogether? Explain your answer.

Ans
Busy waiting means that a process is waiting for a condition to be satisfied in a tight loop,
without relinquishing the processor. Alternatively, a process could wait by relinquishing the
processor, blocking on a condition, and waiting to be awakened at some appropriate time in
the future. Busy waiting can be avoided, but doing so incurs the overhead associated with
putting a process to sleep and having to wake it up when the appropriate program state is
reached.
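The two kinds of waiting, side by side as POSIX-threads sketches (ready and make_ready are illustrative names): the first loop spins without giving up the CPU, while the second blocks on a condition variable and relinquishes the processor until it is signaled.

#include <pthread.h>
#include <stdbool.h>

static volatile bool ready = false;  /* condition set by another thread */
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  c = PTHREAD_COND_INITIALIZER;

void wait_busy(void) {
    while (!ready)     /* busy waiting: a tight loop that never yields */
        ;              /* (production code would use atomics and a pause hint) */
}

void wait_blocking(void) {
    pthread_mutex_lock(&m);
    while (!ready)                  /* blocking wait: sleep until signaled */
        pthread_cond_wait(&c, &m);  /* releases m and relinquishes the CPU */
    pthread_mutex_unlock(&m);
}

void make_ready(void) {             /* run by the thread that satisfies the condition */
    pthread_mutex_lock(&m);
    ready = true;
    pthread_cond_broadcast(&c);     /* wake every blocked waiter */
    pthread_mutex_unlock(&m);
}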

6.29) Demonstrate that monitors and semaphores are equivalent insofar as
they can be used to implement the same types of synchronization
problems.

Ans

A monitor is an object designed to be accessed from multiple threads. The member
functions or methods of a monitor object enforce mutual exclusion, so only one
thread may be performing any action on the object at a given time. If one thread is
currently executing a member function of the object, then any other thread that tries to
call a member function of that object will have to wait until the first has finished.

A semaphore is a lower-level object. You might well use a semaphore to implement a
monitor. A semaphore is essentially just a counter. When the counter is positive, if a
thread tries to acquire the semaphore then it is allowed, and the counter is decremented.
When a thread is done, it releases the semaphore, incrementing the counter.

If the counter is already zero when a thread tries to acquire the semaphore then it has to
wait until another thread releases the semaphore. If multiple threads are waiting when a
thread releases a semaphore then one of them gets it. The thread that releases a
semaphore need not be the same thread that acquired it.

A monitor is like a public toilet. Only one person can enter at a time. They lock the door
to prevent anyone else coming in, do their stuff, and then unlock it when they leave.

A semaphore is like a bike hire place. They have a certain number of bikes. If you try and
hire a bike and they have one free then you can take it, otherwise you must wait. When
someone returns their bike then someone else can take it. If you have a bike then you
can give it to someone else to return --- the bike hire place doesn't care who returns it,
as long as they get their bike back.
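To make the equivalence concrete in one direction, here is a sketch (illustrative names, POSIX semaphores) of a monitor-style operation built from a binary semaphore: bracketing every member function with acquire/release gives the mutual exclusion a monitor provides implicitly. The other direction, building a semaphore out of a monitor, keeps an integer count plus a condition variable inside the monitor.

#include <semaphore.h>

static sem_t monitor_lock;   /* binary semaphore: sem_init(&monitor_lock, 0, 1) */
static int balance = 0;      /* state that the "monitor" protects */

void monitor_deposit(int amount) {  /* one "member function" of the monitor */
    sem_wait(&monitor_lock);        /* enter the monitor: lock the door */
    balance += amount;              /* at most one thread executes here at a time */
    sem_post(&monitor_lock);        /* leave the monitor: unlock the door */
}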

6.34) Race conditions are possible in many computer systems. Consider
a banking system with two functions: deposit(amount) and withdraw(amount).
These two functions are passed the amount that is to be deposited or
withdrawn from a bank account. Assume a shared bank account exists between
a husband and wife and concurrently the husband calls the withdraw()
function and the wife calls deposit(). Describe how a race condition is
possible and what might be done to prevent the race condition from occurring.
Ans
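Both functions perform a non-atomic read-modify-write on the shared balance: each reads the current balance, computes a new value, and writes it back. If the husband's withdraw() and the wife's deposit() run concurrently, both may read the same balance before either writes; whichever write happens last silently overwrites the other's update, so money is lost or created. The race can be prevented by making each update atomic, for example by guarding the account with a mutual-exclusion lock (or a binary semaphore or monitor) so that only one of the two functions manipulates the balance at a time.

A minimal sketch of the fix (assuming POSIX threads; the initial balance is illustrative):

#include <pthread.h>

static int balance = 100;
static pthread_mutex_t account_lock = PTHREAD_MUTEX_INITIALIZER;

void deposit(int amount) {
    pthread_mutex_lock(&account_lock);   /* the read-modify-write is now atomic */
    balance += amount;
    pthread_mutex_unlock(&account_lock);
}

void withdraw(int amount) {
    pthread_mutex_lock(&account_lock);
    if (balance >= amount)               /* check and update under one lock */
        balance -= amount;
    pthread_mutex_unlock(&account_lock);
}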
