
PART I.

1. In your perspective, what makes counting semaphore primitives a good concurrency mechanism?
Semaphores are advantageous because they are largely machine-independent; they live in the machine-independent code of the microkernel. They prevent several processes from entering the critical section at the same time, and because a waiting process blocks on the semaphore instead of spinning, no processor time or resources are wasted on busy waiting. Semaphores also solve the critical-section problem through the atomic operations "wait" and "signal", which are intended to synchronize processes: wait decrements the value of S if it is positive, and if S is zero or negative the calling process blocks and takes no further action; signal, in contrast, increments the value of S and wakes a blocked process if one exists, as the sketch below illustrates.
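To make the wait/signal behavior concrete, here is a minimal sketch of counting-semaphore primitives built on POSIX threads. The names (csem_t, csem_wait, csem_signal) and the structure are illustrative assumptions rather than the code from the course figures; in this variant the count never goes negative, so wait simply blocks while the count is zero.

#include <pthread.h>

/* Illustrative counting semaphore: a count, plus a lock and a condition
 * variable that stands in for the queue of blocked processes. */
typedef struct {
    int count;
    pthread_mutex_t lock;
    pthread_cond_t  cond;
} csem_t;

void csem_init(csem_t *s, int value) {
    s->count = value;
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->cond, NULL);
}

/* wait: block while the count is zero, then take one unit. */
void csem_wait(csem_t *s) {
    pthread_mutex_lock(&s->lock);
    while (s->count == 0)
        pthread_cond_wait(&s->cond, &s->lock);
    s->count--;
    pthread_mutex_unlock(&s->lock);
}

/* signal: return one unit and wake a blocked process, if any. */
void csem_signal(csem_t *s) {
    pthread_mutex_lock(&s->lock);
    s->count++;
    pthread_cond_signal(&s->cond);
    pthread_mutex_unlock(&s->lock);
}

int main(void) {
    csem_t s;
    csem_init(&s, 3);   /* e.g., three identical resources available */
    csem_wait(&s);      /* count 3 -> 2 */
    csem_signal(&s);    /* count 2 -> 3 */
    return 0;
}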
2. How does the structure of counting semaphore primitives differ from binary
semaphore primitives?
- Binary semaphores are similar to counting semaphores, but their value is restricted to 0 or 1. The wait operation can only succeed while the semaphore is 1, and the signal operation only has an effect while the semaphore is 0, setting the value back to 1 or waking a waiting process. Binary semaphores are sometimes simpler to implement than counting semaphores. Counting semaphores, on the other hand, hold a count that represents the number of resources still available: the count increases when a resource is returned and decreases when one is taken, as the usage sketch below shows.
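As a hedged usage sketch of that difference, the example below uses POSIX semaphores: one initialized to a pool size of 3 (counting) and one initialized to 1 (effectively binary). The pool size, the names, and the use_resource function are made up for illustration.

#include <semaphore.h>
#include <stdio.h>

#define POOL_SIZE 3   /* illustrative number of identical resources */

sem_t pool;   /* counting semaphore: starts at POOL_SIZE */
sem_t mutex;  /* binary use of a semaphore: starts at 1 */

void use_resource(void) {
    sem_wait(&pool);    /* count drops by one; blocks when all resources are taken */
    sem_wait(&mutex);   /* value drops to 0; at most one holder at a time */
    /* ... work on shared bookkeeping ... */
    sem_post(&mutex);   /* value back to 1 */
    sem_post(&pool);    /* resource returned; count rises by one */
}

int main(void) {
    sem_init(&pool,  0, POOL_SIZE);
    sem_init(&mutex, 0, 1);
    use_resource();
    puts("done");
    sem_destroy(&pool);
    sem_destroy(&mutex);
    return 0;
}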
PART II.
3. Briefly explain the purpose of the semWaitB and semSignalB functions in Figure 2.
- semSignalB either removes a waiting process P from the semaphore's queue and places P on the ready list, or, if the queue is empty, changes the value to one. The main purpose of semWaitB, on the other hand, is to set the value to zero if it is currently one, or else to place the currently running process on the semaphore's queue and block it. A rough approximation of this behavior is sketched below.
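The exact code of Figure 2 is not reproduced here; the following is only an approximation of the described semWaitB/semSignalB behavior using POSIX threads, where the condition variable's internal wait queue stands in for the semaphore's queue of blocked processes. The type and function names (bsem_t, bsem_waitB, bsem_signalB) are assumptions for illustration.

#include <pthread.h>

typedef struct {
    int value;                 /* restricted to 0 or 1 */
    pthread_mutex_t lock;
    pthread_cond_t  queue;     /* blocked callers wait here */
} bsem_t;

void bsem_init(bsem_t *s, int value) {
    s->value = value ? 1 : 0;
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->queue, NULL);
}

/* Like semWaitB: if the value is one, set it to zero; otherwise the
 * caller blocks (conceptually, it is placed on the semaphore's queue). */
void bsem_waitB(bsem_t *s) {
    pthread_mutex_lock(&s->lock);
    while (s->value == 0)
        pthread_cond_wait(&s->queue, &s->lock);
    s->value = 0;
    pthread_mutex_unlock(&s->lock);
}

/* Like semSignalB: set the value back to one and wake one blocked caller,
 * if any (conceptually, move a process P from the queue to the ready list);
 * signalling while the value is already one has no further effect. */
void bsem_signalB(bsem_t *s) {
    pthread_mutex_lock(&s->lock);
    s->value = 1;
    pthread_cond_signal(&s->queue);
    pthread_mutex_unlock(&s->lock);
}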
4. Based on Figures 1 and 2, which semaphore structure is easier to implement and why?
- In the diagram, 0 indicates that a process or thread is inside the critical section and that any other process or thread should wait for it to finish, while 1 indicates that no process currently holds the shared resources and the critical section is unoccupied. No two processes can be in the critical section at the same moment, which ensures mutual exclusion. With this structure, less code is needed, yet execution and handling are greatly improved. Based on my observation, the structure in Figure 2 (the binary semaphore) is easier to implement, because the idea behind a binary semaphore is that it lets each process enter the critical section in turn and thereby access the shared resources, as in the sketch below.
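A self-contained, hedged sketch of that idea: two threads increment a shared counter, and a POSIX semaphore initialized to 1 acts as the binary semaphore that lets them take turns in the critical section. The thread bodies and the counter are illustrative.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t s;            /* 1 = critical section free, 0 = occupied */
static long counter = 0;   /* shared resource */

void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        sem_wait(&s);      /* 1 -> 0, or block until it becomes 1 */
        counter++;         /* critical section: only one thread at a time */
        sem_post(&s);      /* back to 1; a waiting thread may now enter */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&s, 0, 1);    /* binary use of a counting semaphore */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    sem_destroy(&s);
    return 0;
}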

PART III.
1. Deduce at least one (1) characteristic of a monitor based on Figure 3. Elaborate on your
answer.
- One characteristic that can be deduced from Figure 3 is that only one process can be active inside the monitor at a time; other processes that want to enter must wait in a holding area (the entry queue) until the monitor is free. The monitor also provides condition variables with their own waiting areas, where a blocked process stays until the condition it needs is signaled, which helps the system keep making progress. Finally, the monitor's local data can only be reached through the monitor's own procedures, so every process that touches that data is handled in the same controlled way.
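C has no built-in monitors, so the sketch below only approximates the characteristic described above using POSIX threads: the mutex plays the role of the monitor's entry queue (one process active inside at a time), the condition variable is the waiting area for a condition, and items stands in for the monitor's local data. The names monitor_t, mon_deposit, and mon_take are made up for illustration.

#include <pthread.h>

typedef struct {
    pthread_mutex_t entry;     /* monitor lock: mutual exclusion on local data */
    pthread_cond_t  not_empty; /* condition: consumers wait here */
    int items;                 /* the monitor's local data */
} monitor_t;

void mon_init(monitor_t *m) {
    pthread_mutex_init(&m->entry, NULL);
    pthread_cond_init(&m->not_empty, NULL);
    m->items = 0;
}

void mon_deposit(monitor_t *m) {
    pthread_mutex_lock(&m->entry);      /* enter the monitor */
    m->items++;
    pthread_cond_signal(&m->not_empty); /* wake one waiter, if any */
    pthread_mutex_unlock(&m->entry);    /* leave the monitor */
}

void mon_take(monitor_t *m) {
    pthread_mutex_lock(&m->entry);      /* enter the monitor */
    while (m->items == 0)               /* wait on a condition inside the monitor */
        pthread_cond_wait(&m->not_empty, &m->entry);
    m->items--;
    pthread_mutex_unlock(&m->entry);    /* leave the monitor */
}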
2. Would you agree that a monitor, as a concurrency mechanism, can support process
synchronization? Why or why not?

Yes, absolutely. A monitor module is a good illustration of both resource definition and resource management: it encapsulates a shared resource together with the procedures that operate on it. A process accesses the resource through those procedures, which serve as the only entry points to the shared data. A monitor is one kind of abstract data type that can be employed to accomplish process synchronization. Monitors are supported directly by some programming languages in order to provide mutual exclusion over shared data, so mutual exclusion between tasks that use the monitor is preserved. Processes that try to enter the monitor while it is already in use are blocked in the monitor's entry queue, as the sketch below illustrates.
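As a small self-contained demonstration of that entry-queue behavior (again approximated with POSIX threads, since C has no native monitors), two threads call the same "monitor procedure": whichever arrives second blocks at the entry lock until the first one leaves. The thread names and the sleep are only there to make the ordering visible.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t monitor_entry = PTHREAD_MUTEX_INITIALIZER;
static int shared = 0;   /* the monitor's local data */

void *worker(void *name) {
    pthread_mutex_lock(&monitor_entry);   /* join the entry queue if occupied */
    shared++;                             /* body of a monitor procedure */
    printf("%s is alone inside the monitor (shared=%d)\n", (char *)name, shared);
    sleep(1);                             /* exaggerate the time spent inside */
    pthread_mutex_unlock(&monitor_entry); /* leave; the next queued thread enters */
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, worker, "A");
    pthread_create(&b, NULL, worker, "B");
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}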
