
Name: Peter Lim

Chapter 3:

1. Original versions of Apple’s mobile iOS operating system provided no means of
concurrent processing. Discuss three major complications that concurrent processing adds to an
operating system.

- The OS must keep track of each process’s memory space, because one process could
otherwise corrupt the address space of another.
- Switching from one process to another adds time overhead: the register values of the
outgoing process must be saved to, and those of the incoming process loaded from, its
Process Control Block.
- If a running process needs a large amount of memory, other processes may have to be
swapped out to the hard disk, adding further time overhead.

2. The Sun UltraSPARC processor has multiple register sets. Describe what happens
when a context switch occurs if the new context is already loaded into one of the register sets.
What happens if the new context is in memory rather than in a register set and all the register sets
are in use?

- The CPU current-register-set pointer is changed to point to the set containing the new
context, which takes very little time. If the new context is in memory, one of the contexts
in a register set must be chosen and moved to memory, and the new context must be
loaded from memory into the set. This process takes a little more time than on systems
with one set of registers, depending on how a replacement victim is selected.

3. Describe the differences among short-term, medium-term, and long-term scheduling.

- The short-term scheduler selects from the ready processes the next process to run and
gives it the CPU. The long-term scheduler selects from the pool of processes that are
waiting on disk and loads the selected processes into memory; these processes have
not yet begun their execution. The medium-term scheduler takes processes that are
currently in memory and selects those to be swapped out to disk; they will be
swapped back in at a later point. This is done to improve the process mix or because of
memory requirements.

4. Give an example of a situation in which ordinary pipes are more suitable than named
pipes and an example of a situation in which named pipes are more suitable than ordinary pipes.

- Ordinary pipes are well suited to simple, one-way communication between related
processes (typically a parent and the child it creates).
- For example, a producer (parent) writes the contents of a file to the pipe, and a
consumer (child) reads from the pipe and counts the characters; a sketch of this case
appears after this list.
- Named pipes are more suitable when several unrelated processes need to communicate,
for example when several processes want to write messages to a log.
- Each writer sends its message to the named pipe, and a server process reads the
messages from the named pipe and appends them to the log file.
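A minimal sketch of the ordinary-pipe example above, using the POSIX pipe(), fork(), write(), and read() calls; the sample data and buffer size are illustrative assumptions, not part of the original answer:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];                          /* fd[0] = read end, fd[1] = write end */
    if (pipe(fd) == -1) { perror("pipe"); exit(1); }

    pid_t pid = fork();
    if (pid == 0) {                     /* child: producer writes into the pipe */
        close(fd[0]);
        const char *data = "sample file contents to be counted";
        write(fd[1], data, strlen(data));
        close(fd[1]);
        exit(0);
    }

    close(fd[1]);                       /* parent: consumer counts the characters */
    char buf[64];
    ssize_t n;
    long count = 0;
    while ((n = read(fd[0], buf, sizeof buf)) > 0)
        count += n;
    close(fd[0]);
    wait(NULL);
    printf("characters read: %ld\n", count);
    return 0;
}

For the log scenario, the writers and the log server are unrelated processes, so a named pipe created with mkfifo() would be used instead: each writer opens the FIFO by its file-system name and the server reads every message from it.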

5. What are the benefits and the disadvantages of each of the following? Consider both
the system level and the programmer level.
a. Synchronous and asynchronous communication
- Advantage of synchronous communication: blocking send and receive allow a
rendezvous between the sender and the receiver.
- Disadvantage: a rendezvous is not always required, and a blocking send then delays
the sender unnecessarily; the message could instead be delivered asynchronously.
For this reason, message-passing systems often provide both forms of synchronization.

b. Automatic and explicit buffering
- Automatic buffering provides a queue of effectively unbounded length, so the sender
never has to block while waiting to copy a message, but memory may be wasted on
buffer space.
- Explicit buffering specifies the size of the buffer; the sender may block while waiting
for available space, but memory is less likely to be wasted.

c. Send by copy and send by reference
- Send by reference allows the receiver to alter the state of a parameter; send by copy
does not.
- A benefit of send by reference is that it allows the programmer to write a distributed
version of a centralized application.

d. Fixed-sized and variable-sized messages
- Small, fixed-sized messages can simply be copied from the address space of the
sender into the address space of the receiving process, which keeps buffering simple.
- Larger, variable-sized messages are typically passed through shared memory to avoid
the cost of copying; a sketch combining parts (a) and (d) appears after this list.
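A minimal sketch of these ideas using POSIX message queues (Linux; compile with -lrt). The queue name /demo_queue and the limits of 8 messages of 64 bytes each are illustrative assumptions. It shows explicit, fixed-size buffering (part d) and blocking versus non-blocking operation (part a):

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    /* Explicit, fixed-size buffering: at most 8 messages of at most 64 bytes. */
    struct mq_attr attr = { .mq_flags = 0, .mq_maxmsg = 8,
                            .mq_msgsize = 64, .mq_curmsgs = 0 };
    mqd_t mq = mq_open("/demo_queue", O_CREAT | O_RDWR, 0600, &attr);
    if (mq == (mqd_t)-1) { perror("mq_open"); return 1; }

    /* Blocking send: the sender would block if the queue were full. */
    const char *msg = "hello";
    if (mq_send(mq, msg, strlen(msg) + 1, 0) == -1)
        perror("mq_send");

    /* Blocking receive: blocks until a message is available. */
    char buf[64];
    if (mq_receive(mq, buf, sizeof buf, NULL) != -1)
        printf("received: %s\n", buf);

    /* Non-blocking (asynchronous-style) receive: fails with EAGAIN
       instead of blocking when the queue is empty. */
    mqd_t nb = mq_open("/demo_queue", O_RDONLY | O_NONBLOCK);
    if (nb != (mqd_t)-1 && mq_receive(nb, buf, sizeof buf, NULL) == -1)
        perror("mq_receive (non-blocking)");

    if (nb != (mqd_t)-1) mq_close(nb);
    mq_close(mq);
    mq_unlink("/demo_queue");
    return 0;
}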

Chapter 4:

1. Under what circumstances does a multithreaded solution using multiple kernel threads
provide better performance than a single-threaded solution on a single-processor system?

- When a kernel thread suffers a page fault (or blocks for I/O), another kernel thread can
be switched in and use the intervening time in a useful manner. A single-threaded
process, on the other hand, cannot perform useful work while a page fault is being
serviced.
- Therefore, when a program experiences frequent page faults or otherwise blocks often,
a multithreaded solution performs better even on a single processor; see the sketch
following this answer.
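The sketch below (compile with -pthread) shows the idea: sleep() stands in for a page fault or blocking I/O, and the thread names and loop sizes are illustrative assumptions.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* Simulates a thread that blocks frequently (e.g., on page faults or I/O). */
static void *blocking_worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 3; i++) {
        sleep(1);                       /* stand-in for a blocking operation */
        printf("blocking worker resumed (%d)\n", i);
    }
    return NULL;
}

/* A CPU-bound thread that keeps the processor busy while the other blocks. */
static void *compute_worker(void *arg) {
    volatile unsigned long sum = 0;
    (void)arg;
    for (unsigned long i = 0; i < 300000000UL; i++)
        sum += i;
    printf("compute worker done, sum = %lu\n", sum);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, blocking_worker, NULL);
    pthread_create(&t2, NULL, compute_worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}

With kernel-level threads, the kernel can run compute_worker whenever blocking_worker is asleep, so useful work continues even on a single processor; a single-threaded version would simply sit idle during each blocking period.
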
2. Can a multithreaded solution using multiple user-level threads achieve better
performance on a multiprocessor system than on a single processor system? Explain.

- No. A multithreaded program consisting of multiple user-level threads cannot make use
of the different processors in a multiprocessor system simultaneously. The operating
system sees only a single schedulable entity and therefore will not schedule the
different user-level threads of the process on separate processors.

3. Is it possible to have concurrency but not parallelism? Explain.

- Yes, it is possible to have concurrency without parallelism; the reverse is not true,
since parallelism is not possible without concurrency.
- A system is parallel if it can perform more than one task simultaneously. In contrast,
a concurrent system only needs to support more than one task by allowing all the tasks
to make progress; on a single processor, a CPU scheduler that rapidly interleaves the
tasks provides concurrency without parallelism.

4. Consider a multicore system and a multithreaded program written using the many-to-
many threading model. Let the number of user-level threads in the program be greater than the
number of processing cores in the system. Discuss the performance implications of the following
scenarios.
a. The number of kernel threads allocated to the program is less than the number of
processing cores.

- Some processing cores would remain idle, because the scheduler maps only kernel
threads to cores, not user-level threads.

b. The number of kernel threads allocated to the program is equal to the number of
processing cores.

- All the processing cores can be utilized simultaneously.
- One downside is that when a kernel thread blocks inside the kernel, its corresponding
core remains idle until the thread unblocks.

c. The number of kernel threads allocated to the program is greater than the number of
processing cores but less than the number of user-level threads.

- When a kernel thread blocks, it can be swapped out for another kernel thread that is
ready to execute.
- Core utilization therefore increases.

Chapter 5:

1. Explain why implementing synchronization primitives by disabling interrupts is not
appropriate in a single-processor system if the synchronization primitives are to be used in user-
level programs.

- If such primitives were available to user-level programs, those programs would in
effect be given permission to disable interrupts.
- A user-level program could then disable the timer interrupt and monopolize the
processor, preventing all other processes from executing.

2. The Linux kernel has a policy that a process cannot hold a spinlock while attempting to
acquire a semaphore. Explain why this policy is in place.

- Acquiring a semaphore may require the process to sleep until the semaphore becomes
available.
- A process is not allowed to sleep while holding a spinlock, since any other processor
spinning on that lock would waste CPU cycles for the entire time the holder sleeps.
- Therefore, a process must not attempt to acquire a semaphore while it holds a spinlock;
a user-space sketch of the forbidden pattern follows.
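The sketch below uses a POSIX spinlock and an unnamed semaphore (Linux; compile with -pthread) to illustrate the forbidden ordering; the function name bad_worker and the setup are illustrative assumptions, not kernel code.

#include <pthread.h>
#include <semaphore.h>

static pthread_spinlock_t lock;
static sem_t sem;

/* The pattern the Linux kernel forbids: trying to acquire a semaphore
   (which may sleep) while a spinlock is held.  Any other thread spinning
   on `lock` would burn CPU for as long as this thread sleeps in sem_wait(). */
static void *bad_worker(void *arg) {
    (void)arg;
    pthread_spin_lock(&lock);
    sem_wait(&sem);              /* may sleep while the spinlock is still held */
    sem_post(&sem);
    pthread_spin_unlock(&lock);
    return NULL;
}

int main(void) {
    pthread_spin_init(&lock, PTHREAD_PROCESS_PRIVATE);
    sem_init(&sem, 0, 0);        /* count 0, so sem_wait() will block */

    pthread_t t;
    pthread_create(&t, NULL, bad_worker, NULL);

    sem_post(&sem);              /* let bad_worker eventually proceed; the real
                                    fix is to release the spinlock before any
                                    call that can sleep */
    pthread_join(t, NULL);

    sem_destroy(&sem);
    pthread_spin_destroy(&lock);
    return 0;
}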

3. Explain why interrupts are not appropriate for implementing synchronization
primitives in multiprocessor systems.

- Disabling interrupts is not sufficient on a multiprocessor system, because it only
prevents other processes from executing on the processor on which interrupts were
disabled.
- There are no limitations on what processes may be executing on the other processors,
so disabling interrupts cannot guarantee mutually exclusive access to shared program
state.

Chapter 6:

1. Why is it important for the scheduler to distinguish I/O-bound programs from CPU-
bound programs?

- I/O-bound programs perform only a small amount of computation before performing
I/O and typically do not use up their entire CPU quantum.
- CPU-bound programs use their entire quantum without performing any blocking I/O
operations.
- Distinguishing the two lets the scheduler give I/O-bound programs higher priority, so
that the I/O devices are kept busy while CPU-bound programs fill in the remaining
CPU time.

2. Discuss how the following pairs of scheduling criteria conflict in certain settings.
a. CPU utilization and response time
- CPU utilization is increased if the overhead of context switching is minimized.
- However, switching infrequently means interactive requests wait longer, so response
time increases.

b. Average turnaround time and maximum waiting time
- Average turnaround time is minimized by executing the shortest tasks first.
- Such a policy, however, can starve long-running tasks, increasing the maximum
waiting time.
c. I/O device utilization and CPU utilization
- CPU utilization is maximized by running long CPU-bound tasks without context
switching.
- I/O device utilization is maximized by scheduling I/O-bound jobs as soon as they
become ready to run, which incurs the overhead of frequent context switches.

3. Consider a system implementing multilevel queue scheduling. What strategy can a
computer user employ to maximize the amount of CPU time allocated to the user’s process?

- The program could maximize the CPU time allocated to it by not fully utilizing its
time quantum. It could use a large fraction of its assigned quantum, but relinquish the
CPU before the end of the quantum, thereby increasing the priority associated with
the process, as sketched below.
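A minimal sketch of this strategy using the POSIX sched_yield() call. The size of a work unit and the loop count are illustrative assumptions, and whether yielding early actually keeps the process in a high-priority queue depends on the particular multilevel feedback policy in use.

#include <sched.h>
#include <stdio.h>

/* Illustrative work unit; in a real program this would be a slice of the
   user's actual computation, kept shorter than one time quantum. */
static void do_small_unit_of_work(void) {
    volatile unsigned long x = 0;
    for (unsigned long i = 0; i < 1000000UL; i++)
        x += i;
}

int main(void) {
    for (int i = 0; i < 100; i++) {
        do_small_unit_of_work();    /* use only part of the quantum ...        */
        sched_yield();              /* ... then relinquish the CPU voluntarily
                                       before the quantum expires              */
    }
    printf("done\n");
    return 0;
}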

4. Explain why interrupt and dispatch latency times must be bounded in a hard real-time
system.

- Interrupt and dispatch latency must be bounded to ensure that real-time tasks receive
attention within a guaranteed amount of time.
- In a hard real-time system, unbounded latencies would make it impossible to guarantee
that deadlines are met and the required quality of service is delivered.
