OS Test 2 Key
Rule 3: When a job enters the system, it is placed at the highest priority (the topmost queue)
Rule 4a: If a job uses up an entire time slice while running, its priority is reduced (i.e., it moves down one queue)
Rule 4b: If a job gives up CPU before the time slice is up, it stays at the same priority level
Rule 5: After some time period S, move all the jobs in the system to the topmost queue
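A minimal C sketch of rules 4a, 4b, and 5 (a simplified model; the job_t fields and function names are assumptions, and a real MLFQ also maintains one ready queue per level):

#include <stdbool.h>

#define NUM_QUEUES 3           // queue 0 is the topmost (highest priority)

typedef struct {
    int  priority;             // current queue index; 0 = topmost
    bool used_full_slice;      // did the job consume its entire time slice?
} job_t;

// Rules 4a/4b: demote a job only if it used up its whole time slice.
void after_run(job_t *job) {
    if (job->used_full_slice && job->priority < NUM_QUEUES - 1)
        job->priority++;       // Rule 4a: move down one queue
    // Rule 4b: the job gave up the CPU early, so it keeps its priority
}

// Rule 5: after some period S, move every job back to the topmost queue.
void priority_boost(job_t jobs[], int n) {
    for (int i = 0; i < n; i++)
        jobs[i].priority = 0;
}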
Assumptions:
the process's address space must be placed contiguously in physical memory
the size of the address space is less than the size of physical memory
each address space is exactly the same size
1) C) Illustrate the TLB control-flow algorithm with an example: an array in a tiny address space. 3M
Example: Access an Array
Consider a 10-entry array of 4-byte integers laid out in a tiny 16-page address space (VPN 00 through 15) with 16-byte pages, so each page holds entries at offsets 00, 04, 08, and 12. The array spans three pages: a[0] through a[2] sit on VPN 06, a[3] through a[6] on VPN 07, and a[7] through a[9] on VPN 08. Accessing a[0] through a[9] in order therefore yields the TLB access pattern (m, h, h, m, h, h, h, m, h, h): one miss per page and seven hits, i.e., a 70% hit rate. The hit rate could be improved further with larger pages, since more of the array would then fit on a single page.
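For the control-flow part of the question, here is a minimal C sketch of what happens on each memory access (a simplified software model: the 16-byte pages match the example above, the TLB is a small fully associative array, and validity/protection checks are elided; names like tlb_lookup are assumptions of the sketch).

#include <stdbool.h>
#include <stdint.h>

#define SHIFT       4                    // 16-byte pages, as in the example
#define OFFSET_MASK 0x0Fu
#define NUM_PAGES   16
#define TLB_SIZE    4

typedef struct { bool valid; uint32_t vpn, pfn; } tlb_entry_t;

static tlb_entry_t tlb[TLB_SIZE];
static uint32_t page_table[NUM_PAGES];   // linear page table: VPN -> PFN
static int next_slot;                    // trivial FIFO slot choice for the sketch

// Model of the parallel search a fully associative TLB does in hardware.
static bool tlb_lookup(uint32_t vpn, uint32_t *pfn) {
    for (int i = 0; i < TLB_SIZE; i++)
        if (tlb[i].valid && tlb[i].vpn == vpn) { *pfn = tlb[i].pfn; return true; }
    return false;
}

static void tlb_insert(uint32_t vpn, uint32_t pfn) {
    tlb[next_slot] = (tlb_entry_t){ true, vpn, pfn };
    next_slot = (next_slot + 1) % TLB_SIZE;
}

// The control flow: extract the VPN, check the TLB; only on a miss do we
// walk the page table, fill the TLB, and then complete the translation.
uint32_t translate(uint32_t vaddr) {
    uint32_t vpn = vaddr >> SHIFT, pfn;
    if (!tlb_lookup(vpn, &pfn)) {        // TLB miss
        pfn = page_table[vpn];           // page-table walk (checks elided)
        tlb_insert(vpn, pfn);
    }
    return (pfn << SHIFT) | (vaddr & OFFSET_MASK);
}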
Priority boost
The simple idea here is to periodically boost the priority of all the jobs in the system. There are many ways to achieve this, but let's just do something simple: throw them all in the topmost queue; hence, a new rule:
Rule 5: After some time period S, move all the jobs in the system to the topmost queue
This guarantees no starvation. Two graphs are shown below. In the one on the left there is no priority boost, and thus the long-running job gets starved once the two short jobs arrive; in the one on the right there is a priority boost every 50 ms (which is likely too small a value, but used here for the example), and thus we at least guarantee that the long-running job will make some progress, getting boosted to the highest priority every 50 ms and thus getting to run periodically.
[Graphs: without priority boost (left); with a priority boost every 50 ms (right)]
Coarse-grained segmentation means segmentation with a small number of segments (e.g., code, stack, heap); it is called coarse-grained because it chops up the address space into relatively large, coarse chunks.
Fine-grained segmentation means segmentation with a large number of segments; it is more flexible and allows address spaces to consist of a large number of smaller segments.
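A minimal C sketch of the difference (the table sizes and names are assumptions of the sketch): both variants translate addresses the same way, via a base and a size per segment; what changes is only how many segments the hardware supports.

#include <stdint.h>

typedef struct { uint32_t base, size; } segment_t;

segment_t coarse_segs[3];     // coarse-grained: just code, heap, stack
segment_t fine_segs[1024];    // fine-grained: a segment table with many entries

// Translation is identical either way: bounds-check, then add the base.
uint32_t seg_translate(const segment_t segs[], uint32_t seg, uint32_t offset) {
    if (offset >= segs[seg].size)
        return (uint32_t)-1;  // segmentation violation (error value, for the sketch)
    return segs[seg].base + offset;
}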
Fully associative: any given translation can be anywhere in the TLB, and the hardware will search the entire TLB in parallel to find the desired translation.
In a page table, when a PTE is marked invalid, it means that the page has not been allocated by the process.
A TLB valid bit, by contrast, refers to whether a TLB entry has a valid translation within it (useful for context switching).
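A small self-contained C sketch of the context-switch use (the tlb array layout here is an assumption of the sketch): flushing the TLB is just clearing every valid bit, so the next process cannot accidentally use the previous process's translations.

#include <stdbool.h>
#include <stdint.h>

#define TLB_SIZE 4
typedef struct { bool valid; uint32_t vpn, pfn; } tlb_entry_t;
static tlb_entry_t tlb[TLB_SIZE];

// On a context switch, flush the TLB by clearing every valid bit; the next
// process then misses, and the TLB refills with that process's translations.
void tlb_flush(void) {
    for (int i = 0; i < TLB_SIZE; i++)
        tlb[i].valid = false;
}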
3) A) Imagine two processes, A and B; A has 75 tickets while B has only 25. Thus, what we would like is for A to receive 75% of the CPU and B the remaining 25%. Explain the basic concept behind "tickets represent your share". 4M
Lottery scheduling achieves this probabilistically (but not deterministically) by holding a lottery every so often (say, every time slice). Holding a lottery is straightforward: the scheduler must know how many total tickets there are (in our example, there are 100). The scheduler then picks a winning ticket, which is a number from 0 to 99. Assuming A holds tickets 0 through 74 and B holds 75 through 99, the winning ticket simply determines whether A or B runs. The scheduler then loads the state of that winning process and runs it. Here is an example output of a lottery scheduler's winning tickets:
63 85 70 39 76 17 29 41 36 39 10 99 68 83 63 62 43  0 49 49
 A  B  A  A  B  A  A  A  A  A  A  B  A  B  A  A  A  A  A  A
As you can see from the example, the use of randomness in lottery scheduling leads to a probabilistic correctness
in meeting the desired proportion, but no guarantee. In our example above, B only gets to run 4 out of 20 time
slices (20%), instead of the desired 25% allocation. However, the longer these two jobs compete, the more likely
they are to achieve the desired percentages.
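A minimal C sketch of the winner-picking step (modeled on the textbook's example; the node_t layout is an assumption): draw a random ticket, then walk the job list accumulating ticket counts until the winner's range is reached.

#include <stdlib.h>

typedef struct node {
    int tickets;               // this job's share of the total tickets
    struct node *next;
} node_t;

// Pick the winning job: a random ticket in [0, totaltickets), then a walk
// through the job list until the running ticket count passes the winner.
node_t *lottery(node_t *head, int totaltickets) {
    int winner = rand() % totaltickets;
    int counter = 0;
    for (node_t *cur = head; cur != NULL; cur = cur->next) {
        counter += cur->tickets;
        if (counter > winner)
            return cur;        // found the winner: schedule this job
    }
    return NULL;               // unreachable if tickets sum to totaltickets
}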
3) B) Apply splitting and coalescing with an example. 3M
A process, when run on a particular CPU, builds up a fair bit of state in the caches (and TLBs) of the CPU. The
next time the process runs, it is often advantageous to run it on the same CPU, as it will run faster if some of its
state is already present in the caches on that CPU. If, instead, one runs a process on a different CPU each time,
the performance of the process will be worse, as it will have to reload the state each time it runs (note it will run
correctly on a different CPU thanks to the cache coherence protocols of the hardware). Thus, a multiprocessor
scheduler should consider cache affinity when making its scheduling decisions, perhaps preferring to keep a
process on the same CPU if at all possible.
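As a sketch of how a scheduler might account for this (all names here are hypothetical, not any real scheduler's API): prefer the CPU the process last ran on when it is free, and fall back to any idle CPU otherwise.

#include <stdbool.h>

typedef struct { int last_cpu; } task_t;

// Pick a CPU for a task: prefer the cache-warm CPU it last ran on.
int pick_cpu(const task_t *t, const bool cpu_idle[], int ncpu) {
    if (t->last_cpu >= 0 && cpu_idle[t->last_cpu])
        return t->last_cpu;        // affinity: caches/TLB likely still warm
    for (int c = 0; c < ncpu; c++)
        if (cpu_idle[c])
            return c;              // otherwise any idle CPU
    return t->last_cpu;            // nothing idle: stay with the old CPU
}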
5) B) Write down where page tables are stored. List out what's actually in the page table. 3M
The page table is just a data structure that is used to map virtual addresses (or really, virtual page numbers) to
physical addresses (physical frame numbers). Thus, any data structure could work. The simplest form is called a
linear page table, which is just an array. The OS indexes the array by the virtual page number (VPN), and looks
up the page-table entry (PTE) at that index in order to find the desired physical frame number (PFN).
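A minimal C sketch of a linear page table lookup (assumed parameters: 32-bit addresses, 4 KB pages, a 20-bit PFN; the PTE layout is an illustration, not any particular architecture's):

#include <stdint.h>

#define PAGE_SHIFT  12        // 4 KB pages
#define OFFSET_MASK 0xFFFu

typedef struct {
    uint32_t valid : 1;       // has this page been allocated by the process?
    uint32_t pfn   : 20;      // physical frame number
} pte_t;

// Linear page table: an array indexed by VPN. Returns the physical address,
// or sets *fault when the PTE is invalid (the page was never allocated).
uint32_t lookup(const pte_t page_table[], uint32_t vaddr, int *fault) {
    uint32_t vpn = vaddr >> PAGE_SHIFT;
    pte_t pte = page_table[vpn];
    if (!pte.valid) {
        *fault = 1;
        return 0;
    }
    *fault = 0;
    return ((uint32_t)pte.pfn << PAGE_SHIFT) | (vaddr & OFFSET_MASK);
}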
5) C) Analyze how a hybrid approach can be used for paging and segments. 3M
Random simply picks a random page to replace under memory pressure. Random has properties similar to FIFO: it is simple to implement, but it makes no attempt to pick the best page to evict, so how well it does depends entirely on luck.
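A sketch of how simple the policy is (frame numbering 0..num_frames-1 is assumed):

#include <stdlib.h>

// Random replacement: pick a victim frame uniformly at random; the policy
// keeps no history, so the quality of its choices is pure luck.
int pick_victim(int num_frames) {
    return rand() % num_frames;
}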