SEM 3 BC0042 Operating Systems


BCA 3rd Semester BC0042 01 OPERATING SYSTEMS

1. What is a microkernel? What are the benefits of a microkernel?


Micro-kernels: We have already seen that as UNIX expanded, the kernel became large and difficult to manage. In the mid-1980s, researchers at Carnegie Mellon University developed an operating system called Mach that modularized the kernel using the microkernel approach. This method structures the operating system by removing all nonessential components from the kernel and implementing them as system- and user-level programs. The result is a smaller kernel. There is little consensus regarding which services should remain in the kernel and which should be implemented in user space. Typically, however, microkernels provide minimal process and memory management, in addition to a communication facility.

[Fig. 2.3: Microkernel Architecture. Client processes, device drivers, the file server, and virtual memory run in user space on top of the microkernel, which in turn runs on the hardware.]

The main function of the microkernel is to provide a communication facility between the client program and the various services that are also running in user space. Communication is provided by message passing. The client program and a service never interact directly; rather, they communicate indirectly by exchanging messages with the microkernel. One benefit of the microkernel approach is the ease of extending the operating system. All new services are added to user space and consequently do not require modification of the kernel. When the kernel does have to be modified, the changes tend to be fewer, because the microkernel is a smaller kernel. The resulting operating system is easier to port from one hardware design to another. The microkernel also provides more security and reliability, since most services run as user rather than kernel processes; if a service fails, the rest of the operating system remains untouched. Several contemporary operating systems have used the microkernel approach. Tru64 UNIX (formerly Digital UNIX) provides a UNIX interface to the user, but it is implemented with a Mach kernel. The Mach kernel maps UNIX system calls into messages to the appropriate user-level services. The following figure shows the UNIX operating system architecture: at the center is the hardware, covered by the kernel; above that are the UNIX utilities and the command interface, such as the shell (sh).
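To make the message-passing idea concrete, here is a minimal, self-contained C sketch. The names (msg_t, mk_send, mk_receive, the port numbers) are invented for illustration and do not correspond to the real Mach API; the static mailboxes stand in for kernel message queues. The point is only that the client and the file server never call each other directly: every request and reply goes through the kernel's send/receive primitives.

    #include <stdio.h>

    enum { FILE_SERVER_PORT = 1, CLIENT_PORT = 2, NPORTS = 8 };

    typedef struct {
        int  dest_port;       /* port of the target service          */
        int  reply_port;      /* where the answer should be sent     */
        char payload[64];     /* request or reply data               */
    } msg_t;

    /* Toy single-slot "mailboxes" standing in for kernel message queues. */
    static msg_t mailbox[NPORTS];
    static int   full[NPORTS];

    static void mk_send(const msg_t *m) {        /* kernel routes by port   */
        mailbox[m->dest_port] = *m;
        full[m->dest_port] = 1;
    }

    static int mk_receive(int port, msg_t *m) {  /* non-blocking toy version */
        if (!full[port]) return -1;
        *m = mailbox[port];
        full[port] = 0;
        return 0;
    }

    int main(void) {
        /* Client -> kernel -> file server: no direct call between the two. */
        msg_t req = { FILE_SERVER_PORT, CLIENT_PORT, "open /etc/passwd" };
        mk_send(&req);

        msg_t in, reply;
        if (mk_receive(FILE_SERVER_PORT, &in) == 0) {   /* file server side */
            reply = (msg_t){ in.reply_port, FILE_SERVER_PORT, "ok: fd 3" };
            mk_send(&reply);
        }
        if (mk_receive(CLIENT_PORT, &reply) == 0)       /* client side      */
            printf("client got: %s\n", reply.payload);
        return 0;
    }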

2. Explain the seven-state process model used for an OS with a necessary diagram. Differentiate between processes and threads.

Seven-State Process Model: The following figure (Fig. 3.2) shows the seven-state process model, which uses the swapping technique described above.

Apart from the transitions we have seen in the five-state model, the following new transitions occur in the seven-state model:

Blocked to Blocked/Suspend: If there are no ready processes in the main memory, at least one blocked process is swapped out to make room for another process that is not blocked.

Blocked/Suspend to Blocked: If a process terminates, making space in the main memory, and there is a high-priority process that is blocked but suspended and anticipated to become unblocked very soon, that process is brought into the main memory.

Blocked/Suspend to Ready/Suspend: A process is moved from Blocked/Suspend to Ready/Suspend if the event on which the process was waiting occurs while there is still no space in the main memory.

Ready/Suspend to Ready: If there are no ready processes in the main memory, the operating system has to bring one into main memory to continue execution. Sometimes this transition takes place even when there are ready processes in main memory, if they have lower priority than one of the processes in the Ready/Suspend state; the high-priority process is then brought into the main memory.

Ready to Ready/Suspend: Normally the blocked processes are suspended by the operating system, but sometimes, to free a large block of memory, a ready process may be suspended. In this case the low-priority processes are normally suspended.

New to Ready/Suspend: When a new process is created, it should be added to the Ready state. But sometimes sufficient memory may not be available to allocate to the newly created process. In this case, the new process is shifted to Ready/Suspend.

Processes vs. Threads: As mentioned earlier, in many respects threads operate in the same way as processes. Some of the differences are:

- Unlike processes, threads are not independent of one another.
- Unlike processes, all threads can access every address in the task.
- Unlike processes, threads are designed to assist one another. (Processes might or might not assist one another, because processes may originate from different users.)
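A short POSIX C example can make the shared-address-space difference concrete. Assuming a system with fork() and pthreads (typically compiled with -lpthread), the child process increments its own copy of the global counter, which the parent never sees, while the thread increments the parent's counter directly:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>
    #include <pthread.h>

    static int counter = 0;       /* one global variable */

    static void *thread_body(void *arg) {
        (void)arg;
        counter++;                /* threads share the task's address space */
        return NULL;
    }

    int main(void) {
        /* Process: fork() copies the address space, so the child's
           increment is invisible to the parent. */
        pid_t pid = fork();
        if (pid == 0) { counter++; exit(0); }
        waitpid(pid, NULL, 0);
        printf("after child process: counter = %d\n", counter);  /* still 0 */

        /* Thread: runs in the same address space, so the change is seen. */
        pthread_t t;
        pthread_create(&t, NULL, thread_body, NULL);
        pthread_join(t, NULL);
        printf("after thread:        counter = %d\n", counter);  /* now 1 */
        return 0;
    }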

3. What are the jobs of the CPU scheduler? Explain any two scheduling algorithms.

CPU Scheduler: Whenever the CPU becomes idle, it is the job of the CPU scheduler (a.k.a. the short-term scheduler) to select another process from the ready queue to run next. The ready queue is not necessarily a FIFO queue; there are several alternatives for its storage structure and for the algorithm used to select the next process, as well as numerous adjustable parameters for each algorithm, which is the basic subject of this entire unit.

Preemptive Scheduling: CPU scheduling decisions take place under one of four conditions:

1. When a process switches from the running state to the waiting state, such as for an I/O request or invocation of the wait() system call.
2. When a process switches from the running state to the ready state, for example in response to an interrupt.
3. When a process switches from the waiting state to the ready state, say at completion of I/O or a return from wait().
4. When a process terminates.

For conditions 1 and 4 there is no choice: a new process must be selected. For conditions 2 and 3 there is a choice: either continue running the current process, or select a different one. If scheduling takes place only under conditions 1 and 4, the system is said to be non-preemptive, or cooperative. Under these conditions, once a process starts running it keeps running until it either voluntarily blocks or finishes. Otherwise the system is said to be preemptive. Windows used non-preemptive scheduling up to Windows 3.x, and started using preemptive scheduling with Windows 95. Macs used non-preemptive scheduling prior to OS X, and preemptive scheduling since then. Note that preemptive scheduling is only possible on hardware that supports a timer interrupt.

It is to be noted that preemptive scheduling can cause problems when two processes share data, because one process may get interrupted in the middle of updating shared data structures. Preemption can also be a problem if the kernel is busy servicing a system call (e.g. updating critical kernel data structures) when the preemption occurs. Most modern UNIXes deal with this problem by making the process wait until the system call has either completed or blocked before allowing the preemption. Unfortunately this solution is problematic for real-time systems, as real-time response can no longer be guaranteed. Some critical sections of code protect themselves from concurrency problems by disabling interrupts before entering the critical section and re-enabling interrupts on exiting the section. Needless to say, this should only be done in rare situations, and only on very short pieces of code that will finish quickly (usually just a few machine instructions).

Dispatcher: The dispatcher is the module that gives control of the CPU to the process selected by the scheduler. This function involves:

- Switching context.
- Switching to user mode.
- Jumping to the proper location in the newly loaded program.

The dispatcher needs to be as fast as possible, as it is run on every context switch. The time consumed by the dispatcher is known as dispatch latency.

Scheduling Algorithms: The following subsections explain several common scheduling strategies, looking at only a single CPU burst each for a small number of processes. Obviously real systems have to deal with many more simultaneous processes executing their CPU-I/O burst cycles.

First-Come First-Served Scheduling, FCFS: FCFS is very simple: just a FIFO queue, like customers waiting in line at the bank or the post office or at a copying machine. Unfortunately, however, FCFS can yield some very long average wait times, particularly if the first process to get there takes a long time. For example, consider the following three processes (the burst times are reconstructed from the averages computed below):

Process   Burst Time (ms)
P1        24
P2        3
P3        3

In the first Gantt chart below, process P1 arrives first. The average waiting time for the three processes is (0 + 24 + 27) / 3 = 17.0 ms. In the second Gantt chart below, the same three processes have an average wait time of (0 + 3 + 6) / 3 = 3.0 ms. The total run time for the three bursts is the same, but in the second case two of the three finish much more quickly, and the other process is only delayed by a short amount.
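The arithmetic above is easy to check mechanically. The following small C sketch, with the burst times hard-coded, computes the FCFS waiting times for the arrival order P1, P2, P3; reordering the array to {3, 3, 24} reproduces the 3.0 ms average of the second chart:

    #include <stdio.h>

    int main(void) {
        /* Burst times for P1, P2, P3, all arriving at time 0. */
        int burst[] = { 24, 3, 3 };
        int n = 3, wait = 0, total = 0;
        for (int i = 0; i < n; i++) {
            total += wait;             /* each process waits for all earlier bursts */
            printf("P%d waits %2d ms\n", i + 1, wait);
            wait += burst[i];
        }
        printf("average wait = %.1f ms\n", (double)total / n);   /* 17.0 ms */
        return 0;
    }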

FCFS can also block the system in a busy dynamic system in another way, known as the convoy effect. When one CPU-intensive process blocks the CPU, a number of I/O-intensive processes can get backed up behind it, leaving the I/O devices idle. When the CPU hog finally relinquishes the CPU, the I/O processes pass through the CPU quickly, leaving the CPU idle while everyone queues up for I/O, and then the cycle repeats itself when the CPU-intensive process gets back to the ready queue.

Shortest-Job-First Scheduling, SJF: The idea behind the SJF algorithm is to pick the quickest little job that needs to be done, get it out of the way first, and then pick the next smallest job to do next. (Technically this algorithm picks a process based on the next shortest CPU burst, not the overall process time.) For example, the Gantt chart below is based upon the following CPU burst times, under the assumption that all jobs arrive at the same time (the values are reconstructed from the averages computed below):

Process   Burst Time (ms)
P1        6
P2        8
P3        7
P4        3

In the case above the average wait time is (0 + 3 + 9 + 16) / 4 = 7.0 ms, as opposed to 10.25 ms for FCFS for the same processes. SJF can be proven to be the fastest scheduling algorithm, but it suffers from one important problem: how do you know how long the next CPU burst is going to be? For long-term batch jobs this can be done based upon the limits that users set for their jobs when they submit them, which encourages them to set low limits, but risks their having to re-submit the job if they set the limit too low. However, that does not work for short-term CPU scheduling on an interactive system. Another option would be to statistically measure the run-time characteristics of jobs, particularly if the same tasks are run repeatedly and predictably. But once again that really isn't a viable option for short-term CPU scheduling in the real world. A more practical approach is to predict the length of the next burst, based on some historical measurement of recent burst times for this process. One simple, fast, and relatively accurate method is the exponential average, which can be defined as follows:

    estimate[i+1] = alpha * burst[i] + (1.0 - alpha) * estimate[i]

In this scheme the previous estimate contains the history of all previous times, and alpha serves as a weighting factor for the relative importance of recent data versus past history. If alpha is 1.0, then past history is ignored, and we assume the next burst will be the same length as the last burst. If alpha is 0.0, then all measured burst times are ignored, and we just assume a constant burst time. Most commonly alpha is set at 0.5, as illustrated in Figure 5.3 (Fig. 5.3: Prediction of the length of the next CPU burst).
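The exponential average is a one-line recurrence, so it is easy to demonstrate in C. The burst sequence and the initial estimate of 10 ms below are sample values chosen to mirror the kind of trace shown in Figure 5.3; they are illustrative, not part of the algorithm:

    #include <stdio.h>

    /* estimate[i+1] = alpha * burst[i] + (1.0 - alpha) * estimate[i] */
    int main(void) {
        double alpha    = 0.5;    /* weight of the most recent burst          */
        double estimate = 10.0;   /* initial guess for the first burst length */
        double burst[]  = { 6, 4, 6, 4, 13, 13, 13 };   /* observed bursts    */
        int n = sizeof burst / sizeof burst[0];
        for (int i = 0; i < n; i++) {
            estimate = alpha * burst[i] + (1.0 - alpha) * estimate;
            printf("burst %2.0f ms -> next estimate %5.2f ms\n", burst[i], estimate);
        }
        return 0;
    }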

SJF can be either preemptive or non-preemptive. Preemption occurs when a new process arrives in the ready queue with a predicted burst time shorter than the time remaining in the process whose burst is currently on the CPU. Preemptive SJF is sometimes referred to as shortest-remaining-time-first scheduling. For example, the following Gantt chart is based upon the following data (arrival and burst times are reconstructed from the wait-time computation below):

Process   Arrival Time (ms)   Burst Time (ms)
P1        0                   8
P2        1                   4
P3        2                   9
P4        3                   5

The average wait time in this case is ((10 - 1) + (1 - 1) + (17 - 2) + (5 - 3)) / 4 = 26 / 4 = 6.5 ms, as opposed to 7.75 ms for non-preemptive SJF or 8.75 ms for FCFS. (Each term is the total time a process spends in the ready queue; for example, P1 arrives at time 0, runs from 0 to 1, is preempted, and then waits from time 1 until time 10, i.e. 9 ms.)

4. Explain the algorithm of Peterson's method for mutual exclusion.

Mutual exclusion by Peterson's method: The algorithm uses two variables: flag, a Boolean array, and turn, an integer. A true flag value indicates that the process wants to enter the critical section. The variable turn holds the id of the process whose turn it is. Entrance to the critical section is granted for process P0 if P1 does not want to enter its critical section, or if P1 has given priority to P0 by setting turn to 0.

    flag[0] = false;
    flag[1] = false;
    turn = 0;

    /* Process 0 */
    while (true) {
        flag[0] = true;
        turn = 1;
        while (flag[1] && turn == 1)
            /* no operation */;
        /* critical section */;
        flag[0] = false;
        /* remainder */;
    }

    /* Process 1 */
    while (true) {
        flag[1] = true;
        turn = 0;
        while (flag[0] && turn == 0)
            /* no operation */;
        /* critical section */;
        flag[1] = false;
        /* remainder */;
    }
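For completeness, here is a runnable C sketch of the same algorithm using two POSIX threads. One caveat worth making explicit: the textbook form above assumes that reads and writes happen in program order, which modern compilers and CPUs do not guarantee for plain variables, so this version uses C11 atomics (whose default ordering is sequentially consistent) to keep the algorithm sound. Compile with something like cc peterson.c -lpthread.

    #include <stdio.h>
    #include <stdbool.h>
    #include <pthread.h>
    #include <stdatomic.h>

    static atomic_bool flag[2];   /* flag[i]: process i wants to enter    */
    static atomic_int  turn;      /* which process must defer             */
    static int shared = 0;        /* protected by Peterson's algorithm    */

    static void *worker(void *arg) {
        int me = *(int *)arg, other = 1 - me;
        for (int i = 0; i < 100000; i++) {
            atomic_store(&flag[me], true);   /* entry protocol            */
            atomic_store(&turn, other);
            while (atomic_load(&flag[other]) && atomic_load(&turn) == other)
                ;                            /* busy-wait                 */
            shared++;                        /* critical section          */
            atomic_store(&flag[me], false);  /* exit protocol             */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t0, t1;
        int id0 = 0, id1 = 1;
        pthread_create(&t0, NULL, worker, &id0);
        pthread_create(&t1, NULL, worker, &id1);
        pthread_join(t0, NULL);
        pthread_join(t1, NULL);
        printf("shared = %d (expected 200000)\n", shared);
        return 0;
    }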

5. Explain how the block size affects the I/O operation of reading a file.

Figure 1 shows the general I/O structure associated with many medium-scale processors. Note that the I/O controllers and main memory are connected to the main system bus. The cache memory (usually found on-chip with the CPU) has a direct connection to the processor, as well as to the system bus.

Note that the I/O devices shown here are not connected directly to the system bus; they interface with another device called an I/O controller. In simpler systems, the CPU may also serve as the I/O controller, but in systems where throughput and performance are important, I/O operations are generally handled outside the processor. Until relatively recently, the I/O performance of a system was somewhat of an afterthought for systems designers. The reduced cost of high-performance disks, permitting the proliferation of virtual memory systems, and the dramatic reduction in the cost of high-quality video display devices have meant that designers must pay much more attention to this aspect to ensure adequate performance in the overall system. Because of the different speeds and data requirements of I/O devices, different I/O strategies may be useful, depending on the type of I/O device which is connected to the computer. Because the I/O devices are not synchronized with the CPU, some information must be exchanged between the CPU and the device to ensure that the data is received reliably. This interaction between the CPU and an I/O device is usually referred to as handshaking. For a complete handshake, four events are important:

1. The device providing the data (the talker) must indicate that valid data is now available.
2. The device accepting the data (the listener) must indicate that it has accepted the data. This signal informs the talker that it need not maintain this data word on the data bus any longer.
3. The talker indicates that the data on the bus is no longer valid, and removes the data from the bus. The talker may then set up new data on the data bus.
4. The listener indicates that it is not now accepting any data on the data bus. The listener may use data previously accepted during this time, while it is waiting for more data to become valid on the bus.

Note that each of the talker and listener supplies two signals. The talker supplies a signal (say, data valid, or DAV) at step (1). It supplies another signal (say, data not valid) at step (3). Both these signals can be coded as a single binary value (DAV) which takes the value 1 at step (1) and 0 at step (3). The listener supplies a signal (say, data accepted, or DAC) at step (2). It supplies a signal (say, data not now accepted) at step (4). It, too, can be coded as a single binary variable, DAC. Because only two binary variables are required, the handshaking information can be communicated over two wires, and the form of handshaking described above is called a two-wire handshake. Other forms of handshaking are used in more complex situations; for example, where there may be more than one controller on the bus, or where the communication is among several devices. Figure 2 shows a timing diagram for the signals DAV and DAC which identifies the timing of the four events described previously.
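The four steps can be sketched as two shared flags polled in turn. The toy C program below runs the talker and listener steps in a fixed order inside a single loop, so it models only the logical sequence of the two-wire handshake, not real bus timing or concurrency:

    #include <stdio.h>

    static int DAV = 0, DAC = 0, bus = 0;   /* two control wires + data bus */

    int main(void) {
        for (int word = 1; word <= 3; word++) {
            bus = word;                      /* talker puts data on the bus */
            DAV = 1;                         /* (1) data valid              */
            if (DAV) {                       /* listener sees valid data    */
                printf("listener accepted %d\n", bus);
                DAC = 1;                     /* (2) data accepted           */
            }
            if (DAC) { DAV = 0; bus = 0; }   /* (3) talker removes data     */
            if (!DAV) DAC = 0;               /* (4) listener ready again    */
        }
        return 0;
    }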

Either the CPU or the I/O device can act as the talker or the listener. In fact, the CPU may act as a talker at one time and a listener at another. For example, when communicating with a terminal screen (an output device) the CPU acts as a talker, but when communicating with a terminal keyboard (an input device) the CPU acts as a listener.

6. Explain programmed I/O and interrupt-driven I/O. How do they differ?

Programmed (program-controlled) I/O has two drawbacks: the processor is tied up polling the device in a busy-wait loop, and it is difficult to service several devices or do useful work while waiting. Interrupt-controlled I/O reduces the severity of these two problems by allowing the I/O device itself to initiate the device service routine in the processor. This is accomplished by having the I/O device generate an interrupt signal which is tested directly by the hardware of the CPU. When the interrupt input to the CPU is found to be active, the CPU itself initiates a subprogram call to somewhere in the memory of the processor; the particular address to which the processor branches on an interrupt depends on the interrupt facilities available in the processor.

The simplest type of interrupt facility is where the processor executes a subprogram branch to some specific address whenever an interrupt input is detected by the CPU. The return address (the location of the next instruction in the program that was interrupted) is saved by the processor as part of the interrupt process. If there are several devices which are capable of interrupting the processor, then with this simple interrupt scheme the interrupt handling routine must examine each device to determine which one caused the interrupt. Also, since only one interrupt can be handled at a time, there is usually a hardware priority encoder which allows the device with the highest priority to interrupt the processor, if several devices attempt to interrupt the processor simultaneously. In Figure 3, the handshake outputs would be connected to a priority encoder to implement this type of I/O; the other connections remain the same. (Some systems use a daisy chain priority system to determine which of the interrupting devices is serviced first. Daisy chain priority resolution is discussed later.)

In most modern processors, interrupt return points are saved on a stack in memory, in the same way as return addresses for subprogram calls are saved. In fact, an interrupt can often be thought of as a subprogram which is invoked by an external device. If a stack is used to save the return address for interrupts, it is then possible to allow one interrupt to interrupt the handling routine of another interrupt. In modern computer systems, there are often several priority levels of interrupts, each of which can be disabled, or masked. There is usually one type of interrupt input which cannot be disabled (a non-maskable interrupt) which has priority over all other interrupts. This interrupt input is used for warning the processor of potentially catastrophic events such as an imminent power failure, to allow the processor to shut down in an orderly way and to save as much information as possible.

Most modern computers make use of vectored interrupts. With vectored interrupts, it is the responsibility of the interrupting device to provide the address in main memory of the interrupt servicing routine for that device. This means, of course, that the I/O device itself must have sufficient intelligence to provide this address when requested by the CPU, and also to be initially programmed with this address information by the processor. Although somewhat more complex than the simple interrupt system described earlier, vectored interrupts provide such a significant advantage in interrupt handling speed and ease of implementation (i.e., a separate routine for each device) that this method is almost universally used on modern computer systems. Some processors have a number of special inputs for vectored interrupts (each acting much like the simple interrupt described earlier). Others require that the interrupting device itself provide the interrupt address as part of the process of interrupting the processor.
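Vectored dispatch is essentially a jump through a table of handler addresses indexed by the vector number the device supplies. The C sketch below is illustrative only: the vector numbers, handler names, and table layout are invented, not those of any particular processor:

    #include <stdio.h>

    #define NVECTORS 8
    typedef void (*isr_t)(void);            /* an interrupt service routine */
    static isr_t vector_table[NVECTORS];    /* one entry per device vector  */

    static void disk_isr(void)     { printf("disk interrupt serviced\n"); }
    static void keyboard_isr(void) { printf("keyboard interrupt serviced\n"); }

    /* The CPU's dispatch step: 'vec' is the vector number the device
       supplied when its interrupt was acknowledged. */
    static void dispatch_interrupt(int vec) {
        if (vec >= 0 && vec < NVECTORS && vector_table[vec])
            vector_table[vec]();            /* separate routine per device  */
    }

    int main(void) {
        vector_table[3] = disk_isr;         /* handlers installed at boot   */
        vector_table[5] = keyboard_isr;
        dispatch_interrupt(5);              /* keyboard raises vector 5     */
        dispatch_interrupt(3);              /* disk raises vector 3         */
        return 0;
    }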
