
November 2012
Bachelor of Computer Application (BCA), Semester 3
BC0042 Operating Systems, 4 Credits

(Book ID: B0682) Total Marks: 60
Answer all questions. Each question carries equal marks. (6 x 10)

1. What are the services provided by Operating Systems? Explain briefly.

Operating system services fall into two broad groups:

Base services - These are part of the standard OS.

Extended services - These are add-on modular software components that are layered on top of the base services.

Common types of services:

1. Program execution - This is the first and foremost service provided by the operating system. The system must load user programs into main memory, execute them, and terminate them successfully, displaying a proper error message for programs that cannot complete.

2. Input/output operations - During normal execution a program may need to read input data or produce processed data as output. It is the job of the operating system to obtain data from an input device or data file and send output to a data file or output device.

3. File management - Users may need to create files on secondary storage to record their work permanently for future use, or delete files that already exist. The operating system must provide a mechanism that carefully handles all such file-manipulation operations on the respective devices.

4. Communication - Several programs in a computing system may need to share information among themselves, whether they run on a single machine or on several machines. This is known as "inter-process communication" (IPC). To maintain data security and integrity, the operating system must mediate inter-process communication.

5. Error detection - If any resource or device in the computing system encounters an error, the operating system should handle it promptly. Invalid or improper use of devices by programs must be reported. The most common types of errors are memory-allocation errors, divide-by-zero errors, and power failures.

6. Protection - The operating system should ensure that important files and data are accessed only by authenticated, authorized users.

7. Accounting - When a computing system has multiple users, the operating system may track the duration and number of resources used by each user for accounting purposes.

8. Resource sharing and allocation - When many users or many programs compete for limited resources, the optimal allocation and deallocation of those resources is the operating system's responsibility, to ensure increased throughput.
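As a rough illustration, several of these services (file management, I/O operations, error detection) can be exercised directly through system calls. The sketch below is Python on a POSIX-like system; the file name is arbitrary:

```python
import os
import tempfile

# File management service: ask the OS to create, write, read and delete a file.
path = os.path.join(tempfile.gettempdir(), "os_services_demo.txt")

fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_TRUNC)  # create/open
os.write(fd, b"hello from the OS file service")            # I/O service: write
os.close(fd)

fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 100)                                    # I/O service: read
os.close(fd)
print(data.decode())

# Error detection service: improper use of a resource is reported by the OS.
try:
    os.open("/no/such/dir/file.txt", os.O_RDONLY)
except OSError as err:
    print("OS reported error number:", err.errno)

os.unlink(path)                                            # file deletion
```

Each `os.open`/`os.read`/`os.write` call crosses into the kernel, which performs the requested service on the program's behalf and reports failures as errors rather than letting the program misuse the device.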
2. What is a microkernel? What are the benefits of a microkernel?

Microkernels were developed in the 1980s as a response to changes in the computer world, and to several challenges in adapting existing "mono-kernels" to these new systems. New device drivers, protocol stacks, file systems and other low-level systems were being developed all the time. This code was normally located in the monolithic kernel, and thus required considerable work and careful code management to modify. Microkernels were developed with the idea that all of these services would be implemented as user-space programs, like any other, allowing them to be developed, started and stopped like any other program. This would not only allow these services to be worked on more easily, but also separate the kernel code so it could be finely tuned without worrying about unintended side effects. Moreover, it would allow entirely new operating systems to be "built up" on a common core, aiding OS research.

Advantages:

Maintenance is generally easier: patches can be tested in a separate instance and then swapped in to take over a production instance. Development time is faster: new software can be tested without having to reboot the kernel. The system is more resilient in general: if one service instance goes haywire, it is often possible to substitute it with an operational mirror.
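The microkernel idea of services running as ordinary user-space programs that communicate by message passing can be sketched in miniature. The toy "echo server" below is purely illustrative and models no real microkernel's API: the service runs as a separate process, is started and stopped like any other program, and is reached only through messages:

```python
from multiprocessing import Process, Pipe

def echo_server(conn):
    # The "service" is an ordinary user-space program: it just answers messages.
    while True:
        msg = conn.recv()
        if msg == "shutdown":        # stopping the service is plain process control
            conn.close()
            return
        conn.send("echo: " + msg)

def main():
    parent, child = Pipe()
    server = Process(target=echo_server, args=(child,))
    server.start()                   # start the service without any reboot
    parent.send("hello")
    reply = parent.recv()            # all interaction is via message passing
    parent.send("shutdown")
    server.join()                    # swapping a service out is just process control
    return reply

if __name__ == "__main__":
    print(main())
```

In a real microkernel the kernel itself would provide only the message-passing primitive; here Python's `Pipe` stands in for that IPC mechanism.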

3. Draw the diagram of the Unix kernel components and explain each component briefly.

In computing, the kernel is the main component of most computer operating systems; it is a bridge between applications and the actual data processing done at the hardware level. The kernel's responsibilities include managing the system's resources (the communication between hardware and software components).[1] Usually, as a basic component of an operating system, a kernel can provide the lowest-level abstraction layer for the resources (especially processors and I/O devices) that application software must control to perform its function. It typically makes these facilities available to application processes through inter-process communication mechanisms and system calls. Operating system tasks are done differently by different kernels, depending on their design and implementation. While monolithic kernels execute all the operating system code in the same address space to increase the performance of the system, microkernels run most of the operating system services in user space as servers, aiming to improve maintainability and modularity of the operating system.[2] A range of possibilities exists between these two extremes.
The kernel's primary function is to manage the computer's resources and allow other programs to run and use these resources.[1] Typically, the resources consist of:

The Central Processing Unit (CPU). This is the most central part of a computer system, responsible for running or executing programs. The kernel takes responsibility for deciding at any time which of the many running programs should be allocated to the processor or processors (each of which can usually run only one program at a time).

The computer's memory. Memory is used to store both program instructions and data; typically, both need to be present in memory for a program to execute. Often multiple programs will want access to memory, frequently demanding more memory than the computer has available. The kernel is responsible for deciding which memory each process can use, and for determining what to do when not enough is available.

Any Input/Output (I/O) devices present in the computer, such as keyboards, mice, disk drives, USB devices, printers, displays and network adapters. The kernel routes I/O requests from applications to an appropriate device (or subsection of a device, in the case of files on a disk or windows on a display) and provides convenient methods for using the device, typically abstracted to the point where the application does not need to know the device's implementation details.

Key aspects of resource management are the definition of an execution domain (address space) and the protection mechanism used to mediate access to the resources within a domain.[1] Kernels also usually provide methods for synchronization and communication between processes, called inter-process communication (IPC). A kernel may implement these features itself, or rely on some of the processes it runs to provide the facilities to other processes, although in that case it must provide some means of IPC to allow processes to access the facilities provided by each other. Finally, a kernel must provide running programs with a method to request access to these facilities.

4. Explain the seven-state process model used in operating systems, with the necessary diagram.

Carefully reconsider the simple process implementation scheme of the previous section from the point of view of the individual programs, or tasks, being executed in each process. Do they need to know about their execution context, or about being repeatedly suspended and restarted by the OS? Of course not, as long as they can rely on the OS providing them the resources they need. The OS needs context information to perform its job of allocating resources among tasks according to a certain policy, and the application programmers who write the tasks usually don't want (and sometimes must not be allowed) to care about that policy. Moreover, as long as the OS represents faithfully and completely the state of a suspended task in its context, that task can be restarted at any time without noticing the past suspension.
So far we have considered only two possible conditions (or states) for a process, namely running and not-running (we'll use the term suspended in another well-defined sense later on): a running process has control of the machine's CPU and is executing its task; a not-running process is in the OS process list, and will get some CPU time when the OS thinks it should. Among the reasons that can cause the OS to switch a running process into the not-running state, we have already noticed the need to wait for the completion of an I/O event, and the expiration of the quantum of CPU time allotted to the process; we'll encounter other reasons further on. It's useful, at this point, to identify the possible transitions between process states. We can, for example, draw a state transition diagram with four nodes, one for each of the states identified so far, plus one for a new process and one for a terminated process, and call admission the creation of a new process (which is put into the not-running state), dispatch its transition into the running state, pause its transition from running into not-running, and exit the termination of a running process. Handling the two transitions that are not between the running/not-running states, namely admission and exit, this way makes sense, since a running process may be disposed of immediately as a consequence of an error condition, without any need to first pass it into the not-running state;

a new process must at least wait for the current process to be paused before gaining control of the CPU (here we obviously refer to a single-processor architecture). The new and terminated states are worth a bit more explanation. The former refers to a process that has just been defined (e.g. because a user issued a command at a terminal), and for which the OS has performed the necessary housekeeping chores. The latter refers to a process whose task is not running anymore, but whose context is still being saved (e.g. because a user may want to inspect it using a debugger program). A simple way to implement this process-handling model in a multiprogramming OS would be to maintain a queue (i.e. a first-in-first-out linear data structure) of processes, put the current process at the end of the queue when it must be paused, and run the first process in the queue. However, it's easy to realize that this simple two-state model does not work. Given that the number of processes an OS can manage is limited by the available resources, and that I/O events occur on a much larger time scale than CPU events, it may well be the case that the first process in the queue must still wait for an I/O event before being able to restart; even worse, it may happen that most of the processes in the queue must wait for I/O. In this condition the scheduler would just waste its time sifting through the queue in search of a runnable process. A solution is to split the not-running process class according to two possible conditions: processes blocked waiting for an I/O event to occur, and processes paused but nonetheless ready to run when given a chance. A process would then move from the running to the blocked state via an event wait transition, from the running to the ready state via a timeout transition, and from blocked to ready via an event occurred transition.
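The states and transitions described so far can be captured in a tiny state machine. The sketch below models only the five-state stage of the model (the suspended states come later); the `PCB` class and transition names are illustrative, following the text:

```python
# Legal transitions of the five-state process model: (state, transition) -> state.
TRANSITIONS = {
    ("new", "admit"): "ready",                 # admission of a new process
    ("ready", "dispatch"): "running",
    ("running", "timeout"): "ready",           # quantum of CPU time expired
    ("running", "event wait"): "blocked",      # must wait for an I/O event
    ("blocked", "event occurred"): "ready",    # awaited I/O event completed
    ("running", "exit"): "terminated",
}

class PCB:
    """Toy process control block tracking only the process state."""
    def __init__(self):
        self.state = "new"

    def fire(self, transition):
        key = (self.state, transition)
        if key not in TRANSITIONS:
            raise ValueError(f"illegal transition {transition!r} from {self.state!r}")
        self.state = TRANSITIONS[key]
        return self.state

p = PCB()
p.fire("admit")           # new -> ready
p.fire("dispatch")        # ready -> running
p.fire("event wait")      # running -> blocked
p.fire("event occurred")  # blocked -> ready
p.fire("dispatch")        # ready -> running
print(p.fire("exit"))     # running -> terminated
```

Note that illegal moves, such as dispatching a process that was never admitted, are rejected, which mirrors how the state diagram only permits the drawn arcs.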

Modern architectures rely on a hierarchical organization of the available memory, only part of which exists in silicon on the machine board, while the rest is paged to a high-speed storage peripheral like a hard disk. We will address this topic in detail later on; for now it will suffice to say that while the processor has the capability to address memory well above the amount of available RAM, it physically accesses the RAM only, and dedicated hardware is used to copy (or page) from disk into RAM those pieces of memory the processor refers to that are not in RAM, and to copy areas of RAM back to disk when RAM space needs to be made available. In particular, the act of copying entire processes back and forth between RAM and disk is called swapping. As we'll see further on, swapping is often mandated by efficiency considerations even when paged memory is available, since the performance of a paged memory system can fall abruptly if too many active processes are maintained in main memory. The OS could then perform a suspend transition on blocked processes, swapping them to disk and marking their state as suspended (after all, if they must wait for I/O, they might as well do so out of costly RAM). The solution is to carefully reconsider the reasons why processes are blocked and swapped, and recognize that if a process is blocked because it waits for I/O, and is then suspended, the I/O event might occur while it sits swapped on the disk. We can thus classify suspended processes into two classes: ready-suspended for those suspended processes whose restarting condition has occurred, and blocked-suspended for those that must still wait. This classification allows the OS to pick from the pool of ready-suspended processes when it wants to replenish the queue in main memory. Provisions must be made for passing processes between the new states. In our model this means allowing for new transitions: activate and suspend between ready and ready-suspended, and between blocked and blocked-suspended as well.

5. Define process and thread and differentiate between them.

Process

Each process provides the resources needed to execute a program. A process has a virtual address space, executable code, open handles to system objects, a security context, a unique process identifier, environment variables, a priority class, minimum and maximum working-set sizes, and at least one thread of execution. Each process is started with a single thread, often called the primary thread, but can create additional threads from any of its threads.

Thread

A thread is the entity within a process that can be scheduled for execution. All threads of a process share its virtual address space and system resources. In addition, each thread maintains exception handlers, a scheduling priority, thread-local storage, a unique thread identifier, and a set of structures the system will use to save the thread context until it is scheduled. The thread context includes the thread's set of machine registers, the kernel stack, a thread environment block, and a user stack in the address space of the thread's process. Threads can also have their own security context, which can be used for impersonating clients.

In short: a process is a unit of resource ownership with its own private address space, while a thread is a unit of scheduling that shares its process's address space and resources with the other threads of that process.
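The defining difference, that all threads of a process share its address space, can be seen in a few lines of Python. Both worker threads below update the same `counter` object in shared memory; a lock is needed precisely because the memory is shared:

```python
import threading

# Shared state: every thread of this process sees the same 'counter' object.
counter = [0]
lock = threading.Lock()

def worker(n):
    for _ in range(n):
        with lock:              # shared memory requires synchronization
            counter[0] += 1

threads = [threading.Thread(target=worker, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter[0])   # all four threads incremented the same shared variable
```

Had each worker been a separate process instead, each would have received its own copy of `counter` in its own address space, and the parent's copy would remain 0 without explicit IPC.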

6. What is virtual memory? What is its significance in an operating system?

Most desktop operating systems have a common part known as virtual memory. The use of virtual memory is so common because of the benefits it provides to users at low cost. Today most computers have 64 or 128 MB of RAM available to the CPU. This amount of RAM is not sufficient to run all the applications that most users expect to run, all at once. For example, if an e-mail program, a web browser and a word processor are loaded into RAM simultaneously, 64 MB is not enough to store them all. In the absence of virtual memory, a message such as "Sorry, you cannot load any more applications. Please close an application to load a new one." would be displayed. With virtual memory, the computer can look for areas of RAM that are not currently being used and copy them onto the hard disk. This process frees up RAM to load new applications. As it is done automatically, the user does not even know it is happening, and it feels as if RAM has unlimited space even though the

RAM capacity is only 32 MB. The fact that hard disk space is much cheaper than RAM chips gives virtual memory a nice economic benefit as well. The area of the hard disk that stores the RAM images is known as a page file. It holds pages of RAM, and the operating system moves data back and forth between the page file and RAM. On a machine with a Windows operating system, page files have a .SWP extension. The read/write speed of a hard disk drive is much slower than that of RAM, and hard disk technology is not geared toward accessing small pieces of data at a time. If a system relies too heavily on virtual memory, a significant performance drop will be noticed. The key is to have a sufficient amount of RAM to handle everything that tends to run simultaneously; then the only slowness of virtual memory that is felt is a slight pause when changing tasks. With sufficient RAM available, virtual memory works beautifully. Without it, the operating system must constantly swap information back and forth between RAM and the hard disk. This process is called thrashing, and it can make the computer feel incredibly slow.
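The page-fault behavior behind thrashing can be illustrated with a toy simulation. The least-recently-used (LRU) replacement policy and frame counts below are illustrative assumptions, not how any particular operating system implements paging:

```python
from collections import OrderedDict

# Toy paging simulation: a small "RAM" holds a few page frames; referencing a
# page not in RAM is a page fault, which evicts the least recently used page.
def count_faults(reference_string, ram_frames):
    ram = OrderedDict()                    # page -> None, ordered by recency
    faults = 0
    for page in reference_string:
        if page in ram:
            ram.move_to_end(page)          # page hit: mark as most recently used
        else:
            faults += 1                    # page fault: must fetch from disk
            if len(ram) >= ram_frames:
                ram.popitem(last=False)    # evict the least recently used page
            ram[page] = None
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5] * 10
print(count_faults(refs, 3))   # too little RAM: constant faulting ("thrashing")
print(count_faults(refs, 5))   # enough RAM: only the initial loads fault
```

With five frames, every page fits and only the first reference to each page faults; with three frames the same workload faults continually, which is exactly the slowdown the text calls thrashing.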
