Harshit Os File

HARSHIT(10CS032)

EXPERIMENT-1 AIM: TO STUDY OPERATING SYSTEM


DEFINITION OF OPERATING SYSTEM:
An operating system (OS) is a set of software that manages computer hardware resources and provides common services for computer programs. The operating system is a vital component of the system software in a computer system; application programs require an operating system to function. Time-sharing operating systems schedule tasks for efficient use of the system and may also include accounting for cost allocation of processor time, mass storage, printing, and other resources. For hardware functions such as input, output, and memory allocation, the operating system acts as an intermediary between programs and the computer hardware, although the application code is usually executed directly by the hardware and will frequently make a system call to an OS function or be interrupted by it. Operating systems can be found on almost any device that contains a computer, from cellular phones and video game consoles to supercomputers and web servers. Examples of popular modern operating systems include Android, BSD, iOS, Linux, Mac OS X, Microsoft Windows, Windows Phone, and IBM z/OS. All of these, except Windows and z/OS, share roots in UNIX.
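For instance, even a trivial program reaches the hardware only through a system call. The following minimal C sketch (an illustration, not taken from any particular OS manual) asks the kernel to write a string to standard output rather than driving the display itself:

    #include <string.h>     /* strlen() */
    #include <unistd.h>     /* write(): thin wrapper around the write system call */

    int main(void)
    {
        const char *msg = "Hello from user space\n";
        /* The program never touches the display hardware directly;
           the kernel performs the output on its behalf. */
        write(STDOUT_FILENO, msg, strlen(msg));
        return 0;
    }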

Real-time: A real-time operating system is a multitasking operating system that aims at executing real-time applications. Real-time operating systems often use specialized scheduling algorithms so that they can achieve a deterministic nature of behaviour. The main objective of real-time operating systems is their quick and predictable response to events. They have an event-driven or time-sharing design, and often aspects of both. An event-driven system switches between tasks based on their priorities or external events, while time-sharing operating systems switch tasks based on clock interrupts.

Multi-user: A multi-user operating system allows multiple users to access a computer system concurrently. Time-sharing systems can be classified as multi-user systems, as they enable multiple users to access a computer through the sharing of time. Single-user operating systems, as opposed to multi-user operating systems, are usable by a single user at a time. Being able to use multiple accounts on a Windows operating system does not make it a multi-user system; rather, only the network administrator is the real user.

Multi-tasking vs. single-tasking: When only a single program is allowed to run at a time, the system is classed as a single-tasking system. When the operating system allows the execution of multiple tasks at one time, it is classified as a multi-tasking operating system. Multitasking can be of two types: pre-emptive or co-operative. In pre-emptive multitasking, the operating system slices the CPU time and dedicates one slot to each of the programs. Unix-like operating systems such as Solaris and Linux support pre-emptive multitasking, as does AmigaOS. Co-operative multitasking is achieved by relying on each process to give time to the other processes in a defined manner. 16-bit versions of Microsoft Windows used co-operative multitasking; 32-bit versions, both Windows NT and Win9x, used pre-emptive multitasking. Mac OS prior to OS X supported co-operative multitasking.


Distributed:A distributed operating system manages a group of independent computers and makes them appear to be a single computer. The development of networked computers that could be linked and communicate with each other gave rise to distributed computing. Distributed computations are carried out on more than one machine. When computers in a group work in cooperation, they make a distributed system.

FUNCTIONS OF OPERATING SYSTEM

The major functions of an OS are:


1. Resource management: The resource management function of an OS allocates computer resources such as CPU time, main memory, secondary storage, and input and output devices for use.
2. Data management: The data management functions of an OS govern the input and output of data and its location, storage, and retrieval.
3. Job (task) management: The job management function of an OS prepares, schedules, controls, and monitors jobs submitted for execution to ensure the most efficient processing. A job is a collection of one or more related programs and their data.
4. Standard means of communication between user and computer: The OS establishes a standard means of communication between users and their computer systems. It does this by providing a user interface and a standard set of commands that control the hardware.

Typical day-to-day uses of an operating system:
1. Executing application programs.
2. Formatting floppy diskettes.
3. Setting up directories to organize your files.
4. Displaying a list of files stored on a particular disk.
5. Verifying that there is enough room on a disk to save a file.
6. Protecting and backing up your files by copying them to other disks for safekeeping.

Operating system capabilities can be described in terms of (1) the number of users they can accommodate at one time, (2) how many tasks can be run at one time, and (3) how they process those tasks.

Number of Users: A single-user operating system allows only one user at a time to access a computer. Most operating systems on microcomputers, such as DOS and Windows 95, are single-user access systems. A multi-user operating system allows two or more users to access a computer at the same time (for example, UNIX); the actual number of users depends on the hardware and the OS design.


Time sharing allows many users to access a single computer. This capability is typically found on large computer operating systems where many users need access at the same time.

Number of Tasks: An operating system can be designed for single tasking or multitasking. A single-tasking operating system allows only one program to execute at a time, and the program must finish executing completely before the next program can begin. A multitasking operating system allows a single CPU to execute what appears to be more than one program at a time. Context switching allows several programs to reside in memory but only one to be active at a time. The active program is said to be in the foreground; the other programs in memory are not active and are said to be in the background. Instead of having to quit a program and load another, you can simply switch the active program in the foreground to the background and bring a program from the background into the foreground with a few keystrokes.

Operating System Services

Following are five services provided by operating systems for the convenience of users.

Program Execution: The purpose of computer systems is to allow the user to execute programs, so the operating system provides an environment in which the user can conveniently run programs. The user does not have to worry about memory allocation, multitasking, or similar concerns; these things are taken care of by the operating system. Running a program involves allocating and deallocating memory and CPU scheduling in the case of multiprocessing. These functions cannot be given to user-level programs, so user-level programs cannot run programs independently without help from the operating system.

I/O Operations: Each program requires input and produces output, which involves the use of I/O. The operating system hides from the user the details of the underlying I/O hardware; all the user sees is that the I/O has been performed. By providing I/O, the operating system makes it convenient for users to run programs. For efficiency and protection, users cannot control I/O directly, so this service cannot be provided by user-level programs.

File System Manipulation: The output of a program may need to be written into new files, or input taken from existing files. The operating system provides this service, so the user does not have to worry about secondary storage management: the user gives a command for reading or writing to a file and sees the task accomplished. This service involves secondary storage management, and the speed of I/O that depends on it is critical to the performance of many programs, so it is best left to the operating system rather than giving individual users control of it. It would not be difficult for user-level programs to provide these services, but for the reasons above it is best if this service is left with the operating system (a small sketch of such a file request follows this list of services).

Communications: There are instances where processes need to communicate with each other to exchange information, either between processes running on the same computer or on different computers. By providing this service the operating system relieves the user of the worry of passing messages between processes. In cases where the messages need to be passed to processes on other computers through a network, it can be done by user programs; the user program may be customized to the specifics of the hardware through which the message transits and provide the service interface to the operating system.


Error Detection: An error in one part of the system may cause malfunctioning of the complete system. To avoid such a situation, the operating system constantly monitors the system to detect errors. This relieves the user of the worry of errors propagating to various parts of the system and causing malfunctions. This service cannot be allowed to be handled by user programs, because it involves monitoring and, in some cases, altering areas of memory, deallocating the memory of a faulty process, or relinquishing the CPU from a process that goes into an infinite loop. These tasks are too critical to be handed over to user programs; a user program given these privileges could interfere with the correct (normal) operation of the operating system.
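As a small illustration of the I/O and file-system-manipulation services above, the following C sketch copies one file to another purely by making requests to the operating system (the names in.txt and out.txt are placeholders):

    #include <fcntl.h>      /* open() */
    #include <unistd.h>     /* read(), write(), close() */

    /* Copy in.txt to out.txt; the program never touches the disk itself. */
    int main(void)
    {
        char buf[4096];
        ssize_t n;
        int in  = open("in.txt", O_RDONLY);
        int out = open("out.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);

        if (in < 0 || out < 0)
            return 1;                         /* could not open a file */

        while ((n = read(in, buf, sizeof buf)) > 0)
            write(out, buf, (size_t)n);       /* the kernel manages the disk blocks */

        close(in);
        close(out);
        return 0;
    }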

ADVANTAGES AND DISADVANTAGES OF OPERATING SYSTEMS

Windows:
1. Can be expensive, especially compared to Linux, which is in most cases free.
2. Easy to use, especially for new computer users, and plenty of help resources are available online.
3. Although Microsoft Windows has made great improvements in reliability with recent versions, it still lags behind its competitors.
4. It has a large library of available software, games and utilities, although many are expensive.
5. Hardware manufacturers all make drivers and support for Windows OS.
6. Openness to virus attacks is a major disadvantage.

Linux:
1. It is an open source OS, which in most cases is free.
2. Inexperienced computer users may find it more difficult to get to grips with Linux.
3. It is very reliable and rarely freezes.
4. Fewer computer programs, games and utilities are available for Linux.
5. Many programs are free or open source, even very complex ones.
6. There are still some manufacturers that do not offer hardware support for Linux OS, although there are fewer every year.
7. The open source nature of Linux allows more advanced users to customise the code as they wish.
8. Fewer people use Linux, therefore it is more difficult to find someone fully familiar with it, although there are vast resources online.

Mac OS:
1. Mac computers are generally more expensive than PCs.
2. Mac OS is a much more secure OS, and is far less open to viruses and malware.
3. Stability is a major advantage: it very rarely crashes, loses data or freezes.
4. Fewer computer programs and games are available for Macs.
5. As most computer components of Macs are made by Apple, there are not many driver issues, unlike with PCs, which are made by many different manufacturers.
6. Mac OS is not as customizable as Windows or Linux.

HOW DOES AN OPERATING SYSTEM WORK

Introduction: An operating system is the software that controls every aspect of a computer. The most common operating systems are Windows, UNIX and the Macintosh OS. To put it simply, an operating system carries out two basic functions: (1) it serves as a manager for the hardware and software resources held in the system; and (2) it deals with the hardware so that applications do not have to know every detail along the way. The duties of the operating system fall into six different categories: processor management, memory management, device management, storage management, application interface and user interface.

Processor Management: Processor management involves ensuring that all applications and processes get an appropriate amount of time from the processor so that they can function properly. It also involves taking advantage of as many processor cycles as possible to make everything work together properly. The operating system uses processes and threads to carry out these functions, and it continuously switches between processes, thousands of times per second.

Memory Management: Memory management is the process of ensuring that each process has the amount of memory needed to execute its task and that processes do not steal memory from each other. Another part of memory management is managing each type of memory so that it is used properly.
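A user program never carves up physical memory itself; it only asks for space and gives it back, and the memory manager does the rest. A minimal C sketch of such a request:

    #include <stdio.h>
    #include <stdlib.h>     /* malloc(), free() */

    int main(void)
    {
        /* Request one million integers; the C library and, beneath it,
           the kernel's memory manager decide where the bytes come from. */
        int *table = malloc(1000000 * sizeof *table);
        if (table == NULL) {
            puts("the system could not satisfy the request");
            return 1;
        }
        table[0] = 42;      /* the process sees ordinary, private memory */
        free(table);        /* hand the space back for reuse */
        return 0;
    }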

Device Management: Every piece of hardware uses a driver, a special program, to communicate with the system. The operating system uses drivers as translators between the electrical signals from the hardware and the programming code found in applications. The driver takes data from the operating system to the device and vice versa. The operating system controls this process by calling on the appropriate driver when it is needed.

Application Program Interface: Just as hardware has drivers, applications have application program interfaces (APIs). APIs allow programmers to use parts of the operating system and computer to carry out certain functions. The operating system holds all of the APIs that are recognizable to the computer and plays the role of interpreter for them; it then passes along the data required so that the function is carried out.

User Interface: The user interface aspect of the operating system manages the interaction between the user and the computer. Many operating systems use graphical user interfaces, which means that they use images and icons to communicate with the user. The operating system once again plays the role of interpreter, communicating with both the user and the computer in languages that each understands.

Operating System Boot Process

Hard Disk and Partitions: Partitioning is the process of dividing the hard disk into several chunks; one partition can be used to install an OS, or two or more partitions can be used to install multiple OSes. You can always have just one partition and use the entire hard disk space to install a single OS, but this becomes a data-management nightmare for users of large hard disks. Because of the structure of the Master Boot Record (MBR), you can have only four partitions, and these four partitions are called Primary Partitions. On a large hard disk four primary partitions may not be enough, hence the Extended Partition was introduced. An Extended Partition is not a usable partition by itself; it is a container used to hold Logical Drives, that is, the Extended Partition can be subdivided into multiple logical partitions. In order to boot into a partition, it must be designated as a bootable partition or Active Partition. The Active Partition is the partition that is flagged as bootable or which contains an OS; this is generally a Primary Partition.

Boot Records: Master, Partition, Extended, Logical Extended

Master Boot Record (MBR): The MBR is a small 512-byte area at the first physical sector of the hard disk. The location is denoted as CHS 0,0,1, meaning 0th cylinder, 0th head and 1st sector. The MBR contains a small program known as the bootstrap program, which is responsible for booting into any OS. The MBR also contains a table known as the Partition Table, which lists the available Primary Partitions on the hard disk, so it can have only four entries. This raises the question of what happens if we have more than four partitions; this is solved by the Extended Partition principle: the Partition Table treats the whole Extended Partition as one Primary Partition and lists it in the table. So a Partition Table can have two possible layouts: (1) up to 4 Primary Partitions, or (2) up to 3 Primary Partitions and 1 Extended Partition (the total not exceeding 4).

Partition Boot Sector (PBR): This is the logical first sector, that is, the sector at the start of a Primary Partition. It is also a 512-byte area, which contains code to initialize or run OS files. Each Primary Partition has its own PBR.

Extended Boot Sector (EBR): This is the logical first sector at the start of the Extended Partition. The EBR contains a Partition Table which lists the Logical Partitions available inside the Extended Partition, that is, it contains the starting address of each Logical Partition.

Logical Extended Boot Sector (LEBR): This is the logical first sector residing at the start of each Logical Partition. It plays the same role for a Logical Partition as the PBR does for a Primary Partition.
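The 512-byte sector described above can be pictured as a C structure. The field sizes (a 446-byte bootstrap area, four 16-byte partition entries and a 2-byte signature) follow the conventional MBR layout; the packed attribute is a GCC/Clang extension used here only so the illustration adds no padding:

    #include <stdint.h>
    #include <stdio.h>

    /* One of the four 16-byte entries in the MBR partition table. */
    struct __attribute__((packed)) partition_entry {
        uint8_t  status;        /* 0x80 = active (bootable), 0x00 = inactive */
        uint8_t  chs_first[3];  /* CHS address of the first sector */
        uint8_t  type;          /* partition type (primary, extended, ...) */
        uint8_t  chs_last[3];   /* CHS address of the last sector */
        uint32_t lba_first;     /* first sector, in linear (LBA) addressing */
        uint32_t sector_count;  /* size of the partition in sectors */
    };

    /* The whole Master Boot Record: the first 512-byte sector of the disk. */
    struct __attribute__((packed)) mbr {
        uint8_t  bootstrap[446];            /* bootstrap (boot loader) program */
        struct partition_entry table[4];    /* at most four primary entries */
        uint16_t signature;                 /* 0xAA55 marks a valid MBR */
    };

    int main(void)
    {
        /* 446 + 4*16 + 2 = 512 bytes, exactly one disk sector. */
        printf("MBR size: %zu bytes\n", sizeof(struct mbr));
        return 0;
    }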


Operating system architecture


There are 3 types of architecture: 1. Monolithic Architecture:

The monolithic approach is to define a high-level virtual interface over the hardware, with a set of primitives or system calls that implement operating system services such as process management, concurrency, and memory management in several modules that run in supervisor mode. When the implementation is complete and trustworthy, the tight internal integration of components allows the low-level features of the underlying system to be effectively utilized, making a good monolithic kernel highly efficient. Each component of the operating system is contained within the kernel, can communicate directly with any other component, and has unrestricted system access. While this makes the operating system very efficient, it also means that errors are more difficult to isolate, and there is a high risk of damage due to erroneous code. 2. Layered Architecture:

In a layered architecture, each layer communicates only with the layers immediately above and below it, and lower-level layers provide services to higher-level ones using an interface that hides their implementation. The modularity of layered operating systems allows the implementation of each layer to be modified without requiring any modification to adjacent layers. Although this modular approach imposes structure and consistency on the operating system, simplifying debugging and modification, a service request from a user process may pass through many layers of system software before it is serviced, so performance compares unfavourably to that of a monolithic kernel. Also, because all layers still have unrestricted access to the system, the kernel remains susceptible to errant code. 3. Microkernel Architecture:

A microkernel architecture includes only a very small number of services within the kernel in an attempt to keep it small and scalable. These services typically include low-level memory management, inter-process communication and basic process synchronisation to enable processes to cooperate. In microkernel designs, most operating system components, such as process management and device management, execute outside the kernel with a lower level of system access. Microkernels are highly modular, making them extensible, portable and scalable. Operating system components outside the kernel can fail without bringing down the operating system.


Experiment 2
Aim: To study the Unix operating system.
2.1 History of Unix:-

UNIX is an operating system which was first developed in 1969 at AT&T Bell Labs, and has been under constant development ever since. By operating system, we mean the suite of programs which make the computer work. It is a stable, multi-user, multi-tasking system, and some Unix systems have a graphical user interface (GUI). It is largely machine independent, has a monolithic kernel, and provides a simple user interface with consistent command formats. At the same time, a team from the University of California at Berkeley was working to improve UNIX. In 1977 it released the first Berkeley Software Distribution, which became known as BSD. Over time this won favour through innovations such as the C shell. Meanwhile the AT&T version was developing in different ways. The 1978 release of Version 7 included the Bourne shell for the first time. By 1983 commercial interest was growing, and Sun Microsystems produced a UNIX workstation. System V appeared, directly descended from the original AT&T UNIX, and is the prototype of the more widely used variants today.

2.2 System structure: Perhaps the most important or well-known of the Unix utilities is the Unix shell. The shell is the mechanism which allows users to enter commands to run other utility programs. There are several popular Unix shell programs, which will be discussed later. What is important to keep in mind is that the shell is merely another Unix utility program, which is typically loaded at login. The diagram below provides a visual representation of the organization of the Unix OS. At the core of the OS is the hardware, which is managed by the surrounding outer layer, the kernel. In the next outer layer come the utilities. Many of the utilities are system commands, but these can also be user-written programs, as shown by the a.out program. Finally, in the outermost layer are other application programs which can be built on top of lower-layer programs.


As RAM has become plentiful and affordable, many of the frequently used utility programs have become "built-in" to the shell.

2.3 FILE SYSTEM:The Unix file system is a methodology for logically organizing and storing large quantities of data such that the system is easy to manage. A file can be informally defined as a collection of data, which can be logically viewed as a stream of bytes (i.e. characters). A file is the smallest unit of storage in the Unix file system. By contrast, a file system consists of files, relationships to other files, as well as the attributes of each file. File attributes are information relating to the file, but do not include the data contained within a file. File attributes for a generic operating system might include:

a file type (i.e. what kind of data is in the file)
a file name (which may or may not include an extension)
a physical file size
a file owner
file protection/privacy capability
a file time stamp (time and date created/modified)
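Most of these attributes can be read from a program; a minimal C sketch using the POSIX stat() call (the file name example.txt is only a placeholder):

    #include <stdio.h>
    #include <sys/stat.h>   /* stat() and struct stat */
    #include <time.h>       /* ctime() */

    int main(void)
    {
        struct stat st;
        if (stat("example.txt", &st) != 0) {
            perror("stat");
            return 1;
        }
        printf("size  : %lld bytes\n", (long long)st.st_size);
        printf("owner : uid %u\n", (unsigned)st.st_uid);
        printf("mode  : %o\n", (unsigned)(st.st_mode & 0777));  /* permission bits */
        printf("mtime : %s", ctime(&st.st_mtime));              /* last modification */
        return 0;
    }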

Additionally, file systems provide tools which allow the manipulation of files, provide a logical organization, and provide services which map the logical organization of files to physical devices. From the beginner's perspective, the Unix file system is essentially composed of files and directories. Directories are special files that may contain other files. The Unix file system has a hierarchical (or tree-like) structure with its highest-level directory called root (denoted by /, pronounced slash). Immediately below the root directory are several subdirectories, most of which contain system files. Below these can exist system files, application files, and/or user data files. Similar to the concept of the process parent-child relationship, all files on a Unix system are related to one another; that is, files also have a parent-child existence. Thus all files share a common parental link, the top-most file (i.e. /) being the exception. Below is a diagram of a typical Unix file system. As you can see, the top-most directory is /, with the directories directly beneath it being system directories. Note that as Unix implementations and vendors vary, so will this file system hierarchy; however, the organization of most file systems is similar.


While this diagram is not all-inclusive, the following system directories are present in most Unix file systems:

bin - short for binaries, this is the directory where many commonly used executable commands reside
dev - contains device-specific files
etc - contains system configuration files
home - contains user directories and files
lib - contains all library files
mnt - contains device files related to mounted devices
proc - contains files related to system processes
root - the root user's home directory (note this is different from /)
sbin - system binary files reside here; if there is no sbin directory on your system, these files most likely reside in etc
tmp - storage for temporary files which are periodically removed from the file system
usr - also contains executable commands

File Types
There are four types of files in the Unix file system.

2.1 Ordinary Files
An ordinary file may contain text, a program, or other data. It can be either an ASCII file, with each of its bytes being in the numerical range 0 to 127, i.e. in the 7-bit range, or a binary file, whose bytes can be of all possible values 0 to 255, in the 8-bit range.

2.2 Directory Files
Suppose that in the directory x I have a, b and c, and that b is a directory, containing files u and v. Then b can be viewed not only as a directory, containing further files, but also as a file itself. The file b consists of information about the directory b; i.e. the file b has information stating that the directory b has files u and v, how large they are, when they were last modified, etc.


2.3 Device Files
In Unix, physical devices (printers, terminals etc.) are represented as files. This seems odd at first, but it really makes sense: this way, the same read() and write() functions used to read and write real files can also be used to read from and write to these devices.

2.4 Link Files
Suppose we have a file X, and type
ln X Y
If we then run ls, it will appear that a new file, Y, has been created as a copy of X, as if we had typed
cp X Y
However, the difference is that cp does create a new file, while ln merely gives an alternate name to an old file. If we make Y using ln, then Y is merely a new name for the same physical file X.
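On most Unix systems the ln command is built on the link() system call; a minimal C sketch that gives the existing file X the additional name Y:

    #include <stdio.h>
    #include <unistd.h>     /* link() */

    int main(void)
    {
        /* Give the existing file X a second name, Y.  Both names now refer
           to the same physical file; no data is copied. */
        if (link("X", "Y") != 0) {
            perror("link");
            return 1;
        }
        return 0;
    }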

3 Some File Commands
3.1 chmod
You can use this command to change the access permissions of any file for which you are the owner. The notation used is:
u user (i.e. owner)
g group
o others
+ add permission
- remove permission
r read
w write
x execute
As an example, the command
chmod ugo+rw .login
would add read and write permission for all users to the .login file of the person issuing this command. In some cases it is useful for a user to deny himself/herself permission to write to a file, e.g. to make sure he/she doesn't remove the file by mistake.

3.2 du and df
The du command displays the sizes in kilobytes of all files in the specified directory, and the total of all those sizes; if no directory is specified, the current directory is assumed. The df command displays the amount of unused space left in your disk systems.

3.3 diff
This command displays line-by-line differences between two ASCII files. If, for example, you have two versions of a C source file but don't remember how the new version differs from the old one, you could type
diff oldprog.c newprog.c


4 Wild Cards
These will save you a lot of typing! There are two wild-card characters in Unix, * and ?.
The wildcard * will match any string of characters. For example,
rm *.c
would delete all files in the current directory whose names end with .c.
The wildcard ? will match any single character. For example,
rm x?b.c
would delete all files whose names consisted of five characters, the first of which was x and the last three of which were b.c. The files x3b.c and xrb.c would be deleted, while the file xuvb.c would not. Similarly, rm prog?.c would delete all the files (in the current directory) whose names are prog followed by any single character and .c.
In addition,
[0-9] matches any character from 0 through 9
[a-z] matches any character from a through z
For instance,
rm test[1-3].c
would remove test1.c, test2.c and test3.c but not test4.c.
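Wildcard expansion is performed by the shell before the command ever runs; a program can do the same expansion itself with the POSIX glob() function, as in this C sketch:

    #include <glob.h>       /* glob(), globfree() */
    #include <stdio.h>

    int main(void)
    {
        glob_t g;
        /* Expand *.c exactly as the shell would before running rm *.c */
        if (glob("*.c", 0, NULL, &g) == 0) {
            for (size_t i = 0; i < g.gl_pathc; i++)
                printf("%s\n", g.gl_pathv[i]);
            globfree(&g);
        }
        return 0;
    }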

2.4 SERVICES OF OPERATING SYSTEM: Along with the aforementioned functions, the operating system should also be able to provide the following basic services to the user.

User Interface
All operating systems need to provide an interface to communicate with the user. This could be a command line interface or a graphical user interface. A command line interface or CLI is a method of interacting with an operating system or software using a command line interpreter. This command line interpreter may be a text terminal, terminal emulator, or remote shell client. The concept of the CLI originated when teletype machines (TTY) were connected to computers in the 1950s, and offered results on demand, compared to batch-oriented mechanical punch card input technology. Dedicated text-based CRT terminals followed, with faster interaction and more information visible at one time, and then graphical terminals enriched the visual display of information. Currently, personal computers encapsulate both functions in software. A graphical user interface (GUI) is a type of user interface which allows people to interact with a computer and computer-controlled devices using graphical icons, visual indicators or special graphical elements called widgets, along with text, labels or text navigation, to represent the information and actions available to a user. The actions are usually performed through direct manipulation of the graphical elements. Today, most modern operating systems contain GUIs. A few older operating systems tightly integrated the GUI with the kernel; for example, in the original implementations of Microsoft Windows and Mac OS the graphical subsystem was actually part of the operating system. More modern operating systems are modular, separating the graphics subsystem from the kernel (as is now done in Linux and Mac OS X) so that the graphics subsystem need not be part of the kernel at all.


Program Execution
The system must be able to load a program into memory and run that program, and to end its execution either normally or abnormally (indicating an error). This involves locating the executable file on the disk or other secondary storage media and loading its contents into memory. These steps may further include processing by another translator or interpreter, as in the case of the .NET platform, in which each program is compiled to MSIL (Microsoft Intermediate Language, now called CIL or Common Intermediate Language) and then compiled to native code at execution time by the .NET JIT (Just-In-Time) compiler.
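On a Unix-like system this service is reached through the fork(), exec() and wait() calls; the following minimal C sketch loads and runs the ls program and waits for it to end, normally or abnormally:

    #include <stdio.h>
    #include <sys/wait.h>   /* waitpid() */
    #include <unistd.h>     /* fork(), execlp() */

    int main(void)
    {
        pid_t pid = fork();             /* create a new process */
        if (pid == 0) {
            execlp("ls", "ls", "-l", (char *)NULL);  /* load and run ls */
            perror("execlp");           /* reached only if the load failed */
            return 1;
        }
        int status;
        waitpid(pid, &status, 0);       /* parent waits for the child to end */
        return 0;
    }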

Device Management
To perform useful functions, processes need access to the peripherals connected to the computer, which are controlled by the kernel through device drivers. For example, to show the user something on the screen, an application would make a request to the kernel, which would forward the request to its display driver, which is then responsible for actually plotting the character/pixel. In a plug and play system, a device manager first performs a scan on different hardware buses, such as Peripheral Component Interconnect (PCI) or Universal Serial Bus (USB), to detect installed devices, then searches for the appropriate drivers. As device management is a very OS-specific topic, these drivers are handled differently by each kind of kernel design, but in every case, the kernel has to provide the I/O to allow drivers to physically access their devices through some port or memory location. Very important decisions have to be made when designing the device management system, as in some designs accesses may involve context switches, making the operation very CPU-intensive and easily causing a significant performance overhead.

Resource Allocation and Accounting

When multiple users are logged on or multiple jobs are running concurrently, resources must be allocated to each of them. Some resources may have special allocation code and rules, while others may have general request and release code. To keep track of which users use how much and what kinds of computer resources, the OS should also implement an accounting scheme.

Communications
There are instances where processes need to communicate with each other to exchange information. It may be between processes running on the same computer or on different computers. By providing this service the operating system relieves the user of the worry of passing messages between processes. In cases where the messages need to be passed to processes on other computers through a network, it can be done by user programs; the user program may be customized to the specifics of the hardware through which the message transits and provide the service interface to the operating system.
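As a sketch of such message passing between two processes on the same machine, the classic Unix mechanism is a pipe; here the parent sends a short message and the child receives it:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>     /* pipe(), fork(), read(), write() */

    int main(void)
    {
        int fd[2];
        pipe(fd);                               /* fd[0] = read end, fd[1] = write end */

        if (fork() == 0) {                      /* child: receives the message */
            char buf[64];
            ssize_t n = read(fd[0], buf, sizeof buf - 1);
            if (n > 0) {
                buf[n] = '\0';
                printf("child received: %s\n", buf);
            }
            return 0;
        }
        const char *msg = "hello";              /* parent: sends the message */
        write(fd[1], msg, strlen(msg));
        return 0;
    }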


Error Detection
An error in one part of the system may cause malfunctioning of the complete system. To avoid such a situation, the operating system constantly monitors the system to detect errors. This relieves the user of the worry of errors propagating to various parts of the system and causing malfunctions. This service cannot be allowed to be handled by user programs, because it involves monitoring and, in some cases, altering areas of memory, deallocating the memory of a faulty process, or relinquishing the CPU from a process that goes into an infinite loop. These tasks are too critical to be handed over to user programs; a user program given these privileges could interfere with the correct (normal) operation of the operating system.

2.4 Architecture of Unix:-

A Unix architecture is a computer operating system architecture that embodies the Unix philosophy. It may adhere to standards such as the Single UNIX Specification (SUS) or the similar POSIX IEEE standard.


Kernel:

The kernel is the master program that provides file-related activities, process scheduling, memory management, and various other operating system functions through system calls. In other words, it controls the resources of the computer system and allocates them to different users and different tasks. The major portion of the kernel is written in the C language, so it is comparatively easy to understand, debug, and enhance, and being written in C makes it portable in nature. As you can see in the diagram, the kernel is placed between the hardware and the utility programs (like shells and editors such as vi or sed), so it works between the two. Moreover, the kernel maintains various data structures to manage processes. Each process has its own priority; a higher-priority process is executed before a lower-priority process. The kernel is divided into two parts:

1. Process management
The primary task of process management is to manage memory-related and process-related activities at the different states of execution: creation and deletion of processes, scheduling of processes, and provision of mechanisms for synchronization, communication, and deadlock handling of processes.

2. File management
The task of file management is to manage file-related activities. Unix is the kind of operating system that treats I/O devices as files, so each I/O device has its own file, known as a device driver file, used to drive it. The file management part of the kernel handles these device drivers and stores these files in the directory /dev under the root directory. If we attach any new I/O device to a Unix system, it is necessary to create a file for that device in the /dev directory and to record its characteristics in that file, such as its type (character oriented or block oriented), the address of the driver program, the memory buffer reserved for the device, and others.

Shells:

In Unix we do not deal directly with the kernel. It is the shell, one of the utility programs, that is started up when the user logs in. The shell displays a prompt symbol and waits for input from the user. When you type a command and press the Enter key, the shell obtains your command, executes it if possible, and displays the prompt symbol again in order to receive your next command. That is why the shell is also called the Unix system command interpreter; a minimal sketch of this command loop is given after the three shells described below. Moreover, when we want to access the hardware, we make a request to the shell, the shell requests the kernel, and finally the kernel requests the hardware. Basically the shell handles the user's interaction with the system. Some built-in commands are part of the shell, and the remaining commands are separate programs stored elsewhere. There are three types of shells that are widely used and exist in the Unix OS:

Bourne shell:
This shell was designed by Stephen Bourne of Bell Labs. It is a powerful and very widely used shell. The prompt symbol of the Bourne shell is the $ (dollar) sign.

The C Shell:
The C shell was developed at the University of California and was designed by Bill Joy. The C shell gets its name from its programming language, which resembles the C programming language in syntax. The prompt symbol of the C shell is the percent (%) sign.

Korn Shell:

Like the Bourne shell, the Korn shell was also developed at AT&T's Bell Labs. This shell gets its name from its inventor, David Korn of Bell Labs. Moreover, the shells provide metacharacters like *, [ ], ? and so on for better searching of files.
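The command loop that all of these shells share can be pictured with a very small C sketch; it uses the C library's system() call to hand each line to the OS for execution, which is only an approximation of what a real shell does:

    #include <stdio.h>
    #include <stdlib.h>     /* system() */
    #include <string.h>

    int main(void)
    {
        char line[256];
        for (;;) {
            printf("$ ");                       /* the prompt symbol */
            fflush(stdout);
            if (fgets(line, sizeof line, stdin) == NULL)
                break;                          /* end of input: log out */
            line[strcspn(line, "\n")] = '\0';   /* strip the trailing newline */
            if (strcmp(line, "exit") == 0)
                break;
            system(line);                       /* ask the OS to run the command */
        }
        return 0;
    }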

Utility programs:
The Unix system contains a large number of utility and application programs, such as editors (ed, ex, vi, sed) and so on. These utility programs, and the application programs developed in a Unix environment, are also easily portable to another machine having the same environment.

2.5 Advantages and disadvantages of Unix:-

Advantages
1. Full multitasking with protected memory. Multiple users can run multiple programs each at the same time without interfering with each other or crashing the system.
2. Very efficient virtual memory, so many programs can run with a modest amount of physical memory.
3. Access controls and security. All users must be authenticated by a valid account and password to use the system at all. All files are owned by particular accounts, and the owner can decide whether others have read or write access to his files.
4. A rich set of small commands and utilities that do specific tasks well, not cluttered up with lots of special options. Unix is a well-stocked toolbox, not a giant do-it-all Swiss Army knife.
5. The ability to string commands and utilities together in unlimited ways to accomplish more complicated tasks, not limited to preconfigured combinations or menus as in personal computer systems.
6. A powerfully unified file system. Everything is a file: data, programs, and all physical devices. The entire file system appears as a single large tree of nested directories, regardless of how many different physical devices (disks) are included.
7. A lean kernel that does the basics for you but doesn't get in the way when you try to do the unusual.
8. Available on a wide variety of machines: the most truly portable operating system.
9. Optimized for program development, and thus for the unusual circumstances that are the rule in research.

Disadvantages
1. The traditional command line shell interface is user hostile, designed for the programmer rather than the casual user.
2. Commands often have cryptic names and give very little response to tell the user what they are doing. Much use of special keyboard characters means that little typos have unexpected results.
3. To use Unix well, you need to understand some of the main design features. Its power comes from knowing how to make commands and programs interact with each other, not just from treating each as a fixed black box.
4. The richness of utilities (over 400 standard ones) often overwhelms novices. Documentation is short on examples and tutorials to help you figure out how to use the many tools provided to accomplish various kinds of task.


EXPERIMENT-3 AIM: STUDY OF LINUX


Linux is a Unix-like computer operating system assembled under the model of free and open source software development and distribution. The defining component of Linux is the Linux kernel, an operating system kernel first released on 5 October 1991 by Linus Torvalds. Linux was originally developed as a free operating system for Intel x86-based personal computers. It has since been ported to more computer hardware platforms than any other operating system. It is a leading operating system on servers and other big-iron systems such as mainframe computers and supercomputers; more than 90% of today's 500 fastest supercomputers run some variant of Linux, including the 10 fastest. Linux also runs on embedded systems (devices where the operating system is typically built into the firmware and highly tailored to the system) such as mobile phones, tablet computers, network routers, televisions and video game consoles; the Android system in wide use on mobile devices is built on the Linux kernel.

The development of Linux is one of the most prominent examples of free and open source software collaboration: the underlying source code may be used, modified, and distributed, commercially or non-commercially, by anyone under licenses such as the GNU General Public License. Typically Linux is packaged in a format known as a Linux distribution for desktop and server use. Some popular mainstream Linux distributions include Debian (and its derivatives such as Ubuntu), Fedora and openSUSE. Linux distributions include the Linux kernel, supporting utilities and libraries, and usually a large amount of application software to fulfill the distribution's intended use. A distribution oriented toward desktop use will typically include the X Window System and an accompanying desktop environment such as GNOME or KDE Plasma. Some such distributions may include a less resource-intensive desktop such as LXDE or Xfce for use on older or less powerful computers. A distribution intended to run as a server may omit all graphical environments from the standard install and instead include other software such as the Apache HTTP Server and an SSH server such as OpenSSH. Because Linux is freely redistributable, anyone may create a distribution for any intended use. Applications commonly used with desktop Linux systems include the Mozilla Firefox web browser, the LibreOffice office application suite, and the GIMP image editor. Since the main supporting user-space system tools and libraries originated in the GNU Project, initiated in 1983 by Richard Stallman, the Free Software Foundation prefers the name GNU/Linux.


HISTORY

Unix: The Unix operating system was conceived and implemented in 1969 at AT&T's Bell Laboratories in the United States by Ken Thompson, Dennis Ritchie, Douglas McIlroy, and Joe Ossanna. It was first released in 1971 and was initially written entirely in assembly language, a common practice at the time. Later, in a key pioneering approach in 1973, Unix was re-written in the programming language C by Dennis Ritchie (with the exception of parts of the kernel and I/O). The availability of an operating system written in a high-level language allowed easier portability to different computer platforms. With a legal glitch forcing AT&T to license the operating system's source code to anyone who asked, Unix quickly grew and became widely adopted by academic institutions and businesses. In 1984, AT&T divested itself of Bell Labs; free of the legal glitch requiring free licensing, Bell Labs began selling Unix as a proprietary product.

GNU

Richard Stallman, founder of the GNU project

The GNU Project, started in 1983 by Richard Stallman, had the goal of creating a "complete Unix-compatible software system" composed entirely of free software. Work began in 1984. Later, in 1985, Stallman started the Free Software Foundation, and he wrote the GNU General Public License (GNU GPL) in 1989. By the early 1990s, many of the programs required in an operating system (such as libraries, compilers, text editors, a Unix shell, and a windowing system) were completed, although low-level elements such as device drivers, daemons, and the kernel were stalled and incomplete. Linus Torvalds has said that if the GNU kernel had been available at the time (1991), he would not have decided to write his own.

BSD: Although not released until 1992 due to legal complications, development of 386BSD, from which NetBSD and FreeBSD descended, predated that of Linux. Linus Torvalds has said that if 386BSD had been available at the time, he probably would not have created Linux.


MINIX:-MINIX is an inexpensive minimal Unix-like operating system, designed for education in computer science, written by Andrew S. Tanenbaum. Starting with version 3 in 2005, MINIX has become free and redesigned for "serious" use. In 1991 while attending the University of Helsinki, Torvalds became curious about operating systems and frustrated by the licensing of MINIX, which limited it to educational use only. He began to work on his own operating system which eventually became the Linux kernel. Torvalds began the development of the Linux kernel on MINIX, and applications written for MINIX were also used on Linux. Later Linux matured and further Linux development took place on Linux systems. GNU applications also replaced all MINIX components, because it was advantageous to use the freely available code from the GNU project with the fledgling operating system. (Code licensed under the GNU GPL can be reused in other projects as long as they also are released under the same or a compatible license.) Torvalds initiated a switch from his original license, which prohibited commercial redistribution, to the GNU GPL. Developers worked to integrate GNU components with Linux to make a fully functional and free operating system.

Commercial and popular uptake

Ubuntu, a popular Linux distribution

Today, Linux systems are used in every domain, from embedded systems to supercomputers, and have secured a place in server installations, often using the popular LAMP application stack. Use of Linux distributions in home and enterprise desktops has been growing. They have also gained popularity with various local and national governments. The federal government of Brazil is well known for its support for Linux. News of the Russian military creating its own Linux distribution has also surfaced, and has come to fruition as the G.H.ost Project. The Indian state of Kerala has gone to the extent of mandating that all state high schools run Linux on their computers. China uses Linux exclusively as the operating system for its Loongson processor family to achieve technology independence. In Spain some regions have developed their own Linux distributions, which are widely used in education and official institutions, like gnuLinEx in Extremadura and Guadalinex in Andalusia. Portugal is also using its own Linux distribution, Caixa Mágica, used in the Magalhães netbook and the e-escola government program. France and Germany have also taken steps toward the adoption of Linux. Linux distributions have also become popular in the netbook market, with many devices such as the ASUS Eee PC and Acer Aspire One shipping with customized Linux distributions installed.

Current development: Torvalds continues to direct the development of the kernel. Stallman heads the Free Software Foundation, which in turn supports the GNU components. Finally, individuals and corporations develop third-party non-GNU components. These third-party components comprise a vast body of work and may include both kernel modules and user applications and libraries. Linux vendors and communities combine and distribute the kernel, GNU components, and non-GNU components, with additional package management software, in the form of Linux distributions.

DESIGN:-A Linux-based system is a modular Unix-like operating system. It derives much of its basic design from principles established in UNIX during the 1970s and 1980s. Such a system uses a monolithic kernel, the Linux kernel, which handles process control, networking, and peripheral and file system access. Device drivers are either integrated directly with the kernel or added as modules loaded while the system is running. Separate projects that interface with the kernel provide much of the system's higher-level functionality. The GNU userland is an important part of most Linux-based systems, providing the most common implementation of the C library, a popular shell, and many of the common Unix tools which carry out many basic operating system tasks. The graphical user interface (or GUI) used by most Linux systems is built on top of an implementation of the X Window System.

USER INTERFACE:-Users operate a Linux-based system through a command line interface (CLI), a graphical user interface (GUI), or through controls attached to the associated hardware, which is common for embedded systems. For desktop systems, the default mode is usually a graphical user interface, by which the CLI is available through terminal emulator windows or on a separate virtual console. Most low-level Linux components, including the GNU userland, use the CLI exclusively. The CLI is particularly suited for automation of repetitive or delayed tasks, and provides very simple inter-process communication. A graphical terminal emulator program is often used to access the CLI from a Linux desktop. A Linux system typically implements a CLI by a shell, which is also the traditional way of interacting with a UNIX system. A Linux distribution specialized for servers may use the CLI as its only interface. On desktop systems, the most popular user interfaces are the extensive desktop environments KDE Plasma Desktop, GNOME, and Xfce, though a variety of additional user interfaces exist. Most popular user interfaces are based on the X Window System, often simply
called "X". It provides network transparency and permits a graphical application running on one system to be displayed on another where a user may interact with the application. Other GUIs may be classified as simple X window managers, such as FVWM, Enlightenment, and Window Maker, which provide a minimalist functionality with respect to the desktop environments. A window manager provides a means to control the placement and appearance of individual application windows, and interacts with the X Window System. The desktop environments include window managers as part of their standard installations (Mutter for GNOME, KWin for KDE, Xfwm for Xfce as of January 2012) although users may choose to use a different window manager if preferred.

DEVELOPMENT: The primary difference between Linux and many other popular contemporary operating systems is that the Linux kernel and other components are free and open source software. Linux is not the only such operating system, although it is by far the most widely used. Some free and open source software licenses are based on the principle of copyleft, a kind of reciprocity: any work derived from a copyleft piece of software must also be copyleft itself. The most common free software license, the GNU GPL, is a form of copyleft, and is used for the Linux kernel and many of the components from the GNU project. Linux-based distributions are intended by developers for interoperability with other operating systems and established computing standards. Linux systems adhere to POSIX, SUS, ISO, and ANSI standards where possible, although to date only one Linux distribution has been POSIX.1 certified, Linux-FT. Free software projects, although developed in a collaborative fashion, are often produced independently of each other. The fact that the software licenses explicitly permit redistribution, however, provides a basis for larger-scale projects that collect the software produced by stand-alone projects and make it available all at once in the form of a Linux distribution. A Linux distribution, commonly called a "distro", is a project that manages a remote collection of system software and application software packages available for download and installation through a network connection. This allows users to adapt the operating system to their specific needs. Distributions are maintained by individuals, loose-knit teams, volunteer organizations, and commercial entities. A distribution is responsible for the default configuration of the installed Linux kernel, general system security, and more generally the integration of the different software packages into a coherent whole. Distributions typically use a package manager such as dpkg, Synaptic, YaST, or Portage to install, remove and update all of a system's software from one central location.

Programming on Linux: Most Linux distributions support dozens of programming languages. The original development tools used for building both Linux applications and operating system programs are found within the GNU toolchain, which includes the GNU Compiler Collection (GCC) and the GNU build system. Amongst others, GCC provides compilers for Ada, C, C++, Java, and Fortran. First released in 2003, the Low Level Virtual Machine project provides an alternative open-source compiler for many languages. Proprietary compilers for Linux include the Intel C++ Compiler, Sun Studio, and
IBM XL C/C++ Compiler. BASIC, in the form of Visual Basic, is supported by such projects as Gambas, FreeBASIC, and XBasic, and, for terminal programming in the style of QuickBASIC or Turbo BASIC, in the form of QB64. Most distributions also include support for PHP, Perl, Ruby, Python and other dynamic languages. While not as common, Linux also supports C# (via Mono), Vala, and Scheme. A number of Java Virtual Machines and development kits run on Linux, including the original Sun Microsystems JVM (HotSpot) and IBM's J2SE RE, as well as many open-source projects like Kaffe and JikesRVM. GNOME and KDE are popular desktop environments and provide a framework for developing applications. These projects are based on the GTK+ and Qt widget toolkits, respectively, which can also be used independently of the larger framework. Both support a wide variety of languages. There are a number of development environments available, including Anjuta, Code::Blocks, CodeLite, Eclipse, Geany, ActiveState Komodo, KDevelop, Lazarus, MonoDevelop, NetBeans, Qt Creator and Omnis Studio, while the long-established editors Vim and Emacs remain popular.

USES: As well as those designed for general-purpose use on desktops and servers, distributions may be specialized for different purposes including: computer architecture support, embedded systems, stability, security, localization to a specific region or language, targeting of specific user groups, support for real-time applications, or commitment to a given desktop environment. Furthermore, some distributions deliberately include only free software. Currently, over three hundred distributions are actively developed, with about a dozen distributions being most popular for general-purpose use. Linux is a widely ported operating system kernel. The Linux kernel runs on a highly diverse range of computer architectures: in the hand-held ARM-based iPAQ and the mainframe IBM System z9 and System z10, and in devices ranging from mobile phones to supercomputers. Specialized distributions exist for less mainstream architectures. The ELKS kernel fork can run on Intel 8086 or Intel 80286 16-bit microprocessors, while the µClinux kernel fork may run on systems without a memory management unit. The kernel also runs on architectures that were only ever intended to use a manufacturer-created operating system, such as Macintosh computers (with both PowerPC and Intel processors), PDAs, video game consoles, portable music players, and mobile phones. There are several industry associations and hardware conferences devoted to maintaining and improving support for diverse hardware under Linux, such as FreedomHEC.


LINUX ARCHITECTURE

Introduction to the Linux OS: Linux is open source, free software which is based on UNIX. It is a complex operating system, and it places a great deal of control in the hands of the end user, who is free to modify its code. The basic architecture of Linux is based on its kernel. The first Linux kernel was developed in 1991, and it has since been ported to many PC architectures. All of the Linux code can be modified free of cost and redistributed, commercially or non-commercially, under a GNU license.

Components of the Linux kernel: Linux is based on a monolithic kernel. It performs multitasking in user as well as kernel mode, supports virtual memory, and provides shared libraries, demand loading, copy-on-write executables, inter-process communication, and sound memory management and threading. It is this architecture that users have adopted so successfully. Linux performs multitasking in a way that is transparent to user processes: each process appears to have the main memory and other hardware resources to itself, as if it were the only process running on the system. There are five basic subsystems of the kernel: the process scheduler, the memory manager, the virtual file system, the network interface and inter-process communication. The process scheduler allows and controls process access to the central processing unit. The memory manager enables multiple processes to make use of main memory in a secure manner. The virtual file system abstracts the details of the various hardware devices in order to present a common file interface to practically every device. The network interface is responsible for providing access to networking protocols and hardware. Inter-process communication is a complex task, as it handles the various mechanisms that support process-to-process communication on a single Linux system.

The kernel software of the Linux OS: The Linux kernel is efficient software capable of multitasking. It provides virtual memory, shared libraries, demand loading, a memory manager, copy-on-write executables, and TCP/IP networking. Linux has a monolithic kernel; kernel extensions and device drivers typically operate in ring 0, which gives them full access to the hardware, though some run in user space. Unlike standard monolithic kernels, it is easy to configure: modules can be loaded and unloaded while the system is running. The monolithic kernel also allows preemption of drivers. Preemption helps to reduce latency, improving the responsiveness of the system and making Linux more suitable for real-time applications.

File System of LINUX:- The Linux file system is organised as a single root directory with subdirectories beneath it. Subdirectories are commonly used as mount points, where additional local or network file systems can be attached to the tree. Hardware devices are also incorporated into the file hierarchy, so the device driver interface is exposed to the end user as ordinary device files within the same tree.
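A brief shell sketch of this layout on a typical system (the paths shown are common defaults, not guaranteed on every distribution):

    # the single root directory and its standard subdirectories
    ls /
    # currently mounted file systems and their mount points
    mount | head
    # device files that expose the driver interface to the user
    ls -l /dev | head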


EXPERIMENT-4

AIM:- Introduction to basic commands of Linux


mkdir - make directories
Usage: mkdir [OPTION] DIRECTORY
Creates the DIRECTORY(ies), if they do not already exist. Mandatory arguments to long options are mandatory for short options too.
-m, --mode=MODE   set the permission mode (as in chmod), not a=rwx - umask
-p, --parents     no error if existing, make parent directories as needed
-v, --verbose     print a message for each created directory
--help            display this help and exit
--version         output version information and exit

cd - change directories
Use cd to change directories. Type cd followed by the name of a directory to access that directory. Keep in mind that you are always in a directory and can navigate to directories hierarchically above or below.

mv - move or rename files and directories
Type mv followed by the current name of a directory and the new name of the directory. Ex: mv testdir newnamedir

pwd - print working directory
pwd shows you the full path to the directory you are currently in. This is very handy to use, especially when performing some of the other commands on this page.
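A short sketch that ties these commands together (the directory names are only examples):

    # create a nested directory in a single step
    mkdir -p projects/demo
    # move into it and confirm where we are
    cd projects/demo
    pwd              # prints the full path, e.g. /home/user/projects/demo
    # go back up one level and rename the directory
    cd ..
    mv demo newnamedir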

rmdir - Remove an existing (empty) directory
rm -r - Removes directories and the files within those directories recursively.

chown - change file owner and group
Usage:
chown [OPTION] OWNER[:[GROUP]] FILE
chown [OPTION] :GROUP FILE
chown [OPTION] --reference=RFILE FILE
Changes the owner and/or group of each FILE to OWNER and/or GROUP. With --reference, change the owner and group of each FILE to those of RFILE.
Options:
-c, --changes          like verbose but report only when a change is made
--dereference          affect the referent of each symbolic link, rather than the symbolic link itself
-h, --no-dereference   affect each symbolic link instead of any referenced file (useful only on systems that can change the ownership of a symlink)
--from=CURRENT_OWNER:CURRENT_GROUP  change the owner and/or group of each file only if its current owner and/or group match those specified here. Either may be omitted, in which case a match is not required for the omitted attribute.
--no-preserve-root     do not treat '/' specially (the default)
--preserve-root        fail to operate recursively on '/'
-f, --silent, --quiet  suppress most error messages
--reference=RFILE      use RFILE's owner and group rather than specifying OWNER:GROUP values
-R, --recursive        operate on files and directories recursively
-v, --verbose          output a diagnostic for every file processed
The following options modify how a hierarchy is traversed when the -R option is also specified. If more than one is specified, only the final one takes effect.
-H  if a command line argument is a symbolic link to a directory, traverse it
-L  traverse every symbolic link to a directory encountered
-P  do not traverse any symbolic links (default)

chmod - change file access permissions
Usage: chmod [-R] permissions filenames
-R           change the permissions on files in the subdirectories of the directory you are currently in
permissions  specifies the rights that are being granted (see below)
filenames    the file or directory that you are associating the rights with
Symbolic permissions:
u - user who owns the file
g - group that owns the file
o - other
a - all
r - read the file
w - write or edit the file
x - execute or run the file as a program
Numeric permissions (chmod can also be given numeric/octal permissions):
400 read by owner      040 read by group      004 read by anybody (other)
200 write by owner     020 write by group     002 write by anybody
100 execute by owner   010 execute by group   001 execute by anybody
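For example (the file names, user and group below are placeholders):

    # octal 644: read/write for the owner, read-only for group and others
    chmod 644 notes.txt
    # add execute permission for the owner only, using symbolic mode
    chmod u+x script.sh
    # recursively hand a directory tree over to user 'harshit' and group 'students'
    # (changing ownership normally requires root privileges)
    chown -R harshit:students projects/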

ls - Short listing of directory contents
-a  list hidden files
-d  list the name of the current directory
-F  show directories with a trailing '/' and executable files with a trailing '*'
-g  show group ownership of the file in the long listing
-i  print the inode number of each file
-l  long listing giving details about files and directories
-R  list all subdirectories encountered
-t  sort by time modified instead of by name

cp - Copy files
cp myfile yourfile
Copies the file "myfile" to the file "yourfile" in the current working directory. This command will create the file "yourfile" if it doesn't exist, and will normally overwrite it without warning if it exists.
cp -i myfile yourfile
With the "-i" option, if the file "yourfile" exists, you will be prompted before it is overwritten.
cp -i /data/myfile .
Copies the file "/data/myfile" to the current working directory and names it "myfile", prompting before overwriting the file.
cp -dpr srcdir destdir
Copies all files from the directory "srcdir" to the directory "destdir", preserving links (-d option) and file attributes (-p option), and copying recursively (-r option). With these options, a directory and all of its contents can be copied to another directory.

ln - Creates a link to a file
ln -s test symlink
Creates a symbolic link named "symlink" that points to the file "test". Typing "ls -i test symlink" will show that the two files are different, with different inodes. Typing "ls -l test symlink" will show that symlink points to the file test.

locate - A fast, database-driven file locator
slocate -u
Builds the slocate database. It will take several minutes to complete, and it must be run before searching for files; however, cron runs this command periodically on most systems.
locate whereis
Lists all files whose names contain the string "whereis".
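A few of these in combination (the file names are examples):

    # long listing, including hidden files, sorted by modification time
    ls -alt
    # copy a file, asking before overwriting an existing destination
    cp -i report.txt report.bak
    # create a symbolic link and inspect it
    ln -s report.txt latest
    ls -l report.txt latest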

more - Allows file contents or piped output to be sent to the screen one page at a time.
less - Like more, but also allows backward movement through the file.
cat files - Sends the contents of the specified files to standard output. This is a way to list the contents of short files to the screen, and it works well with piping.
whereis - Reports all known instances of a command.
wc - Prints byte, word, and line counts.
bg / bg jobs - Resumes the current suspended job (or, in the alternative form, the specified jobs) in the background, so that a new user prompt appears immediately. Use the jobs command to discover the identities of background jobs.
cal month year - Prints a calendar for the specified month of the specified year.
clear - Clears the terminal screen.
cmp file1 file2 - Compares two files, reporting all discrepancies. Similar to the diff command, though the output format differs.
diff file1 file2 - Compares two files, reporting all discrepancies. Similar to the cmp command, though the output format differs.
dmesg - Prints the messages resulting from the most recent system boot.
fg / fg jobs - Brings the current job (or the specified jobs) to the foreground.
file files - Determines and prints a description of the type of each specified file.
find path -name pattern -print - Searches the specified path for files with names matching the specified pattern (usually enclosed in single quotes) and prints their names. The find command has many other arguments and functions; see the online documentation.
finger users - Prints descriptions of the specified users.
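Some quick examples (the paths and file names are illustrative; /etc/services exists on most Linux systems):

    # send a short file to the screen
    cat /etc/hostname
    # count the lines, words and bytes in a file
    wc /etc/services
    # page through a long file, with backward scrolling
    less /etc/services
    # find every .txt file below the current directory and print its path
    find . -name '*.txt' -print
    # compare two files and report the differences
    diff old.txt new.txt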

free - Displays the amount of used and free system memory.
ftp hostname - Opens an FTP connection to the specified host, allowing files to be transferred. The FTP program provides subcommands for accomplishing file transfers; see the online documentation.
head files - Prints the first several lines of each specified file.
ispell files - Checks the spelling of the contents of the specified files.
kill process_ids / kill -signal process_ids / kill -l - Kills the specified processes, sends the specified processes the specified signal (given as a number or name), or prints a list of available signals.
killall program / killall -signal program - Kills all processes that are instances of the specified program, or sends the specified signal to all processes that are instances of the specified program.
mail - Launches a simple mail client that permits sending and receiving email messages.
man title / man section title - Prints the specified man page.
ping host - Sends ICMP echo requests to the specified host. A response confirms that the host is operational.
reboot - Reboots the system (requires root privileges).
shutdown minutes / shutdown -r minutes - Shuts down the system after the specified number of minutes elapses (requires root privileges). The -r option causes the system to be rebooted once it has shut down.
sleep time - Causes the command interpreter to pause for the specified number of seconds.
sort files - Sorts the specified files. The command has many useful arguments; see the online documentation.
split file - Splits a file into several smaller files. The command has many arguments; see the online documentation.
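For instance (the process id and program name below are placeholders):

    # list the signal names that kill can send
    kill -l
    # send the default TERM signal to process id 1234, then force-kill it if it ignores that
    kill 1234
    kill -9 1234
    # kill every running instance of a program called 'myapp'
    killall myapp
    # print the first five lines of a file
    head -n 5 /etc/passwd
    # pause the shell for three seconds
    sleep 3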

sync - Completes all pending input/output operations by flushing buffered data to disk.
telnet host - Opens a login session on the specified host.
top - Prints a display of system processes that is continually updated until the user presses the q key.
traceroute host - Uses echo requests to determine and print a network path to the host.
uptime - Prints the system uptime.
w - Prints the current system users and what they are doing.
wall - Prints a message to each user except those who have disabled message reception. Type Ctrl-D to end the message.
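A final sketch of the monitoring commands (the host name is an example):

    # how long the system has been running, plus load averages
    uptime
    # who is logged in and what they are running
    w
    # trace the network path to a remote host
    traceroute example.com
    # broadcast a one-line message to every logged-in terminal
    echo "System going down for maintenance at 5 pm" | wall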
