
How Is UNIX Different From Windows

Free Vs Paid
The UNIX operating system is open source, which means anyone can use it, modify it, and do almost whatever they want with it: adopt new ideas, create hacks, and a whole heap more. UNIX is fundamentally community oriented; without the community backing it, it would probably not be nearly as popular as it is today.

Because UNIX is community oriented, there are many different flavours of UNIX. This basically
means that users take the UNIX kernel as a base and adapt it to their own needs. Even Mac OS X is
a flavour of UNIX. Some other flavours include:

o FreeBSD
o Novell
o HP-UX
o Solaris
o Linux
o Red Hat
o Debian
o Ubuntu
o SuSE
Windows, on the other hand, is not open source; it was coded and created by Microsoft, and people
are not able to edit it or change the code in any way.

GUI / Command Line


The main difference that many people will find is that Windows is purely GUI-based, whereas UNIX
is mostly known for its text-based interface, although it does have a GUI like Windows. Many system and
network administrators prefer to use the command line in UNIX rather than the Graphical User
Interface, as the command line provides more functionality.

Files and File Structure


In Windows, there is a registry which contains system configuration information, alongside files and folders. In
UNIX, everything is a file, and folders are called directories. Since everything is a file, disks and
partitions are mounted as directories, devices appear as files in /dev, and running processes
appear as files in /proc.
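As a small illustration of the "everything is a file" idea, the sketch below (Python, assuming a Linux system where /proc and /dev are present) reads kernel and device information with ordinary file operations:

    # A minimal sketch, assuming a Linux system with /proc and /dev mounted.
    # Kernel state and devices are exposed as files, so normal file APIs work.
    import os

    # /proc presents running processes and kernel information as text files.
    with open("/proc/cpuinfo") as f:
        print("First line of /proc/cpuinfo:", f.readline().strip())

    # /dev presents attached devices as file-like entries.
    print("Some entries in /dev:", sorted(os.listdir("/dev"))[:5])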

In Windows, the main folders are C:\Windows, C:\Program Files, and C:\Users (for Windows Vista and 7;
C:\Documents and Settings for Windows XP). In Unix, the file system layout is very different, as the
following list shows (a short check of this layout follows the list).

o /boot – Where the boot image files are stored
o /mnt – Mount points for partitions and removable media
o /dev – Device files for all connected devices (disks, USB devices, printers)
o /proc – Dynamic process information
o /sys – Dynamic system and device configuration information
o /var – Log files and other variable system data
o /tmp & /var/spool – Temporary and spooled files
o /home – Users' home directories
o /usr & /usr/local – A parallel tree for user-installed software and administrative tools
o /bin & /sbin – Essential system executables
o /etc – System configuration files
o /usr/share – Common read-only files
o /lib & /usr/include – Shared libraries and system development headers
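As a quick way to see this layout on a real machine, the short sketch below (Python, assuming a typical Linux install; directories not present on a given system are simply reported as missing) checks which of the directories listed above exist:

    # A small check, assuming a typical Linux layout; adjust the list as needed.
    import os

    standard_dirs = ["/boot", "/mnt", "/dev", "/proc", "/sys", "/var",
                     "/tmp", "/home", "/usr", "/bin", "/sbin", "/etc", "/lib"]

    for d in standard_dirs:
        print(f"{d:8} {'present' if os.path.isdir(d) else 'missing'}")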

Comparisons
Here is a list of some other comparisons between UNIX and Windows (a code sketch of one of them follows the list):

o Scripts vs .BAT files
o Shells vs the DOS command window
o /etc config files vs the System Registry
o Shared libraries vs DLLs (Dynamic Link Libraries)
o kill vs Task Manager
o mkfs / newfs vs format and label
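As one concrete example from the list above, the sketch below (Python; the PID value is a hypothetical placeholder) shows how "kill vs Task Manager" looks in code: on Unix a signal is sent to the process, while on Windows the same effect is usually achieved through the taskkill command:

    # A hedged sketch of terminating a process by PID on each platform.
    import os
    import signal
    import subprocess
    import sys

    pid = 12345  # hypothetical PID, for illustration only

    if sys.platform.startswith("win"):
        # Roughly what Task Manager's "End task" does, via the taskkill tool.
        subprocess.run(["taskkill", "/PID", str(pid), "/F"], check=False)
    else:
        # The Unix approach: send SIGTERM and let the process exit cleanly.
        os.kill(pid, signal.SIGTERM)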

Source: MyBestRatedWebHosting
So that is a very short comparison between Windows and UNIX. Just remember, with UNIX, take
things slowly, because unlike Windows, you will not be prompted with “Are you sure you want to do this?”

Unix and Windows: Two Major Classes of Operating Systems

The two have a competitive history and future. Unix has been in use for more than
three decades. It originally rose from the ashes of a failed attempt in the 1960s to
develop a reliable timesharing operating system. A few survivors from Bell Labs did not
give up and developed a system that provided a work environment described as being "of
unusual simplicity, power, and elegance".
Since the 1980s, Unix's main competitor Windows has gained popularity due to the
increasing power of microcomputers with Intel-compatible processors. Windows was, at the
time, the only major OS designed for this type of processor. In recent years,
however, a Unix-like system called Linux, also developed specifically for
microcomputers, has emerged.

It can be obtained for free and is therefore an attractive choice for individuals and
businesses.

On the server front, Unix has been closing in on Microsoft’s market share. In 1999,
Linux scooted past Novell's NetWare to become the No. 2 server operating system
behind Windows NT. In 2001, the market share for the Linux operating system was 25
percent; other Unix flavors held 12 percent.

On the client front, Microsoft currently dominates the operating system market with
over 90% market share.

Because of Microsoft’s aggressive marketing practices, millions of users who have no
idea what an operating system is have been using the Windows operating systems that came
with their PCs. Many others are not aware that there are operating systems other than
Windows. But you are here reading an article about operating systems, which probably
means that you are trying to make conscious OS decisions for home use or for your
organization. In that case, you should at least give Linux/Unix your consideration,
especially if the following is relevant in your environment.

Advantages of Unix

Unix is more flexible and can be installed on many different types of machines, including
mainframe computers, supercomputers, and microcomputers.

Unix is more stable and does not go down as often as Windows does, and therefore
requires less administration and maintenance.

Unix has greater built-in security and permissions features than Windows.

Unix typically makes more efficient use of a machine's processing power than Windows.

Unix is the leader in serving the Web. About 90% of the Internet relies on Unix operating
systems running Apache, the world's most widely used Web server.

Software upgrades from Microsoft often require the user to purchase new or additional
hardware or prerequisite software. That is not the case with Unix.

The mostly free or inexpensive open-source operating systems, such as Linux and
BSD, with their flexibility and control, are very attractive to (aspiring) computer wizards.
Many of the smartest programmers are developing state-of-the-art software free of
charge for the fast growing "open-source movement”.

Unix also inspires novel approaches to software design, such as solving problems by
interconnecting simpler tools instead of creating large monolithic application programs.
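As a hedged illustration of that "connect simpler tools" approach (assuming a Unix-like system where grep, wc and /etc/passwd are available), the Python sketch below chains two small programs through a pipe instead of writing one monolithic routine:

    # A minimal sketch: count bash users by piping grep's output into wc -l,
    # the same way a shell pipeline `grep /bin/bash /etc/passwd | wc -l` works.
    import subprocess

    grep = subprocess.Popen(["grep", "/bin/bash", "/etc/passwd"],
                            stdout=subprocess.PIPE)
    wc = subprocess.Popen(["wc", "-l"], stdin=grep.stdout,
                          stdout=subprocess.PIPE)
    grep.stdout.close()  # let grep receive SIGPIPE if wc exits first
    output, _ = wc.communicate()
    print("Accounts using /bin/bash:", output.decode().strip())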

Remember, no single type of operating system can offer universal answers to all
your computing needs. It is about having choices and making educated decisions.

3.2 Differences Between Unix and Windows


Unix and Windows use completely different paradigms for run-time loading of code.
Before you try to build a module that can be dynamically loaded, be aware of how
your system works.

In Unix, a shared object (.so) file contains code to be used by the program, and also
the names of functions and data that it expects to find in the program. When the file is
joined to the program, all references to those functions and data in the file's code are
changed to point to the actual locations in the program where the functions and data
are placed in memory. This is basically a link operation.

In Windows, a dynamic-link library (.dll) file has no dangling references. Instead, an


access to functions or data goes through a lookup table. So the DLL code does not
have to be fixed up at runtime to refer to the program's memory; instead, the code
already uses the DLL's lookup table, and the lookup table is modified at runtime to
point to the functions and data.

In Unix, there is only one type of library file (.a) which contains code from several
object files (.o). During the link step to create a shared object file (.so), the linker may
find that it doesn't know where an identifier is defined. The linker will look for it in
the object files in the libraries; if it finds it, it will include all the code from that object
file.

In Windows, there are two types of library, a static library and an import library (both
called .lib). A static library is like a Unix .a file; it contains code to be included as
necessary. An import library is basically used only to reassure the linker that a certain
identifier is legal, and will be present in the program when the DLL is loaded. So the
linker uses the information from the import library to build the lookup table for using
identifiers that are not included in the DLL. When an application or a DLL is linked,
an import library may be generated, which will need to be used for all future DLLs
that depend on the symbols in the application or DLL.

Suppose you are building two dynamic-load modules, B and C, which should share
another block of code A. On Unix, you would not pass A.a to the linker
for B.so and C.so; that would cause it to be included twice, so that B and C would
each have their own copy. In Windows, building A.dll will also build A.lib.
You do pass A.lib to the linker for B and C. A.lib does not contain code; it just
contains information which will be used at runtime to access A's code.

In Windows, using an import library is sort of like using "import spam"; it gives you
access to spam's names, but does not create a separate copy. On Unix, linking with a
library is more like "from spam import *"; it does create a separate copy.
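To make the run-time loading concrete, the hedged sketch below uses Python's ctypes module to load a shared object on Unix or a DLL on Windows; the library names (libm.so.6, msvcrt) are typical examples and may differ on a given system:

    # A minimal sketch of run-time loading with ctypes; library names may vary.
    import ctypes
    import sys

    if sys.platform.startswith("win"):
        lib = ctypes.CDLL("msvcrt")      # the Microsoft C runtime DLL
    else:
        lib = ctypes.CDLL("libm.so.6")   # the math shared object on Linux

    # The symbol is looked up at run time, not fixed at build time.
    sqrt = lib.sqrt
    sqrt.restype = ctypes.c_double
    sqrt.argtypes = [ctypes.c_double]
    print(sqrt(2.0))  # prints 1.4142...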

SYSTEM SOFTWARE

System software is a type of computer program that is designed to run a
computer’s hardware and application programs. If we think of the computer
system as a layered model, the system software is the interface between the
hardware and user applications.

The operating system (OS) is the best-known example of system software.
The OS manages all the other programs in a computer.

System software is software on a computer that is designed to
control and work with computer hardware. The two main types of
system software are the operating system and the software installed
with the operating system, often called utility software. The operating
system and utility software typically depend on each other to function
properly.
Some system software is used directly by users and other system software
works in the background. System software can allow users to interact
directly with hardware functionality, like the Device Manager and many of
the utilities found in the Control Panel.
Software that allows users to create documents (e.g. Microsoft Word), edit
pictures (e.g. Adobe Photoshop), browse the Internet (e.g. Microsoft
Internet Explorer), or check their e-mail (e.g. Microsoft Outlook) is
considered application software. Application software, in contrast, does not
interact directly with the computer hardware, but may require one or more
system software components to function properly.

System software is a platform composed of Operating System (OS) programs and services,
including settings and preferences, file libraries and functions used for system applications.
System software also includes device drivers that run basic computer hardware and peripherals.

APPLICATION SOFTWARE

An application program (app or application for short) is a computer program designed to perform
a group of coordinated functions, tasks, or activities for the benefit of the user. Examples of an
application include a word processor, a spreadsheet, an accounting application, a web browser,
a media player, an aeronautical flight simulator, a console game or a photo editor. The collective
term application software refers to all applications as a group.[1] This contrasts with system
software, which is mainly involved with running the computer.
Applications may be bundled with the computer and its system software or published separately, and
may be coded as proprietary, open-source or university projects.[2] Apps built for mobile platforms are
called mobile apps.
Application software is a program or group of programs designed for end users. Computer
programs in general are divided into two classes: system software and application software. While
system software consists of low-level programs that interact with the computer at a basic
level, application software resides above system software and includes applications
such as database programs, word processors and spreadsheets. Application software
may be grouped along with system software or published alone.
Application software may simply be referred to as an application.
Local Area Network (LAN)

A Local Area Network (LAN) is a network that is restricted to a smaller
physical area, e.g. a local office, school, or house. Almost all current
LANs, whether wired or wireless, are based on Ethernet (or, for wireless, Wi-Fi). On a Local Area
Network, data transfer speeds are higher than on a WAN or MAN and can
reach 10 Mbps (Ethernet) or 1 Gbps (Gigabit Ethernet).

A LAN can be implemented in multiple ways; for example, twisted-pair
cables or wireless Wi-Fi based on the IEEE 802.11 standard can be used
for this purpose. One end of a twisted-pair cable is plugged into a switch
using an ‘RJ-45 connector’, while the other end is plugged into a computer or
another network device. Most new routers support the IEEE 802.11 b/g/n standards.
The ‘b’ and ‘g’ variants operate in the 2.4 GHz spectrum, and ‘n’ operates in both 2.4 and
5.0 GHz, which allows better performance and less interference.

Computers and servers (which provide services to other computers, such as printing,
file storage and sharing) can connect to each other via cables or wirelessly within
the same LAN. Wireless access in conjunction with a wired network is made
possible by a Wireless Access Point (WAP). Devices with WAP functionality
provide a bridge between computers and networks, and a single WAP is able to connect
hundreds of wireless users to a network. Servers in a LAN are
usually connected by wire, since wiring is still the fastest medium for network
communication. For workstations (desktops, laptops, etc.), however, a wireless
medium is often the more suitable choice, since it can be difficult and
expensive to add new workstations to an existing system that already has
complex network wiring.
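As a hedged sketch of two machines talking on the same LAN, the Python snippet below starts a small TCP server that any computer on the local network could reach; the port number is an arbitrary example:

    # A minimal LAN server sketch; 0.0.0.0 listens on all local interfaces
    # and port 5000 is an arbitrary example.
    import socket

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(("0.0.0.0", 5000))
        server.listen(1)
        print("Waiting for a client on port 5000 ...")
        conn, addr = server.accept()
        with conn:
            print("Connected from", addr)   # e.g. a 192.168.x.x address
            conn.sendall(b"Hello from the LAN server\n")

A client on the same LAN would connect with socket.create_connection(("<server-address>", 5000)), where <server-address> is the server's local IP address.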

Token Ring and Fiber Distributed Data Interface (FDDI)

Along with Ethernet, ‘Token Ring’ and ‘Fiber Distributed Data Interface (FDDI)’
are also considered major ‘Local Area Network’ technologies. In a Token
Ring network, all computers are connected in a ring or star topology to
prevent data collisions, with data transfer rates of either 4 or 16
megabits per second as defined by the IEEE 802.5 standard. FDDI uses optical fiber for data
transmission, which extends the range of a LAN up to
200 km and supports thousands of users.


Wide Area Network (WAN)


A Wide Area Network is a computer network that covers a relatively large
geographical area, such as a state, province or country. It provides a solution
for companies or organizations operating from distant geographical locations
that want to communicate with each other to share and manage central
data, or for general communication.

A WAN is made up of two or more Local Area Networks (LANs) or
Metropolitan Area Networks (MANs) that are interconnected with each
other, so that users and computers in one location can communicate with users
and computers in other locations.

In a ‘Wide Area Network’, computers are connected through public networks,
such as the telephone system, fiber-optic cables, satellite links or leased
lines. The Internet is the largest WAN in the world. WANs are mostly private
and are built for a particular organization by ‘Internet Service Providers
(ISPs)’, which connect the LANs of the organization to the Internet. WANs are
frequently built using expensive leased lines, with a router connected at each end of the
leased line to extend the network capability across sites.
For lower-cost solutions, a WAN can also be built using ‘circuit switching’ or ‘packet
switching’ methods.

Metropolitan Area Network (MAN)

A Metropolitan Area Network (MAN) is a network that connects two or more
computers, communicating devices or networks in a single network that covers
a geographic area larger than that of even a large ‘Local Area Network’
but smaller than the region covered by a ‘Wide Area Network’. MANs are
mostly built for cities or towns to provide high-speed data connections and are usually
owned by a single large organization.

A Metropolitan Area Network bridges a number of ‘Local Area Networks’
with fiber-optic links that act as a backbone, and provides services
similar to what an Internet Service Provider (ISP) provides to Wide Area
Networks and the Internet.

Major technologies used in MANs are ‘Asynchronous Transfer Mode
(ATM)’, ‘Fiber Distributed Data Interface (FDDI)’ and ‘Switched Multi-
megabit Data Service (SMDS, a connectionless service)’. In most areas,
these technologies are used in place of simple ‘Ethernet’-based
connections. MANs can also bridge Local Area Networks without any cables by
using microwave, radio or infrared laser links, which
transmit data wirelessly.
‘Distributed Queue Dual Bus (DQDB)’ is the IEEE 802.6 standard for data
communication in a Metropolitan Area Network (MAN). Using DQDB,
networks can extend up to 100-160 km and operate at speeds of 44 to
155 Mbps.

Conclusion

A LAN is a private network used in small offices or homes, usually within a 1 km
range, with high data transfer rates and full-time connectivity at
low cost. A WAN covers a large geographical area, for example a country or a
continent. Its data transfer rate is usually lower than a LAN's, but it is
compatible with a variety of access lines and can offer advanced security. A MAN
covers an area bigger than a LAN, within a city or town, and serves as an ISP for
larger LANs. It uses optical fiber or wireless infrastructure to link the LANs,
thereby providing high-speed regional resource sharing.
SDLC

The software development life cycle (SDLC) is a framework defining tasks performed at
each step in the software development process. SDLC is a structure followed by a
development team within the software organization. It consists of a detailed plan
describing how to develop, maintain and replace specific software. The life cycle defines
a methodology for improving the quality of software and the overall development
process.
The software development life cycle is also known as the software development
process.

Software Development Life Cycle (SDLC) is a process used by the software
industry to design, develop and test high-quality software. The SDLC aims
to produce high-quality software that meets or exceeds customer
expectations and reaches completion within time and cost estimates.

 SDLC is the acronym for Software Development Life Cycle.

 It is also called the Software Development Process.

 SDLC is a framework defining tasks performed at each step in the software
development process.

 ISO/IEC 12207 is an international standard for software life-cycle processes. It
aims to be the standard that defines all the tasks required for developing and
maintaining software.

What is SDLC?
SDLC is a process followed for a software project within a software
organization. It consists of a detailed plan describing how to develop,
maintain, replace and alter or enhance specific software. The life cycle
defines a methodology for improving the quality of software and the overall
development process.

A typical Software Development Life Cycle consists of the following stages:

Stage 1: Planning and Requirement Analysis


Requirement analysis is the most important and fundamental stage in the
SDLC. It is performed by the senior members of the team with inputs from
the customer, the sales department, market surveys and domain experts in
the industry. This information is then used to plan the basic project
approach and to conduct a product feasibility study in the economic,
operational and technical areas.

Planning for the quality assurance requirements and identification of the
risks associated with the project is also done in the planning stage. The
outcome of the technical feasibility study is to define the various technical
approaches that can be followed to implement the project successfully with
minimum risk.

Stage 2: Defining Requirements


Once the requirement analysis is done, the next step is to clearly define and
document the product requirements and get them approved by the
customer or the market analysts. This is done through an SRS (Software
Requirement Specification) document, which consists of all the product
requirements to be designed and developed during the project life cycle.

Stage 3: Designing the Product Architecture


The SRS is the reference for product architects to come up with the best
architecture for the product to be developed. Based on the requirements
specified in the SRS, usually more than one design approach for the product
architecture is proposed and documented in a DDS (Design Document
Specification).

This DDS is reviewed by all the important stakeholders and, based on
various parameters such as risk assessment, product robustness, design
modularity, and budget and time constraints, the best design approach is
selected for the product.

A design approach clearly defines all the architectural modules of the
product along with their communication and data flow with the
external and third-party modules (if any). The internal design of all the
modules of the proposed architecture should be clearly defined, down to the
smallest detail, in the DDS.

Stage 4: Building or Developing the Product


In this stage of the SDLC the actual development starts and the product is built.
The programming code is generated as per the DDS during this stage. If the
design was done in a detailed and organized manner, code generation
can be accomplished without much hassle.

Developers must follow the coding guidelines defined by their organization,
and programming tools such as compilers, interpreters and debuggers are
used to generate the code. Different high-level programming languages
such as C, C++, Pascal, Java and PHP are used for coding. The
programming language is chosen according to the type of software being
developed.
Stage 5: Testing the Product
This stage is usually a subset of all the stages, since in modern SDLC
models testing activities are involved throughout the SDLC.
However, this stage refers to the testing-only phase of the product, in which
product defects are reported, tracked, fixed and retested until the product
reaches the quality standards defined in the SRS.
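A hedged sketch of the kind of automated check that is written, run and re-run during this stage is shown below (Python; the add() function stands in for any hypothetical unit under test):

    # A minimal unit-test sketch using Python's unittest module.
    import unittest

    def add(a, b):
        # Hypothetical unit under test.
        return a + b

    class TestAdd(unittest.TestCase):
        def test_add_two_numbers(self):
            # A failure here would be reported, fixed, and the test re-run.
            self.assertEqual(add(2, 3), 5)

    if __name__ == "__main__":
        unittest.main()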

Stage 6: Deployment in the Market and Maintenance


Once the product is tested and ready to be deployed, it is released formally
in the appropriate market. Sometimes product deployment happens in
stages, as per the business strategy of the organization. The product may
first be released in a limited segment and tested in the real business
environment (UAT - User Acceptance Testing).

Then, based on the feedback, the product may be released as it is or with
suggested enhancements in the target market segment. After the
product is released in the market, maintenance is performed for the existing
customer base.

SDLC Models
There are various software development life cycle models defined and
designed which are followed during the software development process.
These models are also referred to as "Software Development Process Models".
Each process model follows a series of steps unique to its type to ensure
success in the process of software development.

The following are the most important and popular SDLC models followed in the
industry:

 Waterfall Model

 Iterative Model

 Spiral Model

 V-Model

 Big Bang Model


Other related methodologies are the Agile Model, the RAD (Rapid Application
Development) Model, and Prototyping Models.
