Information and Communication Technology: Presented By: Jaspreet Kaur
About me:
PhD Scholar at CSE Department in IITJ, Faculty at SPUP
#Researcher #Blogger #Faculty #Consultant
Mail id: kaur.3@iitj.ac.in
OSN: LinkedIn, Facebook, Researchgate, Twitter
What is Information?
What is Data/Information?
Binary
Hexadecimal
ASCII Code
Xml
Data Communication
Offline/Online Data
LAN, MAN, WAN
Network protocols and structure
Information Security
Security policies
Audits
Standards
Laws
Risk assessments and evaluation method
Comparison of manual and electronic storage of data
Manual Storage System vs. Electronic Storage System
Types of Computers
1. Digital Computers
2. Analog Computers
3. Hybrid Computers
Digital Computer
1. Smart phones
2. Desktops
3. Laptops
4. Tablets
Analog Computers:
Analog computers are designed to process analog data. Analog data is continuous data that changes continuously and cannot take discrete values, such as speed, temperature, pressure, and current.
Hybrid Computers:
A hybrid computer has features of both analog and digital computers. It is fast like an analog computer and has the memory and accuracy of a digital computer. It can process both continuous and discrete data.
E.g., real-time systems
Components of Digital Computer
Data Processing
Data in its raw form is not useful to any organization. Data processing is the
method of collecting raw data and translating it into usable information. It is
usually performed in a step-by-step process by a team of data
scientists and data engineers in an organization. The raw data is collected,
filtered, sorted, processed, analyzed, stored and then presented in a readable
format.
Step 2: Preparation
Data preparation or data cleaning is the process of sorting and filtering the raw data to
remove unnecessary and inaccurate data. Raw data is checked for errors, duplication,
miscalculations or missing data, and transformed into a suitable form for further analysis and
processing. This is done to ensure that only the highest quality data is fed into the processing
unit.
Step 3: Input
In this step, the raw data is converted into machine readable form and fed into
the processing unit. This can be in the form of data entry through a keyboard,
scanner or any other input source.
Step 6: Storage
The last step of the data processing cycle is storage, where data and metadata are stored for further use. This allows for quick access and retrieval of information whenever needed, and also allows the stored data to be used directly as input in the next data processing cycle.
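The preparation, input, and storage steps above can be sketched as a tiny pipeline. This is only an illustration: the record fields, function names, and the `storage` dictionary below are invented for the example, not part of any standard API.

```python
# A minimal sketch of the data processing cycle described above
# (preparation -> processing -> storage); all names are illustrative.

def prepare(raw_records):
    """Step 2: remove duplicate records and records with missing fields."""
    seen, cleaned = set(), []
    for rec in raw_records:
        if rec.get("value") is None:
            continue  # drop incomplete data
        key = (rec["id"], rec["value"])
        if key not in seen:  # drop duplicated data
            seen.add(key)
            cleaned.append(rec)
    return cleaned

def process(records):
    """Analyze: compute a simple summary from the cleaned data."""
    values = [r["value"] for r in records]
    return {"count": len(values), "total": sum(values)}

storage = {}  # Step 6: stored results can be reused as input later

raw = [{"id": 1, "value": 10}, {"id": 1, "value": 10},  # duplicate
       {"id": 2, "value": None},                        # missing value
       {"id": 3, "value": 5}]
storage["run-1"] = process(prepare(raw))
print(storage["run-1"])  # {'count': 2, 'total': 15}
```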
Data Representation
Number System
The number system is simply a system to represent or express numbers. There are
various types of number systems and the most commonly used ones are
decimal number system, binary number system, octal number system, and
hexadecimal number system.
Decimal Value
102, 100.001
0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,……….
Binary Values
0,1,10,11,100,101,110,……………………………
Octal Values
Hexadecimal Values
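As a quick illustration of the number systems listed above, Python's built-in conversions show the same decimal value (102, from the slide) in each base:

```python
n = 102  # the decimal value used on the slide

print(bin(n))  # 0b1100110  (binary)
print(oct(n))  # 0o146      (octal)
print(hex(n))  # 0x66       (hexadecimal)

# int() parses a string in a given base, converting back to decimal
assert int("1100110", 2) == 102
assert int("146", 8) == 102
assert int("66", 16) == 102
```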
Computer Network
A network is a set of devices (often referred to as nodes) connected by
communication links. A node can be a computer, printer, or any other device
capable of sending and/or receiving data generated by other nodes on the
network.
1. Message: The message is the information (data) to be communicated. Popular forms of information include text, numbers, pictures, audio, and video.
2. Sender: The sender is the device that sends the data message. It can be a computer, workstation, telephone handset, video camera, and so on.
3. Receiver: The receiver is the device that receives the message. It can be a computer, workstation, telephone handset, television, and so on.
2. Delivery: The system must deliver data to the correct destination. Data
must be received by the intended device or user and only by that device
or user.
3. Accuracy: The system must deliver the data accurately. Data that have
been altered in transmission and left uncorrected are unusable.
4. Timeliness: The system must deliver data in a timely manner. Data delivered
late are useless.
5. Jitter: Jitter refers to the variation in the packet arrival time. It is the uneven
delay in the delivery of audio or video packets.
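Jitter, as defined above, can be made concrete with a short sketch; the arrival times are made-up sample values, and the spread of the inter-arrival gaps is just one simple way to measure the variation.

```python
# Illustrative calculation of jitter as the variation in packet
# arrival times (uneven inter-arrival gaps), per the definition above.
arrivals = [0.0, 20.0, 41.0, 60.0, 83.0]  # packet arrival times in ms

gaps = [b - a for a, b in zip(arrivals, arrivals[1:])]
print(gaps)  # [20.0, 21.0, 19.0, 23.0] -- uneven gaps mean jitter

# One simple measure: the spread between the largest and smallest gap
jitter = max(gaps) - min(gaps)
print(jitter, "ms of variation")  # 4.0 ms of variation
```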
LAN (Local Area Network)
•LAN is used for connecting two or more personal computers through a communication medium, either wired or wireless.
•It is less costly as it is built with inexpensive hardware such as hubs, access points,
network adapters, and ethernet cables.
Wired Personal Area Network: A wired personal area network is created using USB.
MAN (Metropolitan Area Network)
• Peer-To-Peer network
• Client/Server network
Peer-To-Peer network
Advantages of Star topology
Network control: Complex network control features can be easily implemented in the star topology. Any changes made in the star topology are automatically accommodated.
Limited failure: As each station is connected to the central hub with its own cable, therefore
failure in one cable will not affect the entire network.
Easily expandable: It is easily expandable as new stations can be added to the open ports
on the hub.
Cost effective: Star topology networks are cost-effective, as they use inexpensive cable.
High data speeds: It supports a bandwidth of approximately 100 Mbps; Ethernet 100BaseT is one of the most popular star topology networks.
Disadvantages of Star topology
Advantages of Tree topology
Support for broadband transmission: Tree topology is mainly used to provide broadband transmission, i.e., signals are sent over long distances without being attenuated.
Easily expandable: We can add the new device to the existing network. Therefore, we can say that tree
topology is easily expandable.
Easily manageable: In tree topology, the whole network is divided into segments known as star networks
which can be easily managed and maintained.
Error detection: Error detection and error correction are very easy in a tree topology.
Limited failure: The breakdown in one station does not affect the entire network.
Point-to-point wiring: It has point-to-point wiring for individual segments.
Disadvantages of Tree topology
Advantages of Mesh topology
Reliable: Mesh topology networks are very reliable, as the breakdown of any single link does not affect communication between the connected computers.
Fast Communication: Communication is very fast between the
nodes.
Easier Reconfiguration: Adding new devices would not disrupt the
communication between other devices.
Disadvantages of Mesh topology
Cost: A mesh topology requires a large number of connected devices, such as routers, and more transmission media than other topologies.
Management: Mesh topology networks are very large and very
difficult to maintain and manage. If the network is not monitored
carefully, then the communication link failure goes undetected.
Efficiency: This topology has many redundant connections, which reduces the efficiency of the network.
Hybrid Topology
The combination of various different topologies is known as Hybrid topology.
A Hybrid topology is a connection between different links and nodes to
transfer the data.
When two or more different topologies are combined, the result is termed a hybrid topology; connecting similar topologies to each other does not produce a hybrid topology. For example, if a ring topology exists in one branch of ICICI Bank and a bus topology in another branch, connecting these two topologies results in a hybrid topology.
Advantages of Hybrid Topology
Reliable: A fault in any one part of the network does not affect the functioning of the rest of the network.
Scalable: Size of the network can be easily expanded by adding
new devices without affecting the functionality of the existing
network.
Flexible: This topology is very flexible as it can be designed
according to the requirements of the organization.
Effective: Hybrid topology is very effective as it can be designed in
such a way that the strength of the network is maximized and
weakness of the network is minimized.
Disadvantages of Hybrid topology
The way in which data is transmitted from one device to another device is
known as transmission mode. The transmission mode is also known as the
communication mode.
Simplex mode
Half-duplex mode
Full-duplex mode
Simplex mode
•In Simplex mode, the communication is unidirectional, i.e., the data flow in one
direction.
•A device can only send data but cannot receive it, or it can only receive data but cannot send it.
•The simplex mode is used in the business field as in sales that do not require any
corresponding reply.
•The radio station is a simplex channel as it transmits the signal to the listeners but
never allows them to transmit back.
•Keyboard and Monitor are the examples of the simplex mode as a keyboard can only
accept the data from the user and monitor can only be used to display the data on the
screen.
Advantage of Simplex mode:
In simplex mode, the station can utilize the entire bandwidth of the
communication channel, so that more data can be transmitted at
a time.
Disadvantage of Simplex mode:
Communication is unidirectional, so it has no inter-communication
between devices.
Half-Duplex mode
•In a Half-duplex channel, direction can be reversed, i.e., the station can transmit and receive the data
as well.
•Messages flow in both the directions, but not at the same time.
•The entire bandwidth of the communication channel is utilized in one direction at a time.
•In half-duplex mode, it is possible to perform the error detection, and if any error occurs, then the
receiver requests the sender to retransmit the data.
•A Walkie-talkie is an example of the Half-duplex mode. In Walkie-talkie, one party speaks, and
another party listens. After a pause, the other speaks and first party listens. Speaking simultaneously
will create the distorted sound which cannot be understood.
Advantage of Half-duplex mode:
In half-duplex mode, both the devices can send and receive the
data and also can utilize the entire bandwidth of the
communication channel during the transmission of data.
Disadvantage of Half-Duplex mode:
In half-duplex mode, when one device is sending data, the other has to wait; this delays the delivery of data at the right time.
Full-duplex mode
In full-duplex mode, the communication is bidirectional, i.e., data flows in both directions.
Both the stations can send and receive the message simultaneously.
Full-duplex mode has two simplex channels. One channel has traffic moving in one direction,
and another channel has traffic flowing in the opposite direction.
The Full-duplex mode is the fastest mode of communication between devices.
The most common example of the full-duplex mode is a telephone network. When two
people are communicating with each other by a telephone line, both can talk and listen at
the same time.
Advantage of Full-duplex mode:
Both the stations can send and receive the data at the same time.
Disadvantage of Full-duplex mode:
If no dedicated path exists between the devices, the capacity of the communication channel is divided into two parts.
Computer Network Components
Computer network components are the major hardware parts needed to set up a network. Some important network components are the NIC, switch, cable, hub, bridge, and router. Depending on the type of network we need to install, some network components can be omitted; for example, a wireless network does not require cables.
1. NIC: NIC stands for network interface card. A NIC is a hardware component used to connect a computer to a network. It can support transfer rates of 10, 100, or 1000 Mb/s. The MAC address, or physical address, is encoded on the network card chip and is assigned by the IEEE to identify a network card uniquely.
Wired NIC: The Wired NIC is present inside the motherboard. Cables and connectors are used with
wired NIC to transfer data.
Wireless NIC: The wireless NIC contains the antenna to obtain the connection over the wireless
network. For example, laptop computer contains the wireless NIC.
2.Cables:
Cable is a transmission media used for transmitting a signal. There are
mainly three types of cables used in transmission: Twisted pair cable,
Coaxial cable, Fibre-optic cable.
6. Switch:
A switch is a multiport bridge. A switch is a hardware device that connects multiple devices on a computer network. A switch has more advanced features than a hub. The switch maintains an up-to-date table that decides where the data should be transmitted. The switch delivers the message to the correct destination based on the physical address present in the incoming message. A switch does not broadcast the message to the entire network like a hub; it determines the device to which the message is to be transmitted. Therefore, we can say that a switch provides a direct connection between the source and the destination. It increases the speed of the network.
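A toy model makes the switch behaviour concrete; the MAC addresses and port numbers below are made up. The switch's table maps each known physical address to a port, so a frame goes only to the right port, while an unknown destination is flooded to all ports (hub-like behaviour).

```python
# Sketch of a switch's MAC address table; addresses/ports are invented.
mac_table = {
    "AA:AA:AA:AA:AA:01": 1,
    "AA:AA:AA:AA:AA:02": 2,
}

def forward(dest_mac, ports=(1, 2, 3, 4)):
    """Return the list of ports a frame is sent out of."""
    if dest_mac in mac_table:
        return [mac_table[dest_mac]]  # direct delivery to one port
    return list(ports)                # unknown destination: flood

print(forward("AA:AA:AA:AA:AA:02"))  # [2]
print(forward("AA:AA:AA:AA:AA:99"))  # [1, 2, 3, 4] (flooded)
```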
7. Router
•A router is a hardware device which is used to connect a LAN with an internet connection. It is used to receive,
analyze and forward the incoming packets to another network.
•A router works at Layer 3 (the network layer) of the OSI reference model.
•A router forwards the packet based on the information available in the routing table.
•It determines the best path from the available paths for the transmission of the packet.
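The routing-table lookup and best-path selection described above can be sketched with the standard-library ipaddress module. The routes and interface names are invented for illustration, and "best path" is reduced here to the common longest-prefix-match rule.

```python
# Sketch: forward a packet using the most specific matching route.
import ipaddress

routing_table = [
    (ipaddress.ip_network("10.0.0.0/8"), "eth0"),
    (ipaddress.ip_network("10.1.0.0/16"), "eth1"),
    (ipaddress.ip_network("0.0.0.0/0"), "eth2"),  # default route
]

def best_route(dst):
    """Return the interface of the longest-prefix route matching dst."""
    addr = ipaddress.ip_address(dst)
    matches = [(net, iface) for net, iface in routing_table if addr in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(best_route("10.1.2.3"))  # eth1 (the /16 is more specific than /8)
print(best_route("10.9.0.1"))  # eth0
print(best_route("8.8.8.8"))   # eth2 (falls through to the default)
```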
OSI Model
•OSI (Open Systems Interconnection) is a reference model that describes how information from a software application on one computer moves through a physical medium to a software application on another computer.
•OSI consists of seven layers, and each layer performs a particular network
function.
•OSI model divides the whole task into seven smaller and manageable tasks.
Each layer is assigned a particular task.
2. Data-Link Layer: This layer is responsible for the error-free transfer of data frames. It defines the format
of the data on the network. It provides a reliable and efficient communication between two or more
devices. It is mainly responsible for the unique identification of each device that resides on a local
network.
4. Transport Layer: The transport layer (Layer 4) ensures that messages are delivered in the order in which they are sent and that there is no duplication of data. The main responsibility
of the transport layer is to transfer the data completely. It receives the data from the upper
layer and converts them into smaller units known as segments. This layer can be termed as
an end-to-end layer as it provides a point-to-point connection between source and
destination to deliver the data reliably.
Functions of Transport Layer:
-Service-point addressing: Computers run several programs simultaneously; for this reason, data must be transmitted not only from one computer to another but also from one process to another. The transport layer adds a header that contains an address known as a service-point address, or port address. The network layer is responsible for transmitting data from one computer to another, while the transport layer is responsible for delivering the message to the correct process.
-Segmentation and reassembly: When the transport layer receives a message from the upper layer, it divides the message into multiple segments, and each segment is assigned a sequence number that uniquely identifies it. When the message arrives at the destination, the transport layer reassembles it based on the sequence numbers.
-Connection control: The transport layer provides two services: connection-oriented service and connectionless service. A connectionless service treats each segment as an individual packet, and the packets may travel along different routes to reach the destination. A connection-oriented service establishes a connection with the transport layer at the destination machine before delivering the packets; in a connection-oriented service, all the packets travel along a single route.
-Flow control: The transport layer is also responsible for flow control, but it is performed end to end rather than across a single link.
-Error control: The transport layer is also responsible for error control, which is likewise performed end to end rather than across a single link. The sending transport layer ensures that the message reaches the destination without error.
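The segmentation-and-reassembly function above can be sketched in a few lines: segments carry sequence numbers, may arrive out of order, and are reassembled by sorting on those numbers. The message and segment size are arbitrary examples.

```python
# Minimal illustration of transport-layer segmentation and reassembly.
import random

def segment(message, size):
    """Split a message into (sequence_number, data) segments."""
    return [(i, message[off:off + size])
            for i, off in enumerate(range(0, len(message), size))]

def reassemble(segments):
    """Rebuild the message by sorting segments on their sequence numbers."""
    return "".join(data for _, data in sorted(segments))

segs = segment("hello transport layer", 4)
random.shuffle(segs)       # segments may arrive out of order
print(reassemble(segs))    # hello transport layer
```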
5. Session Layer: The session layer is used to establish, maintain, and synchronize the interaction between communicating devices.
Functions of Session layer:
-Dialog control: The session layer acts as a dialog controller that creates a dialog between two processes; that is, it allows communication between two processes, which can be either half-duplex or full-duplex.
-Synchronization: The session layer adds checkpoints when transmitting data in a sequence. If an error occurs in the middle of the transmission, the transmission resumes from the last checkpoint. This process is known as synchronization and recovery.
6. Presentation Layer: A Presentation layer is mainly concerned with the syntax and semantics of
the information exchanged between the two systems. It acts as a data translator for a network.
This layer is a part of the operating system that converts the data from one presentation format
to another format. The Presentation layer is also known as the syntax layer.
Functions of Presentation layer:
-Translation: The processes in two systems exchange information in the form of character strings, numbers, and so on. Different computers use different encoding methods; the presentation layer handles the interoperability between them. It converts data from the sender-dependent format into a common format and changes the common format into the receiver-dependent format at the receiving end.
-Encryption: Encryption is needed to maintain privacy. Encryption is the process of converting the sender's information into another form and sending the resulting message over the network.
-Compression: Data compression is a process of compressing the data, i.e., it reduces the number
of bits to be transmitted. Data compression is very important in multimedia such as text, audio,
video.
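The three presentation-layer functions above can each be illustrated with the Python standard library. Note that ROT13 below is only a stand-in for real encryption, not a secure cipher; the text and encodings are arbitrary examples.

```python
# Illustrative presentation-layer tasks: translation, "encryption",
# and compression, all using only the standard library.
import codecs
import zlib

text = "presentation layer"

# Translation: sender-dependent encoding -> common (Unicode) format ->
# receiver-dependent encoding at the other end
latin1_bytes = text.encode("latin-1")    # sender's format
common = latin1_bytes.decode("latin-1")  # common format
utf16_bytes = common.encode("utf-16")    # receiver's format

# "Encryption": ROT13 merely stands in for a real cipher here
scrambled = codecs.encode(text, "rot_13")
print(scrambled)  # cerfragngvba ynlre

# Compression: reduce the number of bits to be transmitted,
# recoverable without loss at the receiver
packed = zlib.compress(text.encode() * 100)
assert zlib.decompress(packed).decode() == text * 100
print(len(text.encode() * 100), "bytes ->", len(packed), "bytes")
```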
7. Application Layer: An application layer serves as a window for users and application
processes to access network service. It handles issues such as network transparency, resource
allocation, etc. An application layer is not an application, but it performs the application layer
functions. This layer provides the network services to the end-users.
Functions of Application layer:
• File transfer, access, and management (FTAM): An application layer allows
a user to access the files in a remote computer, to retrieve the files from a
computer and to manage the files in a remote computer.
• Mail services: An application layer provides the facility for email forwarding
and storage.
UDP: UDP (User Datagram Protocol) is a communications protocol that is primarily used
for establishing low-latency and loss-tolerating connections between applications on the
internet. It speeds up transmissions by enabling the transfer of data before an agreement is provided by the receiving party (connectionless).
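The connectionless exchange can be seen with Python's socket module: the sender simply transmits a datagram without any prior handshake. This is a loopback-only sketch without error handling, not production networking code.

```python
# Sketch of UDP's connectionless exchange over the local loopback.
import socket

recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))      # let the OS pick a free port
port = recv.getsockname()[1]

send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"hello", ("127.0.0.1", port))  # no connection set up first

data, addr = recv.recvfrom(1024)
print(data)                       # b'hello'
send.close()
recv.close()
```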
IP: IP stands for "Internet Protocol," which is the set of rules governing the format of data
sent via the internet or local network. In essence, IP addresses are the identifier that allows
information to be sent between devices on a network: they contain location information
and make devices accessible for communication.
IP Addressing
Communication between hosts can happen only if they can identify each other on the network.
In a single collision domain (where every packet sent on the segment by one host is heard by
every other host) hosts can communicate directly via MAC address.
A MAC address is a factory-coded 48-bit hardware address that can also uniquely identify a host. But if a host wants to communicate with a remote host, i.e. one not in the same segment or not logically connected, then some means of addressing is required to identify the remote host uniquely. A logical address is given to all hosts connected to the Internet, and this logical address is called the Internet Protocol address.
Internet Protocol is one of the major protocols in the TCP/IP protocol suite. This protocol works at the network layer of the OSI model and at the Internet layer of the TCP/IP model. Thus this protocol has the responsibility of identifying hosts based upon their logical addresses and of routing data among them over the underlying network.
IP provides a mechanism to uniquely identify hosts by an IP addressing scheme. IP uses best
effort delivery, i.e. it does not guarantee that packets would be delivered to the destined host,
but it will do its best to reach the destination. Internet Protocol version 4 uses a 32-bit logical address.
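The 32-bit nature of an IPv4 address is easy to verify with the standard-library ipaddress module; the address below is just an arbitrary private-range example. The familiar dotted-decimal form is only a notation for an underlying 32-bit number.

```python
# An IPv4 address is a 32-bit number; dotted decimal is just notation.
import ipaddress

addr = ipaddress.IPv4Address("192.168.1.10")
print(int(addr))     # 3232235786 -- the underlying 32-bit value
print(addr.packed)   # b'\xc0\xa8\x01\n' -- the same value as 4 raw bytes
print(ipaddress.IPv4Address(3232235786))  # back to 192.168.1.10
```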
Information system
An information system is an integrated set of components for collecting, storing, and processing data and for providing information, knowledge,
and digital products. Business firms and other organizations rely on information systems to carry out and manage their operations, interact with
their customers and suppliers, and compete in the marketplace. Information systems are used to run interorganizational supply chains and
electronic markets. For instance, corporations use information systems to process financial accounts, to manage their human resources, and to
reach their potential customers with online promotions. Many major companies are built entirely around information systems. These
include eBay, a largely auction marketplace; Amazon, an expanding electronic mall and provider of cloud computing services; Alibaba, a
business-to-business e-marketplace; and Google, a search engine company that derives most of its revenue from keyword advertising
on Internet searches. Governments deploy information systems to provide services cost-effectively to citizens. Digital goods—such as electronic
books, video products, and software—and online services, such as gaming and social networking, are delivered with information systems.
Individuals rely on information systems, generally Internet-based, for conducting much of their personal lives: for socializing, study, shopping,
banking, and entertainment.
As major new technologies for recording and processing information were invented over the millennia, new capabilities appeared, and
people became empowered. The invention of the printing press by Johannes Gutenberg in the mid-15th century and the invention of a
mechanical calculator by Blaise Pascal in the 17th century are but two examples. These inventions led to a profound revolution in the ability to
record, process, disseminate, and reach for information and knowledge. This led, in turn, to even deeper changes in individual lives, business
organization, and human governance.
The first large-scale mechanical information system was Herman Hollerith’s census tabulator. Invented in time to process the 1890 U.S. census,
Hollerith’s machine represented a major step in automation, as well as an inspiration to develop computerized information systems.
One of the first computers used for such information processing was the UNIVAC I, installed at the U.S. Bureau of the Census in 1951 for
administrative use and at General Electric in 1954 for commercial use. Beginning in the late 1970s, personal computers brought some of the
advantages of information systems to small businesses and to individuals. Early in the same decade the Internet began its expansion as the
global network of networks. In 1991 the World Wide Web, invented by Tim Berners-Lee as a means to access the interlinked information stored
in the globally dispersed computers connected by the Internet, began operation and became the principal service delivered on the network.
The global penetration of the Internet and the Web has enabled access to information and other resources and facilitated the forming of
relationships among people and organizations on an unprecedented scale. The progress of electronic commerce over the Internet has
resulted in a dramatic growth in digital interpersonal communications (via e-mail and social networks), distribution of products (software, music,
e-books, and movies), and business transactions (buying, selling, and advertising on the Web). With the worldwide spread
of smartphones, tablets, laptops, and other computer-based mobile devices, all of which are connected by wireless communication networks,
information systems have been extended to support mobility as the natural human condition.
As information systems enabled more diverse human activities, they exerted a profound influence over society. These systems quickened the
pace of daily activities, enabled people to develop and maintain new and often more-rewarding relationships, affected the structure and mix
of organizations, changed the type of products bought, and influenced the nature of work. Information and knowledge became vital
economic resources. Yet, along with new opportunities, the dependence on information systems brought new threats. Intensive
industry innovation and academic research continually develop new opportunities while aiming to contain the threats.
Components of information systems
1. Computer hardware
2. Computer software
3. Telecommunications
4. Databases and data warehouses
5. Human resources and procedures
Types of information systems
Information systems support operations, knowledge work, and management in
organizations. (The overall structure of organizational information systems is shown in
the figure.) Functional information systems that support a specific organizational function,
such as marketing or production, have been supplanted in many cases by cross-
functional systems built to support complete business processes, such as order processing
or employee management. Such systems can be more effective in the development
and delivery of the firm’s products and can be evaluated more closely with respect to
the business outcomes. The information-system categories described here may
be implemented with a great variety of application programs.
Structure of organizational information systems: Information
systems consist of three layers: operational support, support of
knowledge work, and management support. Operational
support forms the base of an information system and contains
various transaction processing systems for designing, marketing,
producing, and delivering products and services. Support of
knowledge work forms the middle layer; it contains subsystems
for sharing information within an organization. Management
support, forming the top layer, contains subsystems for
managing and evaluating an organization's resources and
goals.
Operational support and enterprise systems
Transaction processing systems support the operations through
which products are designed, marketed, produced, and delivered.
In larger organizations, transaction processing is frequently
accomplished with large integrated systems known as enterprise
systems. In this case, the information systems that support various
functional units—sales and marketing, production, finance, and
human resources—are integrated into an enterprise resource
planning (ERP) system, the principal kind of enterprise system. ERP
systems support the value chain—that is, the entire sequence of
activities or processes through which a firm adds value to its
products. For example, an individual or another business may submit
a custom order over the Web that automatically initiates just-in-time
production to the customer’s specifications through an approach
known as mass customization. This involves sending orders from the
customers to the firm’s warehouses and perhaps to suppliers to
deliver input materials just in time for a batched custom production
run. Financial accounts are updated accordingly, and
delivery logistics and billing are initiated.
Along with helping to integrate a firm’s own value chain, transaction processing systems can also serve to
integrate the overall supply chain of which the organization is a part. This includes all firms involved in
designing, producing, marketing, and delivering the goods and services—from raw materials to the final
delivery of the product. A supply chain management (SCM) system manages the flow of products, data,
money, and information throughout the entire supply chain, which starts with the suppliers of raw materials,
runs through the intermediate tiers of the processing companies, and ends with the distributors and retailers.
For example, purchasing an item at a major retail store generates more than a cash register receipt: it also
automatically sends a restocking order to the appropriate supplier, which in turn may call for orders to the
supplier’s suppliers. With an SCM system, suppliers can also access a retailer’s inventory database over the
Web to schedule efficient and timely deliveries in appropriate quantities.
The third type of enterprise system, customer relationship management (CRM), supports dealing with the
company’s customers in marketing, sales, service, and new product development. A CRM system gives a
business a unified view of each customer and its dealings with that customer, enabling a consistent
and proactive relationship. In cocreation initiatives, the customers may be involved in the development of
the company’s new products.
Many transaction processing systems support electronic commerce over the Internet. Among these are
systems for online shopping, banking, and securities trading. Other systems deliver information, educational
services, and entertainment on demand. Yet other systems serve to support the search for products with
desired attributes (for example, keyword search on search engines), price discovery (via an auction, for
example), and delivery of digital products (such as software, music, movies, or greeting cards). Social
network sites, such as Facebook and LinkedIn, are a powerful tool for supporting customer communities and
individuals as they articulate opinions, evolve new ideas, and are exposed to promotional messages. A
growing array of specialized services and information-based products are offered by various organizations on
the Web, as an infrastructure for electronic commerce has emerged on a global scale.
Transaction processing systems accumulate the data in databases and data warehouses that are necessary
for the higher-level information systems. Enterprise systems also provide software modules needed to perform
many of these higher-level functions.
Support of knowledge work
A large proportion of work in an information society involves manipulating abstract information and knowledge (understood in this context as an
organized and comprehensive structure of facts, relationships, theories, and insights) rather than directly processing, manufacturing, or
delivering tangible materials. Such work is called knowledge work. Three general categories of information systems support such knowledge work:
professional support systems, collaboration systems, and knowledge management systems.
Professional support systems
Professional support systems offer the facilities needed to perform tasks specific to a given profession. For example, automotive engineers
use computer-aided engineering (CAE) software together with virtual reality systems to design and test new models as electronic prototypes for
fuel efficiency, handling, and passenger protection before producing physical prototypes, and later they use CAE in the design and analysis of
physical tests. Biochemists use specialized three-dimensional modeling software to visualize the molecular structure and probable effect of new
drugs before investing in lengthy clinical tests. Investment bankers often employ financial software to calculate the expected rewards and
potential risks of various investment strategies. Indeed, specialized support systems are now available for most professions.
Collaboration systems
The main objectives of collaboration systems are to facilitate communication and teamwork among the members of an organization and across
organizations. One type of collaboration system, known as a workflow system, is used to route relevant documents automatically to all
appropriate individuals for their contributions.
Development, pricing, and approval of a commercial insurance policy is a process that can benefit from such a system. Another category of
collaboration systems allows different individuals to work simultaneously on a shared project. Known as groupware, such systems accomplish this
by allowing controlled shared access, often over an intranet, to the work objects, such as business proposals, new designs, or digital products in
progress. The collaborators can be located anywhere in the world, and, in some multinational companies, work on a project continues 24 hours a
day.
Other types of collaboration systems include enhanced e-mail and videoconferencing systems, sometimes with telepresence using avatars of the
participants. Yet another type of collaboration software, known as wiki, enables multiple participants to add and edit content. (Some online
encyclopaedias are produced on such platforms.) Collaboration systems can also be established on social network platforms or virtual life
systems. In the open innovation initiative, members of the public, as well as existing and potential customers, can be drawn in, if desired, to
enable the cocreation of new products or projection of future outcomes.
Knowledge management systems
Knowledge management systems provide a means to assemble and act on the knowledge accumulated throughout an organization. Such
knowledge may include the texts and images contained in patents, design methods, best practices, competitor intelligence, and similar sources,
with the elaboration and commentary included. Placing the organization’s documents and communications in an indexed and cross-referenced
form enables rich search capabilities. Numerous application programs, such as Microsoft’s SharePoint, exist to facilitate the implementation of
such systems. Organizational knowledge is often tacit, rather than explicit, so these systems must also direct users to members of the organization
with special expertise.
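The indexed, cross-referenced document store described above can be sketched as a minimal inverted index; the document names and contents below are invented for illustration.

```python
from collections import defaultdict

def build_index(documents):
    """Map each word to the set of document names containing it."""
    index = defaultdict(set)
    for name, text in documents.items():
        for word in text.lower().split():
            index[word].add(name)
    return index

def search(index, query):
    """Return the documents containing every word of the query."""
    words = query.lower().split()
    results = [index.get(w, set()) for w in words]
    return set.intersection(*results) if results else set()

docs = {
    "patent-001": "method for battery cooling in electric vehicles",
    "best-practice-7": "vehicle battery maintenance checklist",
}
idx = build_index(docs)
print(search(idx, "battery"))  # both documents match
```

A real knowledge management system adds stemming, ranking, and access control on top of this core idea.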
Management support
A large category of information systems comprises those designed to support the management of an organization. These systems rely on the data obtained by transaction
processing systems, as well as on data and information acquired outside the organization (on the Web, for example) and provided by business partners, suppliers, and
customers.
Management reporting systems
Information systems support all levels of management, from those in charge of short-term schedules and budgets for small work groups to those concerned with long-term
plans and budgets for the entire organization. Management reporting systems provide routine, detailed, and voluminous information reports specific to each manager’s areas
of responsibility. These systems are typically used by first-level supervisors. Generally, such reports focus on past and present activities, rather than projecting future
performance. To prevent information overload, reports may be automatically sent only under exceptional circumstances or at the specific request of a manager.
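The exception-reporting idea above can be sketched in a few lines: a report is produced only for line items that deviate from budget beyond a tolerance. The item names, figures, and 10% threshold are invented for illustration.

```python
def exception_report(actuals, budgets, tolerance=0.10):
    """Report only line items deviating from budget by more than the tolerance."""
    report = []
    for item, actual in actuals.items():
        budget = budgets[item]
        deviation = (actual - budget) / budget
        if abs(deviation) > tolerance:
            report.append((item, actual, budget, round(deviation, 2)))
    return report

actuals = {"labour": 10500, "materials": 8000, "overtime": 4500}
budgets = {"labour": 10000, "materials": 8100, "overtime": 3000}
for line in exception_report(actuals, budgets):
    print(line)  # only "overtime" exceeds the 10% tolerance
```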
Decision support systems and business intelligence
All information systems support decision making, however indirectly, but decision support systems are expressly designed for this purpose. As these systems are increasingly
being developed to analyze massive collections of data (known as big data), they are becoming known as business intelligence, or business analytics, applications. The two
principal varieties of decision support systems are model-driven and data-driven.

In a model-driven decision support system, a preprogrammed model is applied to a relatively limited data set, such as a sales database for the present quarter. During a
typical session, an analyst or sales manager will conduct a dialog with this decision support system by specifying a number of what-if scenarios. For example, in order to
establish a selling price for a new product, the sales manager may use a marketing decision support system. It contains a model relating various factors—the price of the
product, the cost of goods, and the promotion expense in various media—to the projected sales volume over the first five years on the market. By supplying different product
prices to the model, the manager can compare predicted results and select the most profitable selling price.

The primary objective of data-driven business intelligence systems is to analyze large pools of data, accumulated over long periods of time in data warehouses, in a
process known as data mining. Data mining aims to discover significant patterns, such as sequences (buying a new house, followed by a new dinner table), clusters, and
correlations (large families and van sales), with which decisions can be made. Predictive analytics attempts to forecast future outcomes based on the discovered trends.
Data-driven decision support systems include a variety of statistical models and may rely on various artificial intelligence techniques, such as expert systems, neural networks,
and machine learning. In addition to mining numeric data, text mining is conducted on large aggregates of unstructured data, such as the contents of social media that
include social networks, wikis, blogs, and microblogs. As used in electronic commerce, for example, text mining helps in finding buying trends, targeting advertisements, and
detecting fraud.

An important variety of decision support systems enables a group of decision makers to work together without necessarily being in the same place at the
same time. These group decision systems include software tools for brainstorming and reaching consensus.
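The what-if dialog described above can be sketched as a toy model-driven DSS. The linear demand model and all of its coefficients are invented for illustration; a real marketing DSS would use a far richer model.

```python
def projected_profit(price, unit_cost=20.0, promotion=50_000.0,
                     base_demand=100_000, sensitivity=2_000):
    """Toy linear demand model: a higher price lowers projected unit sales."""
    units = max(0, base_demand - sensitivity * price)
    return units * (price - unit_cost) - promotion

# What-if session: compare candidate selling prices and pick the best.
candidates = [25, 30, 35, 40]
for p in candidates:
    print(p, round(projected_profit(p)))
best = max(candidates, key=projected_profit)
print("most profitable price:", best)  # 35 under these assumptions
```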
Another category, geographic information systems, can help analyze and display data by using digitized maps. Digital mapping of various regions is a continuing activity of
numerous business firms. Such data visualization supports rapid decision making. By looking at a geographic distribution of mortgage loans, for example, one can easily
establish a pattern of discrimination.
Executive information systems
Executive information systems make a variety of critical information readily available in a highly summarized and convenient form, typically via a graphical digital dashboard.
Senior managers characteristically employ many informal sources of information, however, so that formal, computerized information systems are only of partial assistance.
Nevertheless, this assistance is important for the chief executive officer, senior and executive vice presidents, and the board of directors to monitor the performance of the
company, assess the business environment, and develop strategic directions for the future. In particular, these executives need to compare their organization’s performance
with that of its competitors and investigate general economic trends in regions or countries. Often individualized and relying on multiple media formats, executive information
systems give their users an opportunity to “drill down” from summary information to increasingly focused details.
ARPANET
The Advanced Research Projects Agency Network (ARPANET) was the first wide-area packet-switched
network with distributed control and one of the first networks to implement the TCP/IP protocol suite. Both
technologies became the technical foundation of the Internet. The ARPANET was established by
the Advanced Research Projects Agency (ARPA) of the United States Department of Defense.
It was first used in 1969 and finally decommissioned in 1990. ARPANET's main use was for academic and
research purposes.
Many of the protocols used by computer networks today were developed for ARPANET, and it is considered
the forerunner of the modern internet.
Developments leading to ARPANET
ARPANET and the subsequent computer networks leading to the internet were not the product of a single
individual or organization, nor were they formed at one time. Instead, the ideas and initial research of
many people over many years formed the basis of ARPANET and built it into the
forerunner of the internet.
In the 1960s, computers were large mainframe systems. They were very expensive and were only owned by
large companies, universities and governments. Users would sit at dedicated terminals, such
as teletype machines, and run programs on the connected mainframe. Connections between computers
were made over dedicated links. These systems were highly centralized and fault-prone.
This was during the height of the Cold War. The U.S. military was interested in creating
computer networks that could continue to function after having portions removed,
such as in the case of a nuclear strike. Similarly, universities were looking to develop a
network that could be fault-tolerant over unreliable connections and could be used
to share data and computing resources between users at different locations.
In the early 1960s, Paul Baran, working for the U.S. think tank Rand Corporation,
developed the concept of distributed adaptive message block switching. This would
enable small groups of data to be sent along differing paths to the destination. This
idea eventually became packet communication that underlies almost all data
communication today. At that time, though, it was not implemented.
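Baran's message-block idea can be sketched as follows: a message is split into small numbered blocks that may travel different paths and arrive out of order, and the destination reassembles them by sequence number. The block size and tuple format here are invented for illustration.

```python
import random

def split_into_blocks(message: bytes, block_size: int = 8):
    """Split a message into numbered blocks: (sequence number, payload)."""
    return [(seq, message[i:i + block_size])
            for seq, i in enumerate(range(0, len(message), block_size))]

def reassemble(blocks):
    """Rebuild the message from blocks arriving in any order."""
    return b"".join(payload for _, payload in sorted(blocks))

msg = b"distributed adaptive message block switching"
blocks = split_into_blocks(msg)
random.shuffle(blocks)  # blocks may arrive out of order over different paths
assert reassemble(blocks) == msg
```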
Joseph C.R. Licklider became the director of ARPA's Information Processing
Techniques Office (IPTO) in 1962. He was a major proponent of human-computer
interaction and using computers to help people make better decisions. His influence
led ARPA to develop its network and other innovations, such as graphical user
interfaces. In 1966, Robert (Bob) Taylor became the director of IPTO. He credits the
idea of ARPANET to the fact that he had three different computer terminals
connected to three mainframe computers in his office that he would need to move
between. This led to the obvious question: Why can't one terminal be used for any
computer?
History of ARPANET
Development of ARPANET began in 1966. Several standards were developed. The Network Control Program (NCP) would handle communication
between hosts and could support the first applications, Telnet and the File Transfer Protocol (FTP). It would use packet-switching technology to
communicate. The Interface Message Processor (IMP) was developed to pass messages between hosts. It can be considered the first
packet gateway, or router. Hardware modems were designed and sent out to the participating organizations.
The first message sent over ARPANET happened on Oct. 29, 1969. Charley Kline, who was a student at the University of California Los Angeles
(UCLA), tried to log in to the mainframe at the Stanford Research Institute (SRI). He successfully typed in the characters L and O, but the computer
crashed when he typed the G of the command LOGIN. They were able to overcome the initial crash, however, and had a successful connection
that same day.
The first permanent connection between UCLA and SRI was put into place on Nov. 21, 1969. Two more universities joined ARPANET as founding
members on Dec. 5, 1969. These were the University of California, Santa Barbara and University of Utah School of Computing.
ARPANET grew rapidly in the early 1970s. Many universities and government computers joined the network during this time. In 1975, ARPANET was
declared operational and was used to develop further communications technology. In time, several computers in other countries were also
added using satellite links.
Many packet-based networks quickly came into operation after ARPANET became popular. These various networks could not communicate with
one another because each relied on its own equipment and protocols. Therefore, TCP/IP was developed to
enable communication between different networks. It was first put into operation in 1977.
TCP/IP enabled an interconnected network of networks and is the foundational technology of the internet. On Jan. 1, 1983, TCP/IP replaced NCP
as ARPANET's underlying communication protocol.
Also in 1983, ARPANET was divided into two networks, one for military use (MILNET) and one for civilian research. The word internet was first used to describe the
combination of these two networks.
The importance of ARPANET diminished as other networks became more dominant in the mid-1980s. The National Science Foundation Network
replaced ARPANET as the backbone of the internet in 1986. Commercial and other network providers also began operating during this time.
ARPANET was shut down in 1989. It was finally decommissioned in 1990.
Legacy of ARPANET
ARPANET stands as a major changing point in the development of computer technology. Many underlying internet technologies were first
developed on or for ARPANET. Telnet and FTP protocols were some of the first used on ARPANET, and they are still in use today. TCP/IP was
developed on it. The first network email was sent in 1971 over ARPANET. It also hosted what is considered the first marketing spam email in 1978.
ARPANET also led to many other networking firsts. List servers, or listservs, became early social networks. Early voice communication protocols
were developed on it. Password protection and data encryption were developed for use over ARPANET.
USENET
1.1. Discussion groups:
• The Usenet is a huge worldwide collection of discussion groups. Each discussion group has a
name, e.g. comp.os.linux.announce, and a collection of messages. These messages, usually called articles, are posted by
readers like you and me who have access to Usenet servers, and are then stored on the Usenet servers.
• This ability to both read and write into a Usenet newsgroup makes the Usenet very different from the bulk of what people
today call ``the Internet.'' The Internet has become a colloquial term to refer to the World Wide Web, and the Web is
(largely) read-only. There are online discussion groups with Web interfaces, and there are mailing lists, but Usenet is
probably more convenient than either of these for most large discussion communities. This is because the articles get
replicated to your local Usenet server, thus allowing you to read and post articles without accessing the global Internet,
something which is of great value for those with slow Internet links. Usenet articles also conserve bandwidth because they
do not come and sit in each member's mailbox, unlike email based mailing lists. This way, twenty members of a mailing list
in one office will have twenty copies of each message copied to their mailboxes. However, with a Usenet discussion group
and a local Usenet server, there's just one copy of each article, and it does not fill up anyone's mailbox.
• Another nice feature of having your own local Usenet server is that articles stay on the server even after you've read them.
You can't accidentally delete a Usenet article the way you can delete a message from your mailbox. This way, a Usenet
server is an excellent way to archive articles of a group discussion on a local server without placing the onus of archiving
on any group member. This makes local Usenet servers very valuable as archives of internal discussion messages within
corporate Intranets, provided the article expiry configuration of the Usenet server software has been set up for sufficiently
long expiry periods.
1.2. How it works, loosely speaking:
Usenet news works by the reader first firing up a Usenet news program, which in today's GUI world will most likely be something like
Netscape Messenger or Microsoft's Outlook Express. There are a lot of proven, well-designed character-based Usenet news readers,
but a proper review of the user agent software is outside the scope of this HOWTO, so we will just assume that you are using whatever
software you like. The reader then selects a Usenet newsgroup from the hundreds or thousands of newsgroups which are hosted by
her local server, and accesses all unread articles. These articles are displayed to her. She can then decide to respond to some of
them.
When the reader writes an article, either in response to an existing one or as a start of a brand-new thread of discussion, her
software posts this article to the Usenet server. The article contains a list of newsgroups into which it is to be posted. Once it is
accepted by the server, it becomes available for other users to read and respond to. The article is automatically expired or deleted
by the server from its internal archives based on expiry policies set in its software; the author of the article usually can do little or
nothing to control the expiry of her articles.
A Usenet server rarely works on its own. It forms a part of a collection of servers, which automatically exchange articles with each
other. The flow of articles from one server to another is called a newsfeed. In a simplistic case, one can imagine a worldwide network
of servers, all configured to replicate articles with each other, busily passing along copies across the network as soon as one of them
receives a new article posted by a human reader. This replication is done by powerful and fault-tolerant processes, and gives the
Usenet network its power. Your local Usenet server literally has a copy of all current articles in all relevant newsgroups.
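The article-expiry policy mentioned above can be sketched as a periodic sweep that discards articles older than their newsgroup's configured expiry period; the group names and periods below are invented for illustration.

```python
from datetime import date, timedelta

def expire_articles(articles, expiry_days, today):
    """Keep only articles younger than their newsgroup's expiry period."""
    default = expiry_days.get("default", 14)
    return [a for a in articles
            if today - a["posted"] <= timedelta(days=expiry_days.get(a["group"], default))]

# Internal discussion groups get a long expiry so they double as archives.
expiry_days = {"default": 14, "internal.announce": 365}
articles = [
    {"group": "comp.os.linux.announce", "posted": date(2024, 1, 1)},
    {"group": "internal.announce", "posted": date(2024, 1, 1)},
]
kept = expire_articles(articles, expiry_days, today=date(2024, 3, 1))
# only the internal.announce article survives the 14-day default expiry
```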
1.3. About sizes, volumes, and so on
Any would-be Usenet server administrator or creator must read the "Periodic Posting about the basic steps involved in configuring a machine to store Usenet
news," also known as the Site Setup FAQ, available from ftp://rtfm.mit.edu/pub/usenet/news.answers/usenet/site-
setup or ftp://ftp.uu.net/usenet/news.answers/news/site-setup.Z. It was last updated in 1997, but trends haven't changed much since then, though absolute
volume figures have.

If you want your Usenet server to be a repository for all articles in all newsgroups, you will probably not be reading this HOWTO, or even if you
do, you will rapidly realise that anyone who needs to read this HOWTO may not be ready to set up such a server. This is because the volumes of articles on the Usenet
have reached a point where very specialised networks, very high-end servers, and large disk arrays are required for handling such Usenet volumes. Those setups are
called ``carrier-class'' Usenet servers, and will be discussed a bit later on in this HOWTO. Administering such an array of hardware may not be the job of the new
Usenet administrator, for whom this HOWTO (and most Linux HOWTOs) is written.

Nevertheless, it may be interesting to understand what volumes we are talking
about. Usenet news article volumes have been doubling every fourteen months or so, going by comments from carrier-class Usenet administrators. At
the beginning of 1997, this volume was 1.2 GBytes of articles a day. Thus, the volumes should have undergone roughly five doublings, or grown 32 times, by
mid-2002, at the time of this writing. This gives us a volume of 38.4 GBytes per day. Assume that this transfer happens using uncompressed NNTP (the norm),
and add 50% extra for the overheads of NNTP, TCP, and IP. This gives you a raw data transfer volume of 57.6 GBytes/day, or about 460 Gbits/day. If you have to
transfer such volumes of data in 24 hours (86,400 seconds), you'll need raw bandwidth of about 5.3 Mbits per second just to receive all these articles. You'll need more
bandwidth to send out feeds to other neighbouring Usenet servers, and then you'll need bandwidth to allow your readers to access your servers and read and post
articles in retail quantities. Clearly, these volume figures are outside the network bandwidths of most corporate organisations or educational institutions, and therefore
only those who are in the business of offering Usenet news can afford it.

At the other end of the scale, it is perfectly feasible for a small office to subscribe to a well-
trimmed subset of Usenet newsgroups, and exclude most of the high-volume newsgroups. Starcom Software, where the authors of this HOWTO work, has worked
with a fairly large subset of 600 newsgroups, which is still a tiny fraction of the 15,000+ newsgroups that the carrier-class services offer. Your office or college may not
even need 600 groups. And our company had excluded specific high-volume but low-usefulness newsgroups like the talk, comp.binaries, and alt hierarchies. With
the pruned subset, the total volume of articles per day may amount to barely a hundred MBytes a day or so, and can be easily handled by most small offices and
educational institutions. In such situations, a single Intel Linux server can deliver excellent performance as a Usenet server.

Then there's the internal Usenet
service. By internal here, we mean a private set of Usenet newsgroups, not a private computer network. Every company or university which runs a Usenet news
service creates its own hierarchy of internal newsgroups, whose articles never leave the campus or office, and which therefore do not consume Internet bandwidth.
These newsgroups are often the ones most hotly accessed, and will carry more internally generated traffic than all the ``public'' newsgroups you may subscribe to
within your organisation. After all, how often does a guy have something to say which is relevant to the world at large, unless he's discussing a globally relevant topic
like ``Unix rules!''? If such internal newsgroups are the focus of your Usenet servers, then you may find that fairly modest hardware and Internet bandwidth will
suffice, depending on the size of your organisation.

The new Usenet server administrator has to undertake a sizing exercise to ensure that he does not bite off more
than he, or his network resources, can chew. We hope we have provided sufficient information for him to get started with the right questions.
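The sizing arithmetic in this section can be checked directly in a few lines, using the HOWTO's own figures (1.2 GBytes/day at the start of 1997, roughly five doublings by mid-2002, 50% protocol overhead):

```python
volume_1997 = 1.2                # GBytes of articles per day, start of 1997
doublings = 5                    # ~14-month doubling time, up to mid-2002
volume_2002 = volume_1997 * 2 ** doublings       # 38.4 GB/day
raw_transfer = volume_2002 * 1.5                 # +50% NNTP/TCP/IP overhead -> 57.6 GB/day
gbits_per_day = raw_transfer * 8                 # 460.8 Gbit/day
mbits_per_sec = gbits_per_day * 1000 / 86400     # sustained feed rate over 24 hours

print(round(volume_2002, 1))     # 38.4
print(round(mbits_per_sec, 1))   # 5.3
```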
DARPA
The Defense Advanced Research Projects Agency is a research and development agency of the United
States Department of Defense responsible for the development of emerging technologies for use by the
military.
ARPA research played a central role in launching the Information Revolution. The agency developed
and furthered much of the conceptual basis for the ARPANET—prototypical communications network
launched nearly half a century ago—and invented the digital protocols that gave birth to the Internet.
DARPA also provided many of the essential advances that made possible today’s computers and
communications systems, including seminal technological achievements that support the speech
recognition, touch-screen displays, accelerometers, and wireless capabilities at the core of today’s
smartphones and tablets. DARPA has also long been a leader in the development of artificial
intelligence, machine intelligence and semi-autonomous systems. DARPA’s efforts in this domain have
focused primarily on military operations, including command and control, but the commercial sector has
adopted and expanded upon many of the agency’s results to develop wide-spread applications in
fields as diverse as manufacturing, entertainment and education.
In 1973, the U.S. Defense Advanced Research Projects Agency (DARPA) initiated a research program to
investigate techniques and technologies for interlinking packet networks of various kinds. The objective
was to develop communication protocols which would allow networked computers to communicate
transparently across multiple, linked packet networks. This was called the Internetting project and the
system of networks which emerged from the research was known as the “Internet.” The system of
protocols which was developed over the course of this research effort became known as the TCP/IP
Protocol Suite, after the two initial protocols developed: Transmission Control Protocol (TCP) and Internet
Protocol (IP).
World Wide Web
The primary goals of network security are Confidentiality, Integrity, and Availability. These three
pillars of network security are often represented as the CIA triangle (or CIA triad).
Confidentiality − The function of confidentiality is to protect precious business data from
unauthorized persons. Confidentiality part of network security makes sure that the data is
available only to the intended and authorized persons.
Integrity − This goal means maintaining and assuring the accuracy and consistency of data.
The function of integrity is to make sure that the data is reliable and is not changed by
unauthorized persons.
Availability − The function of availability in Network Security is to make sure that the data,
network resources/services are continuously available to the legitimate users, whenever they
require it.
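A common way to support the integrity goal above is a cryptographic hash: any change to the data changes its digest, so tampering is detectable. A minimal sketch with Python's standard hashlib:

```python
import hashlib

def digest(data: bytes) -> str:
    """SHA-256 digest used as an integrity check value."""
    return hashlib.sha256(data).hexdigest()

original = b"transfer 100 to account 42"
stored_digest = digest(original)     # kept alongside the data

tampered = b"transfer 900 to account 42"
print(digest(original) == stored_digest)   # True: data unchanged
print(digest(tampered) == stored_digest)   # False: integrity violated
```

In practice a keyed construction such as an HMAC or a digital signature is used, so an attacker cannot simply recompute the digest after tampering.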
Network Security Attacks
A network attack is an attempt to gain unauthorized access to an organization’s network, with the objective of
stealing data or performing other malicious activity. There are two main types of network attacks:
Passive: Attackers gain access to a network and can monitor or steal sensitive information, but without
making any change to the data, leaving it intact.
Active: Attackers not only gain unauthorized access but also modify data, either deleting,
encrypting or otherwise harming it.
Some Network Attacks
1. Malware/Ransomware
2. Botnets
3. Computer Viruses and Worms
4. Phishing Attacks
5. DDoS (Distributed Denial of Service)
6. SQL Injection
7. Social Engineering Attacks
8. Man-in-the-Middle Attacks
Goals of Security
• Prevention
• Detection
• Recovery
Network Security Measures
1. Use strong passwords and multi-factor authentication
2. Access Control Policies
3. Put up a firewall
4. Use security software- anti-spyware, anti-malware and anti-virus
5. Update programs and systems regularly
6. Monitor for intrusion or IDS/IPS
7. Email security
8. Endpoint security
9. Network segmentation
10. Security information and event management (SIEM)
11. Virtual private network (VPN)
12. Web security
13. Wireless security
14. Raise awareness
15. Log management
1. Use strong passwords and multi-factor authentication:
Strong passwords are vital to good online security. Make your password difficult to guess by:
using a combination of capital and lower-case letters, numbers and symbols
making it between eight and 12 characters long
avoiding the use of personal data
changing it regularly
never using it for multiple accounts
using two factor authentication
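The guidelines above can be sketched as a simple password checker. The rules follow the list (8-12 characters, mixed case, digits, symbols, no personal data); the naive substring test for personal data is an illustrative simplification.

```python
import string

def check_password(password, personal_data=()):
    """Return a list of guideline violations; an empty list means it passes."""
    problems = []
    if not (8 <= len(password) <= 12):
        problems.append("should be 8-12 characters long")
    if not any(c.isupper() for c in password):
        problems.append("needs an upper-case letter")
    if not any(c.islower() for c in password):
        problems.append("needs a lower-case letter")
    if not any(c.isdigit() for c in password):
        problems.append("needs a digit")
    if not any(c in string.punctuation for c in password):
        problems.append("needs a symbol")
    if any(item.lower() in password.lower() for item in personal_data):
        problems.append("contains personal data")
    return problems

print(check_password("password"))                  # several violations
print(check_password("Xk7!mq2Pz", ("alice",)))     # passes: []
```

Note that a checker cannot enforce the remaining guidelines (regular changes, no reuse across accounts, two-factor authentication); those are policy and process matters.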
8. Endpoint security:
-Endpoint security refers to securing endpoints, or end-user devices like desktops, laptops, and mobile
devices. Endpoints serve as points of access to an enterprise network and create points of entry that can
be exploited by malicious actors.
-Endpoint security software protects these points of entry from risky activity and/or malicious attack.
When companies can ensure endpoint compliance with data security standards, they can maintain
greater control over the growing number and type of access points to the network.
9. Network segmentation
There are many kinds of network traffic, each associated with different security risks. Network
segmentation allows you to grant the right access to the right traffic, while restricting traffic
from suspicious sources.
10. Security information and event management (SIEM):
Sometimes simply pulling together the right information from so many different tools and
resources can be prohibitively difficult — particularly when time is an issue. SIEM tools and
software give responders the data they need to act quickly.
11. Virtual private network (VPN):
VPN tools are used to authenticate communication between secure networks and an
endpoint device. Remote-access VPNs generally use IPsec or Secure Sockets Layer (SSL)
for authentication, creating an encrypted line to block other parties from eavesdropping.
12. Web security:
Including tools, hardware, policies and more, web security is a blanket term to describe the
network security measures businesses take to ensure safe web use when connected to an
internal network. This helps prevent web-based threats from using browsers as access points
to get into the network.
Network Security Tools
Wireshark
Metasploit
CVE
Nmap
Acunetix web vulnerability scanner
Google dorking
Assignment Presentation (4 March)