
Information and Communication Technology

Lecture:
Presented By: Jaspreet Kaur
About me:
PhD Scholar at CSE Department in IITJ, Faculty at SPUP
#Researcher #Blogger #Faculty #Consultant
Mail id: kaur.3@iitj.ac.in
OSN: LinkedIn, Facebook, Researchgate, Twitter
What is Information?
What is Data/Information?

 Data are a set of values of qualitative or quantitative variables
about one or more persons or objects.
 Data that are useful in some context are called information.
 Structured data
Eg: Person - ID, Address, Phone no.
 Semi-structured data
Eg: Email, webpages
 Unstructured data
Eg: Audio, video, images, documents
Student

ID  Branch  Subjects       Phone No.
A   CSE     C, C++         123, 246
B   CSE     C              345
C   EE      IoT            456, 897, 787
D   ME      SVM            234
E   EC      Communication  678
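In code, the difference between structured and semi-structured data can be sketched like this (the values are hypothetical, mirroring the Student table above):

```python
import json

# Structured data: a fixed schema, like the Student table (values are hypothetical).
student = {"ID": "A", "Branch": "CSE", "Subjects": ["C", "C++"], "Phone": ["123", "246"]}

# Semi-structured data: self-describing JSON, where fields may vary between records.
email = json.loads('{"from": "a@example.com", "subject": "Hello", "attachments": 2}')

print(student["Branch"])  # CSE
print(email["subject"])   # Hello
```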
Facebook Data
Sources/Usage of Data

 Social network sites - Facebook, Twitter, etc.
 University Websites
 Search engines
 Medical Data
 IoT Data
Storage of Data

 File-based storage
 Magnetic Disk
 HDD
 SSD
 Cloud-based storage
 Personal Computer
 Mobile Devices
Storage of Data in Digital Format

 Binary
 Hexadecimal
 ASCII Code
 XML
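As a quick sketch of these representations, Python can show how a single character is stored as an ASCII code, in binary, and in hexadecimal:

```python
# One character, three digital representations (ASCII code, binary, hexadecimal).
text = "A"
ascii_code = ord(text)               # ASCII code point: 65
binary = format(ascii_code, "08b")   # 8-bit binary: 01000001
hexa = format(ascii_code, "02X")     # hexadecimal: 41

print(ascii_code, binary, hexa)      # 65 01000001 41
```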
Data Communication

 Offline/Online Data
 LAN, MAN, WAN
 Network protocols and structure
Information Security

 Why do we need to secure our data?
 Confidentiality, Integrity, Availability (CIA)
 Where do we secure our data?
- Device-level security
- Network-level security
 How do we secure our data?
- Various cryptographic protocols
- Intrusion Detection Systems
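As one illustrative sketch (not any specific protocol from the list above), a cryptographic hash demonstrates the integrity part of CIA: any change to the data changes the digest, so tampering is detected. The message text is hypothetical.

```python
import hashlib

# Integrity check (the "I" in CIA): hash the original message with SHA-256.
message = b"transfer 100 rupees"
digest = hashlib.sha256(message).hexdigest()

# A tampered copy produces a different digest, so the change is detected.
tampered = b"transfer 900 rupees"
print(hashlib.sha256(tampered).hexdigest() == digest)  # False
```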
Are our security plans correct?

 Security policies
 Audits
 Standards
 Laws
 Risk assessment and evaluation methods
Comparison of manual and electronic
storage of data
Manual Storage System                        Electronic Storage System

1. Processes a limited volume of data        1. Processes a large volume of data
2. Uses paper to store data                  2. Uses mass storage devices in the computer itself
3. Lower speed and accuracy                  3. Higher speed and greater accuracy
4. Higher processing cost, since tasks       4. Lower processing cost, because the computer
   are human-oriented                           performs the repetitive tasks
5. Occupies more space                       5. Little space is sufficient
6. Repetitive tasks reduce efficiency;       6. Efficiency is maintained throughout, without
   humans feel boredom and tiredness            boredom or tiredness
Types of Computers

1. Digital Computers
2. Analog Computers
3. Hybrid Computers
Digital Computer

A digital computer is any of a class of devices capable of solving
problems by processing information in discrete form. It operates on
data, including magnitudes, letters, and symbols, that are expressed in
binary code, i.e., using only the two digits 0 and 1.

1. Smart phones
2. Desktops
3. Laptops
4. Tablets
 Analog Computers:
Analogue computers are designed to process analogue data.
Analogue data is continuous data that changes continuously and
cannot take discrete values, such as speed, temperature, pressure and
current.

 Hybrid Computers:
A hybrid computer has features of both analogue and digital computers.
It is fast like an analogue computer and has memory and accuracy like a
digital computer. It can process both continuous and discrete data.
Eg: Real time systems
Components of Digital Computer
Data Processing

Data in its raw form is not useful to any organization. Data processing is the
method of collecting raw data and translating it into usable information. It is
usually performed in a step-by-step process by a team of data
scientists and data engineers in an organization. The raw data is collected,
filtered, sorted, processed, analyzed, stored and then presented in a readable
format.

• Manual Data Processing
• Mechanical Data Processing
• Electronic Data Processing
Step 1: Collection
The collection of raw data is the first step of the data processing cycle. The type of raw data
collected has a huge impact on the output produced. Hence, raw data should be gathered
from defined and accurate sources so that the subsequent findings are valid and usable.
Raw data can include monetary figures, website cookies, profit/loss statements of a
company, user behavior, etc.

Step 2: Preparation
Data preparation or data cleaning is the process of sorting and filtering the raw data to
remove unnecessary and inaccurate data. Raw data is checked for errors, duplication,
miscalculations or missing data, and transformed into a suitable form for further analysis and
processing. This is done to ensure that only the highest quality data is fed into the processing
unit.
Step 3: Input
In this step, the raw data is converted into machine readable form and fed into
the processing unit. This can be in the form of data entry through a keyboard,
scanner or any other input source.

Step 4: Data Processing
In this step, the raw data is subjected to various data processing methods using
machine learning and artificial intelligence algorithms to generate a desirable
output. This step may vary slightly from process to process depending on the
source of data being processed (data lakes, online databases, connected
devices, etc.) and the intended use of the output.
Step 5: Output
The data is finally transmitted and displayed to the user in a readable form like
graphs, tables, vector files, audio, video, documents, etc. This output can be
stored and further processed in the next data processing cycle.

Step 6: Storage
The last step of the data processing cycle is storage, where data and metadata
are stored for further use. This allows for quick access and retrieval of information
whenever needed, and also allows the data to be used directly as input in the next
data processing cycle.
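The six steps above can be sketched in a few lines of Python; the raw values and the averaging step are hypothetical stand-ins for a real pipeline.

```python
# A minimal sketch of the data processing cycle described above
# (collection -> preparation -> input -> processing -> output -> storage).
# All data values here are hypothetical.

raw = ["12", "7", None, "12", "oops", "30"]           # Step 1: collection

cleaned = []                                          # Step 2: preparation
for v in raw:                                         # drop missing, invalid,
    if v is not None and v.isdigit() and v not in cleaned:  # and duplicate values
        cleaned.append(v)

numbers = [int(v) for v in cleaned]                   # Step 3: input (machine-readable)

average = sum(numbers) / len(numbers)                 # Step 4: processing

print(f"average = {average:.2f}")                     # Step 5: output

storage = {"input": numbers, "result": average}       # Step 6: storage for the next cycle
```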
Data Representation
Number System
The number system is simply a system to represent or express numbers. There are
various types of number systems and the most commonly used ones are
decimal number system, binary number system, octal number system, and
hexadecimal number system.
Decimal Values

 102, 100.001
 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, …
Binary Values

 0, 1, 10, 11, 100, 101, 110, …
Octal Values
 0, 1, 2, 3, 4, 5, 6, 7, 10, 11, 12, …
Hexadecimal Values
 0, 1, 2, …, 9, A, B, C, D, E, F, 10, 11, …
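For example, Python's built-in conversions show the decimal value 102 from the slide in each of the other number systems:

```python
# The decimal value 102 in the binary, octal, and hexadecimal systems.
n = 102
print(bin(n))  # 0b1100110
print(oct(n))  # 0o146
print(hex(n))  # 0x66

# int() parses a string in any base, converting back to decimal.
assert int("1100110", 2) == 102
assert int("146", 8) == 102
assert int("66", 16) == 102
```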
Computer Network
A network is a set of devices (often referred to as nodes) connected by
communication links. A node can be a computer, printer, or any other device
capable of sending and/or receiving data generated by other nodes on the
network.
1. Message: The message is the information (data) to be communicated.
Popular forms of information include text, numbers, pictures, audio, and
video.

2. Sender: The sender is the device that sends the data message. It can be a
computer, workstation, telephone handset, video camera, and so on.
3. Receiver: The receiver is the device that receives the message. It can be a
computer, workstation, telephone handset, television, and so on.

4. Transmission medium: The transmission medium is the physical path by
which a message travels from sender to receiver. It can be wired or
wireless. Some examples of transmission media include twisted-pair wire,
coaxial cable, fiber-optic cable, and radio waves.

5. Protocol: A protocol is a set of rules that govern data communications. It
represents an agreement between the communicating devices. Without
a protocol, two devices may be connected but not communicating, just
as a person speaking French cannot be understood by a person who
speaks only Japanese.
Network Criteria

1. Performance: Performance can be measured in many ways, including
transit time and response time. The performance of a network depends on
a number of factors, including the number of users, the type of
transmission medium, the capabilities of the connected hardware, and
the efficiency of the software.

2. Delivery: The system must deliver data to the correct destination. Data
must be received by the intended device or user and only by that device
or user.

3. Accuracy: The system must deliver the data accurately. Data that have
been altered in transmission and left uncorrected are unusable.
4. Timeliness: The system must deliver data in a timely manner. Data delivered
late are useless.

5. Jitter: Jitter refers to the variation in the packet arrival time. It is the uneven
delay in the delivery of audio or video packets.

6. Reliability: In addition to accuracy of delivery, network reliability is measured
by the frequency of failure and the time it takes a link to recover from a failure.

7. Security: Network security issues include protecting data from unauthorized
access, protecting data from damage and destruction, and implementing
policies and procedures for recovery from breaches and data losses.
Features of Computer Network
 Communication speed
 Data sharing
 Back up and Roll back is easy
 Security
 Performance
 Scalability
 Reliability
 Software and Hardware Sharing
 Software and Hardware Compatibility
Uses of Computer Networks

 Information and Resource Sharing
 Retrieving Remote Information
 Speedy Interpersonal Communication
 E-Commerce
 Highly Reliable Systems
 Cost-Effective Systems
Network Connections Types

1. Point-to-Point: A point-to-point connection provides a dedicated link between
two devices. The entire capacity of the link is reserved for transmission between those two
devices. Most point-to-point connections use an actual length of wire or cable to
connect the two ends, but other options, such as microwave or satellite links, are also
possible. When you change television channels by infrared remote control, you are
establishing a point-to-point connection between the remote control and the television's
control system.
2. Multipoint: A multipoint connection is one in which more than two specific
devices share a single link. In a multipoint environment, the capacity of the channel
is shared, either spatially or temporally. If several devices can use the link
simultaneously, it is a spatially shared connection. If users must take turns, it is a
timeshared connection.
Computer Network Types
A computer network can be categorized by their size.
A computer network is mainly of four types:

1. LAN (Local Area Network)
2. PAN (Personal Area Network)
3. MAN (Metropolitan Area Network)
4. WAN (Wide Area Network)
LAN (Local Area Network)
•A Local Area Network is a group of computers connected to each other in a small area
such as a building or office.

•A LAN is used for connecting two or more personal computers through a communication
medium, either wired or wireless.

•It is less costly, as it is built with inexpensive hardware such as hubs, access points,
network adapters, and Ethernet cables.

•Data is transferred at a very fast rate in a Local Area Network.

•A Local Area Network provides higher security.


PAN (Personal Area Network)

 A Personal Area Network is a network arranged around an individual
person, typically within a range of about 10 meters (30 feet).
 A Personal Area Network is used for connecting computer devices
of personal use.
 Thomas Zimmerman was the first research scientist to propose the idea
of the Personal Area Network.
 Personal devices typically used to form a personal
area network include laptops, mobile phones, media players and
gaming consoles.
There are two types of Personal Area Network:

 Wired Personal Area Network
 Wireless Personal Area Network

Wireless Personal Area Network: A wireless Personal Area Network is developed by
using wireless technologies such as WiFi and Bluetooth. It is a low-range network.

Wired Personal Area Network: A wired Personal Area Network is created by using
USB.
MAN(Metropolitan Area Network)

 A metropolitan area network is a network that covers a larger geographic
area by interconnecting different LANs to form a larger network.
 Government agencies use a MAN to connect to citizens and private
industries.
 In a MAN, various LANs are connected to each other through a telephone
exchange line.
 The most widely used protocols in a MAN are RS-232, Frame Relay, ATM, ISDN,
OC-3, ADSL, etc.
 It has a higher range than a Local Area Network (LAN).
WAN(Wide Area Network)

 A Wide Area Network is a network that extends over a large
geographical area, such as states or countries.
 A Wide Area Network is a much bigger network than a LAN.
 A Wide Area Network is not limited to a single location; it spans
a large geographical area through telephone lines, fibre-optic
cables or satellite links.
 The Internet is one of the biggest WANs in the world.
 Wide Area Networks are widely used in the fields of business,
government, and education.
Computer Network Architecture
Computer Network Architecture is defined as the physical and logical
design of the software, hardware, protocols, and media of
data transmission. Simply put, it describes how computers are
organized and how tasks are allocated among them.

Two types of network architecture are used:

• Peer-to-Peer network
• Client/Server network
Peer-To-Peer network

•A Peer-to-Peer network is a network in which all the computers are linked
together with equal privileges and responsibilities for processing data.

•A Peer-to-Peer network is useful for small environments, usually up to 10
computers.

•A Peer-to-Peer network has no dedicated server.

•Special permissions are assigned to each computer for sharing
resources, but this can lead to a problem if the computer with the resource is
down.
Client/Server Network

 A Client/Server network is a network model designed for end users,
called clients, to access resources such as songs, videos, etc. from a
central computer known as a server.
 The central controller is known as the server, while all other computers in
the network are called clients.
 A server performs all the major operations, such as security and network
management.
 A server is responsible for managing all the resources, such as files,
directories, printers, etc.
 All the clients communicate with each other through the server. For
example, if client 1 wants to send some data to client 2, it first sends
a request to the server for permission. The server then sends a
response to client 1 to initiate its communication with client 2.
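A minimal sketch of a client/server exchange, using Python TCP sockets on localhost. The port choice and the "OK" reply format are assumptions for illustration, not part of the slide's example.

```python
import socket
import threading

# Server side: bind and listen first, so the client cannot connect too early.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))       # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

def handle_one_client():
    conn, _ = srv.accept()               # server waits for a client
    with conn:
        request = conn.recv(1024)        # the client's request arrives here
        conn.sendall(b"OK: " + request)  # server sends its response

t = threading.Thread(target=handle_one_client)
t.start()

# Client side: contact the central server and wait for its response.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(("127.0.0.1", port))
    cli.sendall(b"hello")
    reply = cli.recv(1024)

t.join()
srv.close()
print(reply)  # b'OK: hello'
```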
Internet
 The Internet is an interconnection of millions of computers globally, forming a network
in which any computer can communicate with any other computer as long as they are both
connected to the network. It is a web of networked devices such as routers, computers,
repeaters, satellites, WiFi access points, servers, etc., constantly communicating across data
lines and wireless signals.

 No one person, company, organization or government runs the Internet. It is a globally
distributed network comprising many voluntarily interconnected autonomous networks. It
operates without a central governing body, with each constituent network setting and
enforcing its own policies. The Internet is more of a concept than an actual tangible entity, and it
relies on a physical infrastructure that connects networks to other networks. Anyone can access
the Internet using an internet-connected device, such as a desktop computer, laptop, mobile
phone or tablet.

 The Internet is a global computer network providing a variety of information and communication
facilities, consisting of interconnected networks using standardized communication protocols.
Intranet
An intranet is a computer network for sharing information, collaboration tools, operational systems, and other
computing services within an organization, usually to the exclusion of access by outsiders. The term is used in
contrast to public networks, such as the Internet, but uses most of the same technology based on the Internet
Protocol Suite.
Extranet

 An extranet is an intranet that can be partially accessed by authorized outside users,
enabling businesses to exchange information over the internet in a secure
way.
 Where intranets embrace employees within a company, extranets extend
outwards to offer similar functions to those working closely with the business
but separate from it.
 An extranet is a controlled private network that allows access to partners,
vendors and suppliers or an authorized set of customers, normally to a subset
of the information accessible from an organization's intranet.
Darknet
 A dark net or darknet is an overlay network within the Internet that can only be
accessed with specific software, configurations, or authorization, and often uses a
unique customized communication protocol.
Topologies
Topology defines the structure of the network of how all the components are
interconnected to each other. There are two types of topology: physical and
logical topology.
A logical topology is how devices appear connected to the user. A physical
topology is how they are actually interconnected with wires and cables.
Bus Topology
•The bus topology is designed in such a way that all the stations are connected through a single
cable known as a backbone cable.
•Each node is either connected to the backbone cable by drop cable or directly connected to the
backbone cable.
•When a node wants to send a message over the network, it puts the message on the backbone
cable. All the stations available in the network will receive the message, whether it is addressed to
them or not.
•The bus topology is mainly used in 802.3 (ethernet) and 802.4 standard networks.
•The configuration of a bus topology is quite simpler as compared to other topologies.
•The backbone cable is considered as a "single lane" through which the message is broadcast to
all the stations.
•The most common access method in bus topology is CSMA.
Advantages of Bus topology

 Low-cost cable: In bus topology, nodes are directly connected to
the cable without passing through a hub. Therefore, the initial cost
of installation is low.
 Moderate data speeds: Coaxial or twisted-pair cables are mainly
used in bus-based networks and support up to 10 Mbps.
 Familiar technology: Bus topology is a familiar technology as the
installation and troubleshooting techniques are well known, and
hardware components are easily available.
 Limited failure: A failure in one node will not have any effect on
other nodes.
Disadvantages of Bus topology

 Extensive cabling: A bus topology is quite simple, but it still requires
a lot of cabling.
 Difficult troubleshooting: It requires specialized test equipment to
locate cable faults. If any fault occurs in the cable, it
disrupts communication for all the nodes.
 Signal interference: If two nodes send messages simultaneously,
the signals of the two nodes collide with each other.
 Difficult reconfiguration: Adding new devices to the network
slows the network down.
 Attenuation: Attenuation is a loss of signal strength that leads to communication
issues. Repeaters are used to regenerate the signal.
Ring Topology
 Ring topology is like a bus topology, but with connected ends.
 The node that receives the message from the previous computer will retransmit to the next node.
 The data flows in a loop continuously known as an endless loop.
 The data in a ring topology can flow in a clockwise/anticlockwise direction. It is either unidirectional or
bidirectional.
 The most common access method of the ring topology is token passing.
Advantages of Ring topology

 Network management: Faulty devices can be removed from the
network without bringing the network down.
 Product availability: Many hardware and software tools for network
operation and monitoring are available.
 Cost: Twisted pair cabling is inexpensive and easily available.
Therefore, the installation cost is very low.
 Reliable: It is a more reliable network because the communication
system is not dependent on the single host computer.
Disadvantages of Ring topology

 Difficult troubleshooting: It requires specialized test equipment to
locate cable faults. If any fault occurs in the cable, it
disrupts communication for all the nodes.
 Failure: The breakdown in one station leads to the failure of the
overall network.
 Reconfiguration difficult: Adding new devices to the network would
slow down the network.
 Delay: Communication delay is directly proportional to the number
of nodes. Adding new devices increases the communication delay.
Star Topology

• Star topology is an arrangement of the network in which every node is connected
to a central hub, switch or central computer.
• The central computer is known as the server, and the peripheral devices attached
to the server are known as clients.
• Coaxial cables, RJ-45 cables or WiFi are used to connect the computers.
Advantages of Star topology

 Efficient troubleshooting: Troubleshooting is quite efficient in a star topology compared
to a bus topology. In a bus topology, the manager has to inspect kilometers of cable. In a star
topology, all the stations are connected to the centralized network, so the network
administrator only has to go to a single station to troubleshoot a problem.

 Network control: Complex network control features can be easily implemented in the star
topology. Any changes made in the star topology are automatically accommodated.
 Limited failure: As each station is connected to the central hub with its own cable, therefore
failure in one cable will not affect the entire network.

 Easily expandable: It is easily expandable as new stations can be added to the open ports
on the hub.

 Cost effective: Star topology networks are cost-effective as it uses inexpensive coaxial cable.
 High data speeds: It supports a bandwidth of approximately 100 Mbps. Ethernet 100BaseT is one of
the most popular star topology networks.
Disadvantages of Star topology

 Central point of failure: If the central hub or switch goes down,
all the connected nodes will be unable to communicate with
each other.
 Cable: Sometimes cable routing becomes difficult when a
significant amount of routing is required.
Tree topology
•Tree topology combines the characteristics of bus topology and star topology.
•A tree topology is a type of structure in which all the computers are connected
with each other in hierarchical fashion.
•The top-most node in a tree topology is known as the root node, and all other
nodes are descendants of the root node.
•Only one path exists between two nodes for data transmission.
Thus, it forms a parent-child hierarchy.
Advantages of Tree topology

 Support for broadband transmission: Tree topology is mainly used to provide broadband transmission, i.e.,
signals are sent over long distances without being attenuated.
 Easily expandable: We can add the new device to the existing network. Therefore, we can say that tree
topology is easily expandable.
 Easily manageable: In tree topology, the whole network is divided into segments known as star networks
which can be easily managed and maintained.
 Error detection: Error detection and error correction are very easy in a tree topology.
 Limited failure: The breakdown in one station does not affect the entire network.
 Point-to-point wiring: It has point-to-point wiring for individual segments.
Disadvantages of Tree topology

 Difficult troubleshooting: If any fault occurs in a node, it becomes
difficult to troubleshoot the problem.
 High cost: Devices required for broadband transmission are very costly.
 Failure: A tree topology mainly relies on the main bus cable; a failure in the main
bus cable damages the overall network.
 Difficult reconfiguration: If new devices are added, it becomes difficult
to reconfigure.
Mesh topology
 Mesh topology is an arrangement of the network in which computers are interconnected with each
other through various redundant connections.
 There are multiple paths from one computer to another computer.
 It does not contain a switch, hub or any central computer acting as a central point of
communication.
 The Internet is an example of mesh topology.
 Mesh topology is mainly used for WAN implementations where communication failures are a critical
concern.
 Mesh topology is also widely used for wireless networks.
 The number of cables in a fully connected mesh is given by the formula:
Number of cables = n*(n-1)/2, where n is the number of nodes in the network.
Mesh topology is divided into two categories:
 Fully connected mesh topology
 Partially connected mesh topology

Full Mesh Topology: In a full mesh topology, each computer is connected to
all the other computers available in the network.

Partial Mesh Topology: In a partial mesh topology, not all computers are interconnected;
each computer is connected only to those computers with which it communicates
frequently.
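The cable-count formula for a full mesh can be checked with a short function:

```python
# Links needed for a fully connected mesh of n nodes: n*(n-1)/2.
def mesh_links(n):
    return n * (n - 1) // 2

for n in (2, 3, 4, 5):
    print(n, "nodes ->", mesh_links(n), "cables")  # e.g. 5 nodes -> 10 cables
```

The quadratic growth in the link count is why a full mesh becomes expensive as the network grows.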
Advantages of Mesh topology

 Reliable: Mesh topology networks are very reliable, because the breakdown
of any one link will not affect communication between the connected
computers.
 Fast Communication: Communication is very fast between the
nodes.
 Easier Reconfiguration: Adding new devices would not disrupt the
communication between other devices.
Disadvantages of Mesh topology
 Cost: A mesh topology contains a large number of connected
devices such as a router and more transmission media than other
topologies.
 Management: Mesh topology networks are very large and very
difficult to maintain and manage. If the network is not monitored
carefully, a communication link failure can go undetected.
 Efficiency: In this topology, the number of redundant connections is high, which
reduces the efficiency of the network.
Hybrid Topology
 The combination of various different topologies is known as Hybrid topology.
 A Hybrid topology is a connection between different links and nodes to
transfer the data.
 When two or more different topologies are combined, the result is termed a
Hybrid topology; connecting similar topologies to each other does
not result in a Hybrid topology. For example, if there is a ring topology in
one branch of ICICI Bank and a bus topology in another branch,
connecting these two topologies results in a Hybrid topology.
Advantages of Hybrid Topology

 Reliable: A fault in any part of the network will not affect
the functioning of the rest of the network.
 Scalable: The size of the network can be easily expanded by adding
new devices without affecting the functionality of the existing
network.
 Flexible: This topology is very flexible, as it can be designed
according to the requirements of the organization.
 Effective: Hybrid topology is very effective, as it can be designed in
such a way that the strengths of the network are maximized and its
weaknesses are minimized.
Disadvantages of Hybrid topology

 Complex design: The major drawback of a Hybrid topology is the
design of the network. It is very difficult to design the
architecture of a Hybrid network.
 Costly hubs: The hubs used in a Hybrid topology are very expensive,
as they differ from the usual hubs used in other topologies.
 Costly infrastructure: The infrastructure cost is very high, as a hybrid
network requires a lot of cabling, network devices, etc.
Transmission modes

The way in which data is transmitted from one device to another device is
known as transmission mode. The transmission mode is also known as the
communication mode.

 Simplex mode
 Half-duplex mode
 Full-duplex mode
Simplex mode
•In simplex mode, the communication is unidirectional, i.e., data flows in one
direction only.
•A device can only send data but cannot receive it, or it can receive data but
cannot send it.
•Simplex mode is used in business settings, such as sales displays, that do not require any
corresponding reply.
•A radio station is a simplex channel, as it transmits the signal to the listeners but
never allows them to transmit back.
•A keyboard and a monitor are examples of simplex mode: a keyboard can only
accept data from the user, and a monitor can only be used to display data on the
screen.
Advantage of Simplex mode:
 In simplex mode, the station can utilize the entire bandwidth of the
communication channel, so that more data can be transmitted at
a time.
Disadvantage of Simplex mode:
 Communication is unidirectional, so it has no inter-communication
between devices.
Half-Duplex mode
•In a half-duplex channel, the direction of communication can be reversed, i.e., a station can both
transmit and receive data.
•Messages flow in both directions, but not at the same time.
•The entire bandwidth of the communication channel is utilized in one direction at a time.
•In half-duplex mode, it is possible to perform error detection; if any error occurs, the
receiver requests the sender to retransmit the data.
•A walkie-talkie is an example of half-duplex mode: one party speaks while the other listens.
After a pause, the other speaks and the first party listens. Speaking simultaneously
creates a distorted sound which cannot be understood.
Advantage of Half-duplex mode:
 In half-duplex mode, both the devices can send and receive the
data and also can utilize the entire bandwidth of the
communication channel during the transmission of data.
Disadvantage of Half-duplex mode:
 In half-duplex mode, when one device is sending data, the
other has to wait; this causes a delay in sending data at the
right time.
Full-duplex mode
 In full-duplex mode, the communication is bi-directional, i.e., data flows in both
directions.
 Both stations can send and receive messages simultaneously.
 Full-duplex mode has two simplex channels: one channel has traffic moving in one direction,
and the other channel has traffic flowing in the opposite direction.
 Full-duplex mode is the fastest mode of communication between devices.
 The most common example of full-duplex mode is the telephone network. When two
people communicate with each other over a telephone line, both can talk and listen at
the same time.
Advantage of Full-duplex mode:
 Both the stations can send and receive the data at the same time.
Disadvantage of Full-duplex mode:
 If no dedicated path exists between the devices, the
capacity of the communication channel is divided into two parts.
Computer Network Components
Computer network components are the major hardware parts needed to install a network.
Some important network components are the NIC, switch, cable, hub, bridge, router, etc.
Depending on the type of network we need to install, some network components can
be omitted. For example, a wireless network does not require a cable.

 1. NIC: NIC stands for network interface card. A NIC is a hardware component used to connect a
computer to a network. It can support transfer rates of 10, 100, or 1000
Mb/s. The MAC address, or physical address, is encoded on the network card chip and is
assigned by the IEEE to identify a network card uniquely.
Wired NIC: A wired NIC is present on the motherboard; cables and connectors are used with
a wired NIC to transfer data.
Wireless NIC: A wireless NIC contains an antenna to obtain a connection over a wireless
network. For example, a laptop computer contains a wireless NIC.
2.Cables:
Cable is a transmission media used for transmitting a signal. There are
mainly three types of cables used in transmission: Twisted pair cable,
Coaxial cable, Fibre-optic cable.

3. Repeater: A repeater operates at the physical layer. Its job is to
regenerate the signal over the same network before the signal becomes
too weak or corrupted, so as to extend the length over which the signal can
be transmitted on the same network. An important point to note
about repeaters is that they do not amplify the signal: when the signal
becomes weak, they copy it bit by bit and regenerate it at the
original strength. A repeater is a 2-port device.
4. Bridge: A bridge operates at the data link layer. A bridge is a repeater with the added
functionality of filtering content by reading the MAC addresses of the source and
destination. It is also used for interconnecting two LANs working on the same protocol. It
has a single input and a single output port, making it a 2-port device.
5. Hub:
 A hub is basically a multiport repeater. A hub is a hardware device that divides a network
connection among multiple devices. When a computer requests information from the network, it
first sends the request to the hub through a cable. The hub broadcasts this request to the entire
network, and each device checks whether the request belongs to it; if not, the request is dropped.
 The process used by the hub consumes more bandwidth and limits the amount of communication.
Nowadays the hub is largely obsolete, replaced by more advanced network components such as
switches and routers.
6. Switch:
A switch is a multiport bridge. A switch is a hardware device that connects multiple devices on a
computer network and contains more advanced features than a hub. The switch maintains an
updated table (the MAC address table) that decides where incoming data should be forwarded.
A switch delivers the message only to the correct destination, based on the physical (MAC) address
in the incoming message; it does not broadcast the message to the entire network like a hub.
In effect, the switch provides a direct connection between the source and the destination,
which increases the speed of the network.
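The forwarding difference between a hub and a switch can be sketched in a few lines of Python (a toy simulation, not a real network stack; the port numbers and MAC strings below are made up for illustration):

```python
def hub_forward(frame, ports):
    """A hub broadcasts every incoming frame to all other ports."""
    return [p for p in ports if p != frame["in_port"]]

def switch_forward(frame, ports, mac_table):
    """A switch learns source MACs and forwards only to the learned port."""
    mac_table[frame["src_mac"]] = frame["in_port"]   # learn where the sender lives
    dst_port = mac_table.get(frame["dst_mac"])
    if dst_port is None:                             # destination unknown: flood
        return [p for p in ports if p != frame["in_port"]]
    return [dst_port]

ports, table = [1, 2, 3, 4], {}
f1 = {"src_mac": "AA:AA", "dst_mac": "BB:BB", "in_port": 1}
print(hub_forward(f1, ports))            # [2, 3, 4] -- every device gets it
print(switch_forward(f1, ports, table))  # [2, 3, 4] -- BB:BB not learned yet
f2 = {"src_mac": "BB:BB", "dst_mac": "AA:AA", "in_port": 2}
print(switch_forward(f2, ports, table))  # [1] -- delivered only to AA:AA
```

Once the switch has heard from both stations, it stops flooding, which is exactly why it saves bandwidth compared with a hub.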
7. Router
•A router is a hardware device used to connect a LAN to an internet connection. It receives,
analyzes and forwards incoming packets to another network.
•A router works at Layer 3 (the network layer) of the OSI reference model.
•A router forwards packets based on the information available in its routing table.
•It determines the best path among the available paths for the transmission of a packet.
OSI Model
•OSI stands for Open Systems Interconnection. It is a reference model that describes
how information from a software application in one computer moves through a
physical medium to a software application in another computer.

•OSI consists of seven layers, and each layer performs a particular network
function.

•The OSI model was developed by the International Organization for Standardization
(ISO) in 1984, and it is now considered an architectural model for inter-computer
communications.

•The OSI model divides the whole task into seven smaller, manageable tasks,
with each layer assigned a particular task.

•Each layer is self-contained, so the task assigned to it can be
performed independently.
Characteristics of OSI Model

 The OSI model is divided into two groups of layers: the
upper layers and the lower layers.
 The upper layers of the OSI model mainly deal with
application-related issues and are implemented only in
software. The application layer is closest to the end user;
both the end user and the application layer interact with
software applications. An "upper layer" refers to the layer
just above another layer.
 The lower layers of the OSI model deal with data-transport
issues. The data link layer and the physical layer are
implemented in hardware and software. The physical layer
is the lowest layer of the OSI model and is closest to the
physical medium; it is mainly responsible for placing
information on the physical medium.
Functions of the OSI Layers
1. Physical Layer: The main functionality of the physical layer is to transmit individual bits from one
node to the next. It is the lowest layer of the OSI model. It establishes, maintains and
deactivates the physical connection, and it specifies the mechanical, electrical and procedural
network interface specifications.

Functions of the Physical layer:
-Line Configuration: It defines the way two or more devices can be connected physically.
-Data Transmission: It defines the transmission mode (simplex, half-duplex or full-duplex)
between the two devices on the network.
-Topology: It defines the way network devices are arranged.
-Signals: It determines the type of signal used for transmitting the information.
2. Data-Link Layer: This layer is responsible for the error-free transfer of data frames. It defines the
format of the data on the network and provides reliable, efficient communication between two or
more devices. It is mainly responsible for the unique identification of each device that resides on a
local network.

Functions of the Data-link layer:
-Framing: The data link layer translates the physical layer's raw bit stream into units known as
frames. The data link layer adds a header and a trailer to each frame; the header contains the
hardware (MAC) destination and source addresses.
-Physical Addressing: The data link layer adds a header to the frame that contains the destination
address. The frame is transmitted to the destination address mentioned in the header.
-Flow Control: Flow control is a main functionality of the data link layer. It is the technique
through which a constant data rate is maintained on both sides so that no data get corrupted;
it ensures that a transmitting station with a higher processing speed (such as a server) does not
overwhelm a receiving station with a lower processing speed.
-Error Control: Error control is achieved by adding a calculated value, a CRC (Cyclic Redundancy
Check), placed in the data link layer's trailer before the frame is handed to the physical layer.
If an error occurs, the receiver sends an acknowledgment requesting retransmission of the
corrupted frame.
-Access Control: When two or more devices are connected to the same communication channel,
data link layer protocols are used to determine which device has control over the link at a
given time.
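The CRC-based error control described above can be sketched in Python (a simplified illustration: real links compute the CRC in hardware, and zlib's CRC-32 merely stands in for the link-layer polynomial):

```python
import zlib

def make_frame(payload: bytes) -> bytes:
    """Append a 4-byte CRC-32 trailer to the payload, as the sender does."""
    crc = zlib.crc32(payload).to_bytes(4, "big")
    return payload + crc

def check_frame(frame: bytes) -> bool:
    """Recompute the CRC at the receiver and compare it with the trailer."""
    payload, trailer = frame[:-4], frame[-4:]
    return zlib.crc32(payload).to_bytes(4, "big") == trailer

frame = make_frame(b"hello, link layer")
print(check_frame(frame))                          # True: frame arrived intact
corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]   # flip a single bit in transit
print(check_frame(corrupted))                      # False: request retransmission
```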
3. Network Layer: Layer 3 manages device addressing and tracks the location of devices on
the network. It determines the best path to move data from source to destination based on
network conditions, priority of service, and other factors, and it is responsible for routing and
forwarding packets. Routers are Layer 3 devices; they are specified in this layer and provide
routing services within an internetwork. The protocols used to route network traffic are known
as network layer protocols; examples are IPv4 and IPv6.
Functions of the Network Layer:
-Internetworking: Internetworking is the main responsibility of the network layer. It provides
a logical connection between different devices.
-Addressing: The network layer adds the source and destination addresses to the header of
the packet. Addressing is used to identify a device on the internet.
-Routing: Routing is the major component of the network layer; it determines the optimal
path among the multiple paths from source to destination.
-Packetizing: The network layer receives segments from the upper layer and converts them
into packets. This process is known as packetizing and is performed by the Internet Protocol (IP).
 4. Transport Layer: Layer 4 ensures that messages are delivered in the order in which they
are sent, with no duplication of data. The main responsibility of the transport layer is to
transfer the data completely. It receives data from the upper layer and converts it into
smaller units known as segments. This layer can be termed an end-to-end layer, as it
provides a logical point-to-point connection between source and destination to deliver
data reliably.
 Functions of the Transport Layer:
-Service-point addressing: Computers run several programs simultaneously, so data must be delivered not only
from one computer to another but also from one process to another. The transport layer adds a header containing
an address known as a service-point address, or port address. While the network layer transmits data from one
computer to another, the transport layer delivers the message to the correct process.
-Segmentation and reassembly: When the transport layer receives a message from the upper layer, it divides the
message into multiple segments, each assigned a sequence number that uniquely identifies it. When the message
arrives at the destination, the transport layer reassembles it based on the sequence numbers.
-Connection control: The transport layer provides two services: connection-oriented and connectionless. A
connectionless service treats each segment as an individual packet, and the segments may travel over different
routes to reach the destination. A connection-oriented service establishes a connection with the transport layer at
the destination machine before delivering the packets; all packets then travel over a single route.
-Flow control: The transport layer is also responsible for flow control, but it is performed end to end rather than
across a single link.
-Error control: The transport layer is also responsible for error control, again performed end to end rather than
across a single link. The sending transport layer ensures that the message reaches the destination without error.
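Segmentation and reassembly by sequence number can be illustrated with a short Python sketch (sizes and payloads are arbitrary; a real transport protocol such as TCP also handles acknowledgments and retransmission):

```python
import random

def segment(message: bytes, size: int):
    """Split a message into (sequence number, chunk) pairs."""
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(segments):
    """Reorder segments by sequence number and rebuild the message."""
    return b"".join(chunk for _, chunk in sorted(segments))

msg = b"the transport layer delivers this message reliably"
segs = segment(msg, 8)
random.shuffle(segs)                 # simulate out-of-order arrival
print(reassemble(segs) == msg)       # True: original message recovered
```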
5. Session Layer: The session layer is used to establish, maintain and synchronize the interaction between
communicating devices.
Functions of the Session layer:
-Dialog control: The session layer acts as a dialog controller, creating a dialog between two processes; in other
words, it allows communication between two processes in either half-duplex or full-duplex mode.
-Synchronization: The session layer adds checkpoints when transmitting data in a sequence. If an error occurs in
the middle of a transmission, the transmission resumes from the last checkpoint. This process is known as
synchronization and recovery.
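The checkpointing idea can be sketched as follows (a deliberately simplified Python model; the checkpoint interval and failure point are invented for the example):

```python
def send_with_checkpoints(items, checkpoint_every, fail_at=None):
    """Send items, committing a checkpoint every few items.

    On failure, return what was sent plus the last committed checkpoint,
    which is where retransmission should resume.
    """
    sent, last_checkpoint = [], 0
    for i, item in enumerate(items):
        if i == fail_at:
            return sent, last_checkpoint      # error: report the resume point
        sent.append(item)
        if (i + 1) % checkpoint_every == 0:
            last_checkpoint = i + 1           # commit a checkpoint
    return sent, len(items)

data = list(range(10))
sent, resume_from = send_with_checkpoints(data, checkpoint_every=3, fail_at=7)
print(resume_from)                         # 6 -- resume here, not from 0
sent2, _ = send_with_checkpoints(data[resume_from:], checkpoint_every=3)
print(sent[:resume_from] + sent2 == data)  # True: full message delivered
```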
 6. Presentation Layer: The presentation layer is mainly concerned with the syntax and semantics
of the information exchanged between two systems. It acts as a data translator for the network.
This layer is a part of the operating system that converts data from one presentation format to
another. The presentation layer is also known as the syntax layer.
Functions of the Presentation layer:
-Translation: Processes in two systems exchange information in the form of character strings,
numbers and so on. Different computers use different encoding methods; the presentation layer
handles interoperability between them by converting data from a sender-dependent format into
a common format, and from the common format into a receiver-dependent format at the
receiving end.
-Encryption: Encryption is needed to maintain privacy. Encryption converts the sender's
information into another form before the resulting message is sent over the network.
-Compression: Data compression reduces the number of bits to be transmitted. It is especially
important for multimedia such as text, audio and video.
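Two of these duties, translation to a common encoding and compression, can be demonstrated with the Python standard library (zlib is used here only as a stand-in for whatever codec a real stack would negotiate):

```python
import zlib

message = "présentation layer demo " * 20   # note the non-ASCII character

encoded = message.encode("utf-8")           # translation: sender-side text
                                            # converted to a common format
compressed = zlib.compress(encoded)         # compression: fewer bits on the wire
print(len(encoded) > len(compressed))       # True: compression saved bits

restored = zlib.decompress(compressed).decode("utf-8")
print(restored == message)                  # True: lossless round trip
```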
 7. Application Layer: The application layer serves as a window for users and application
processes to access network services. It handles issues such as network transparency and
resource allocation. The application layer is not an application itself, but it performs
application-layer functions and provides network services to end users.
Functions of the Application layer:
• File transfer, access, and management (FTAM): The application layer allows
a user to access files on a remote computer, to retrieve files from it,
and to manage files on it.

• Mail services: The application layer provides facilities for email forwarding
and storage.

• Directory services: The application layer provides distributed database sources
and is used to provide global information about various objects.
TCP/IP model
Protocols

 HTTP: Hypertext Transfer Protocol (HTTP) is an application-layer protocol for
transmitting hypermedia documents, such as HTML. It was designed for
communication between web browsers and web servers. By
default, HTTP uses port 80.
 HTTPS: Hypertext Transfer Protocol Secure is an extension of the Hypertext
Transfer Protocol. It is used for secure communication over a computer
network, and is widely used on the Internet. In HTTPS, the communication
protocol is encrypted using Transport Layer Security or, formerly, Secure
Sockets Layer. HTTPS uses port 443.
 DNS: The Domain Name System (DNS) is
a hierarchical and decentralized naming system for computers, services, or
other resources connected to the Internet or a private network. It translates
readily memorized domain names into the numerical IP
addresses needed for locating and identifying computer services and
devices with the underlying network protocols. DNS uses port 53.
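To make the HTTP description above concrete, here is the plain-text request a browser would send for a page (a sketch; the host and path are illustrative, and a real client adds more headers):

```python
def build_get_request(host: str, path: str = "/") -> bytes:
    """Build a minimal HTTP/1.1 GET request as raw bytes."""
    lines = [
        f"GET {path} HTTP/1.1",
        f"Host: {host}",
        "Connection: close",
        "",        # blank line ends the header block
        "",
    ]
    return "\r\n".join(lines).encode("ascii")

request = build_get_request("example.com", "/index.html")
print(request.decode().splitlines()[0])   # GET /index.html HTTP/1.1
# Over TCP this is sent to port 80 for HTTP; for HTTPS the same request
# travels inside a TLS session to port 443.
```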
 TCP: The Transmission Control Protocol (TCP) is a transport protocol that is used on top of
IP to ensure reliable transmission of packets. TCP includes mechanisms to solve many of
the problems that arise from packet-based messaging, such as lost packets, out of order
packets, duplicate packets, and corrupted packets. It is a connection oriented protocol.
 UDP: UDP (User Datagram Protocol) is a communications protocol primarily used
for establishing low-latency, loss-tolerating connections between applications on the
internet. It speeds up transmission by allowing data to be transferred before the
receiving party has agreed to the exchange (it is connectionless).
 IP: IP stands for "Internet Protocol," the set of rules governing the format of data
sent over the internet or a local network. In essence, IP addresses are the identifiers that allow
information to be sent between devices on a network: they contain location information
and make devices accessible for communication.
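UDP's connectionless behaviour can be seen with two sockets on the loopback interface (a minimal local sketch; note there is no handshake, the sender simply fires a datagram at the receiver's address):

```python
import socket

# Receiver: bind a datagram socket; port 0 lets the OS pick a free port.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
addr = receiver.getsockname()

# Sender: no connect() or handshake is needed before transmitting.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello over UDP", addr)

payload, peer = receiver.recvfrom(1024)
print(payload)          # b'hello over UDP'
sender.close()
receiver.close()
```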
IP Addressing

 Communication between hosts can happen only if they can identify each other on the network.
In a single collision domain (where every packet sent on the segment by one host is heard by
every other host) hosts can communicate directly via MAC address.
 A MAC address is a factory-coded 48-bit hardware address which can also uniquely identify a
host. But if a host wants to communicate with a remote host, i.e. one not in the same segment or
not logically connected, then some means of addressing is required to identify the remote host
uniquely. A logical address is given to all hosts connected to the Internet, and this logical address is
called the Internet Protocol address.
 Internet Protocol is one of the major protocols in the TCP/IP protocol suite. This protocol works at
the network layer of the OSI model and at the Internet layer of the TCP/IP model. Thus this
protocol has the responsibility of identifying hosts based upon their logical addresses and of
routing data among them over the underlying network.
 IP provides a mechanism to uniquely identify hosts by an IP addressing scheme. IP uses best
effort delivery, i.e. it does not guarantee that packets would be delivered to the destined host,
but it will do its best to reach the destination. Internet Protocol version 4 uses 32-bit logical
address.
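The 32-bit structure of an IPv4 address can be inspected with Python's standard ipaddress module (the addresses below are from the private 192.168.0.0/16 range, chosen only for illustration):

```python
import ipaddress

addr = ipaddress.IPv4Address("192.168.1.10")
print(int(addr))                      # 3232235786: the address as one 32-bit number
print(int(addr) < 2 ** 32)            # True: IPv4 addresses are 32 bits

# A /24 network splits the address into 24 network bits and 8 host bits.
net = ipaddress.IPv4Network("192.168.1.0/24")
print(addr in net)                    # True: host 10 on this network
print(net.num_addresses)              # 256: 2**8 possible host-part values
```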
Information system
 An information system is an integrated set of components for collecting, storing, and processing data and for providing information, knowledge,
and digital products. Business firms and other organizations rely on information systems to carry out and manage their operations, interact with
their customers and suppliers, and compete in the marketplace. Information systems are used to run interorganizational supply chains and
electronic markets. For instance, corporations use information systems to process financial accounts, to manage their human resources, and to
reach their potential customers with online promotions. Many major companies are built entirely around information systems. These
include eBay, a largely auction marketplace; Amazon, an expanding electronic mall and provider of cloud computing services; Alibaba, a
business-to-business e-marketplace; and Google, a search engine company that derives most of its revenue from keyword advertising
on Internet searches. Governments deploy information systems to provide services cost-effectively to citizens. Digital goods—such as electronic
books, video products, and software—and online services, such as gaming and social networking, are delivered with information systems.
Individuals rely on information systems, generally Internet-based, for conducting much of their personal lives: for socializing, study, shopping,
banking, and entertainment.
 As major new technologies for recording and processing information were invented over the millennia, new capabilities appeared, and
people became empowered. The invention of the printing press by Johannes Gutenberg in the mid-15th century and the invention of a
mechanical calculator by Blaise Pascal in the 17th century are but two examples. These inventions led to a profound revolution in the ability to
record, process, disseminate, and reach for information and knowledge. This led, in turn, to even deeper changes in individual lives, business
organization, and human governance.
 The first large-scale mechanical information system was Herman Hollerith’s census tabulator. Invented in time to process the 1890 U.S. census,
Hollerith’s machine represented a major step in automation, as well as an inspiration to develop computerized information systems.
 One of the first computers used for such information processing was the UNIVAC I, installed at the U.S. Bureau of the Census in 1951 for
administrative use and at General Electric in 1954 for commercial use. Beginning in the late 1970s, personal computers brought some of the
advantages of information systems to small businesses and to individuals. Early in the same decade the Internet began its expansion as the
global network of networks. In 1991 the World Wide Web, invented by Tim Berners-Lee as a means to access the interlinked information stored
in the globally dispersed computers connected by the Internet, began operation and became the principal service delivered on the network.
The global penetration of the Internet and the Web has enabled access to information and other resources and facilitated the forming of
relationships among people and organizations on an unprecedented scale. The progress of electronic commerce over the Internet has
resulted in a dramatic growth in digital interpersonal communications (via e-mail and social networks), distribution of products (software, music,
e-books, and movies), and business transactions (buying, selling, and advertising on the Web). With the worldwide spread
of smartphones, tablets, laptops, and other computer-based mobile devices, all of which are connected by wireless communication networks,
information systems have been extended to support mobility as the natural human condition.
 As information systems enabled more diverse human activities, they exerted a profound influence over society. These systems quickened the
pace of daily activities, enabled people to develop and maintain new and often more-rewarding relationships, affected the structure and mix
of organizations, changed the type of products bought, and influenced the nature of work. Information and knowledge became vital
economic resources. Yet, along with new opportunities, the dependence on information systems brought new threats. Intensive
industry innovation and academic research continually develop new opportunities while aiming to contain the threats.
Components of information systems
1. Computer hardware
2. Computer software
3. Telecommunications
4. Databases and data warehouses
5. Human resources and procedures
Types of information systems
Information systems support operations, knowledge work, and management in
organizations. (The overall structure of organizational information systems is shown in
the figure.) Functional information systems that support a specific organizational function,
such as marketing or production, have been supplanted in many cases by cross-functional
systems built to support complete business processes, such as order processing
or employee management. Such systems can be more effective in the development
and delivery of the firm’s products and can be evaluated more closely with respect to
the business outcomes. The information-system categories described here may
be implemented with a great variety of application programs.
 Information
systems consist of three layers: operational support, support of
knowledge work, and management support. Operational
support forms the base of an information system and contains
various transaction processing systems for designing, marketing,
producing, and delivering products and services. Support of
knowledge work forms the middle layer; it contains subsystems
for sharing information within an organization. Management
support, forming the top layer, contains subsystems for
managing and evaluating an organization's resources and
goals.
Operational support and enterprise systems
Transaction processing systems support the operations through
which products are designed, marketed, produced, and delivered.
In larger organizations, transaction processing is frequently
accomplished with large integrated systems known as enterprise
systems. In this case, the information systems that support various
functional units—sales and marketing, production, finance, and
human resources—are integrated into an enterprise resource
planning (ERP) system, the principal kind of enterprise system. ERP
systems support the value chain—that is, the entire sequence of
activities or processes through which a firm adds value to its
products. For example, an individual or another business may submit
a custom order over the Web that automatically initiates just-in-time
production to the customer’s specifications through an approach
known as mass customization. This involves sending orders from the
customers to the firm’s warehouses and perhaps to suppliers to
deliver input materials just in time for a batched custom production
run. Financial accounts are updated accordingly, and
delivery logistics and billing are initiated.
 Along with helping to integrate a firm’s own value chain, transaction processing systems can also serve to
integrate the overall supply chain of which the organization is a part. This includes all firms involved in
designing, producing, marketing, and delivering the goods and services—from raw materials to the final
delivery of the product. A supply chain management (SCM) system manages the flow of products, data,
money, and information throughout the entire supply chain, which starts with the suppliers of raw materials,
runs through the intermediate tiers of the processing companies, and ends with the distributors and retailers.
For example, purchasing an item at a major retail store generates more than a cash register receipt: it also
automatically sends a restocking order to the appropriate supplier, which in turn may call for orders to the
supplier’s suppliers. With an SCM system, suppliers can also access a retailer’s inventory database over the
Web to schedule efficient and timely deliveries in appropriate quantities.
 The third type of enterprise system, customer relationship management (CRM), supports dealing with the
company’s customers in marketing, sales, service, and new product development. A CRM system gives a
business a unified view of each customer and its dealings with that customer, enabling a consistent
and proactive relationship. In cocreation initiatives, the customers may be involved in the development of
the company’s new products.
 Many transaction processing systems support electronic commerce over the Internet. Among these are
systems for online shopping, banking, and securities trading. Other systems deliver information, educational
services, and entertainment on demand. Yet other systems serve to support the search for products with
desired attributes (for example, keyword search on search engines), price discovery (via an auction, for
example), and delivery of digital products (such as software, music, movies, or greeting cards). Social
network sites, such as Facebook and LinkedIn, are a powerful tool for supporting customer communities and
individuals as they articulate opinions, evolve new ideas, and are exposed to promotional messages. A
growing array of specialized services and information-based products are offered by various organizations on
the Web, as an infrastructure for electronic commerce has emerged on a global scale.
 Transaction processing systems accumulate the data in databases and data warehouses that are necessary
for the higher-level information systems. Enterprise systems also provide software modules needed to perform
many of these higher-level functions.
Support of knowledge work
A large proportion of work in an information society involves manipulating abstract information and knowledge (understood in this context as an
organized and comprehensive structure of facts, relationships, theories, and insights) rather than directly processing, manufacturing, or
delivering tangible materials. Such work is called knowledge work. Three general categories of information systems support such knowledge work:
professional support systems, collaboration systems, and knowledge management systems.
Professional support systems
Professional support systems offer the facilities needed to perform tasks specific to a given profession. For example, automotive engineers
use computer-aided engineering (CAE) software together with virtual reality systems to design and test new models as electronic prototypes for
fuel efficiency, handling, and passenger protection before producing physical prototypes, and later they use CAE in the design and analysis of
physical tests. Biochemists use specialized three-dimensional modeling software to visualize the molecular structure and probable effect of new
drugs before investing in lengthy clinical tests. Investment bankers often employ financial software to calculate the expected rewards and
potential risks of various investment strategies. Indeed, specialized support systems are now available for most professions.
Collaboration systems
The main objectives of collaboration systems are to facilitate communication and teamwork among the members of an organization and across
organizations. One type of collaboration system, known as a workflow system, is used to route relevant documents automatically to all
appropriate individuals for their contributions.
Development, pricing, and approval of a commercial insurance policy is a process that can benefit from such a system. Another category of
collaboration systems allows different individuals to work simultaneously on a shared project. Known as groupware, such systems accomplish this
by allowing controlled shared access, often over an intranet, to the work objects, such as business proposals, new designs, or digital products in
progress. The collaborators can be located anywhere in the world, and, in some multinational companies, work on a project continues 24 hours a
day.
Other types of collaboration systems include enhanced e-mail and videoconferencing systems, sometimes with telepresence using avatars of the
participants. Yet another type of collaboration software, known as wiki, enables multiple participants to add and edit content. (Some online
encyclopaedias are produced on such platforms.) Collaboration systems can also be established on social network platforms or virtual life
systems. In the open innovation initiative, members of the public, as well as existing and potential customers, can be drawn in, if desired, to
enable the cocreation of new products or projection of future outcomes.
Knowledge management systems
Knowledge management systems provide a means to assemble and act on the knowledge accumulated throughout an organization. Such
knowledge may include the texts and images contained in patents, design methods, best practices, competitor intelligence, and similar sources,
with the elaboration and commentary included. Placing the organization’s documents and communications in an indexed and cross-referenced
form enables rich search capabilities. Numerous application programs, such as Microsoft’s SharePoint, exist to facilitate the implementation of
such systems. Organizational knowledge is often tacit, rather than explicit, so these systems must also direct users to members of the organization
with special expertise.
 Management support
A large category of information systems comprises those designed to support the management of an organization. These systems rely on the data obtained by transaction
processing systems, as well as on data and information acquired outside the organization (on the Web, for example) and provided by business partners, suppliers, and
customers.
Management reporting systems
Information systems support all levels of management, from those in charge of short-term schedules and budgets for small work groups to those concerned with long-term
plans and budgets for the entire organization. Management reporting systems provide routine, detailed, and voluminous information reports specific to each manager’s areas
of responsibility. These systems are typically used by first-level supervisors. Generally, such reports focus on past and present activities, rather than projecting future
performance. To prevent information overload, reports may be automatically sent only under exceptional circumstances or at the specific request of a manager.
Decision support systems and business intelligence
All information systems support decision making, however indirectly, but decision support systems are expressly designed for this purpose. As these systems are increasingly
being developed to analyze massive collections of data (known as big data), they are becoming known as business intelligence, or business analytics, applications. The two
principal varieties of decision support systems are model-driven and data-driven. In a model-driven decision support system, a preprogrammed model is applied to a relatively
limited data set, such as a sales database for the present quarter. During a typical session, an analyst or sales manager will conduct a dialog with this decision support system
by specifying a number of what-if scenarios. For example, in order to establish a selling price for a new product, the sales manager may use a marketing decision support
system. It contains a model relating various factors—the price of the product, the cost of goods, and the promotion expense in various media—to the projected sales volume
over the first five years on the market. By supplying different product prices to the model, the manager can compare predicted results and select the most profitable selling
price. The primary objective of data-driven business intelligence systems is to analyze large pools of data, accumulated over long periods of time in data warehouses, in a
process known as data mining. Data mining aims to discover significant patterns, such as sequences (buying a new house, followed by a new dinner table), clusters, and
correlations (large families and van sales), with which decisions can be made. Predictive analytics attempts to forecast future outcomes based on the discovered trends.
Data-driven decision support systems include a variety of statistical models and may rely on various artificial intelligence techniques, such as expert systems, neural networks,
and machine learning. In addition to mining numeric data, text mining is conducted on large aggregates of unstructured data, such as the contents of social media that
include social networks, wikis, blogs, and microblogs. As used in electronic commerce, for example, text mining helps in finding buying trends, targeting advertisements, and
detecting fraud. An important variety of decision support systems enables a group of decision makers to work together without necessarily being in the same place at the
same time. These group decision systems include software tools for brainstorming and reaching consensus.
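The what-if dialog described above can be sketched in a few lines of Python. Everything below is an illustrative assumption, not a real marketing DSS: the linear demand curve, the 5% yearly growth, and all parameter names and figures are invented for the sketch.

```python
# Hypothetical model-driven DSS sketch: relate price, cost of goods, and
# promotion expense to projected sales volume over five years, then run
# what-if scenarios over candidate prices. All numbers are illustrative.

def projected_profit(price, unit_cost, promo_expense,
                     base_demand=10_000, elasticity=150.0):
    """Five-year profit under an assumed linear price-demand curve."""
    total = 0.0
    for year in range(1, 6):
        # Assumed demand: shrinks as price rises, grows 5% per year.
        demand = max(0.0, (base_demand - elasticity * price) * 1.05 ** (year - 1))
        total += demand * (price - unit_cost) - promo_expense
    return total

# The "what-if" dialog: supply different product prices to the model,
# compare predicted results, and select the most profitable price.
candidates = [20, 30, 40, 50]
best = max(candidates, key=lambda p: projected_profit(p, unit_cost=15,
                                                      promo_expense=5_000))
print(best)  # -> 40
```

An analyst would vary `promo_expense` or the cost of goods the same way, re-running the model for each scenario rather than solving it analytically.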
Another category, geographic information systems, can help analyze and display data by using digitized maps. Digital mapping of various regions is a continuing activity of
numerous business firms. Such data visualization supports rapid decision making. By looking at a geographic distribution of mortgage loans, for example, one can easily
establish a pattern of discrimination.
Executive information systems
Executive information systems make a variety of critical information readily available in a highly summarized and convenient form, typically via a graphical digital dashboard.
Senior managers characteristically employ many informal sources of information, however, so that formal, computerized information systems are only of partial assistance.
Nevertheless, this assistance is important for the chief executive officer, senior and executive vice presidents, and the board of directors to monitor the performance of the
company, assess the business environment, and develop strategic directions for the future. In particular, these executives need to compare their organization’s performance
with that of its competitors and investigate general economic trends in regions or countries. Often individualized and relying on multiple media formats, executive information
systems give their users an opportunity to “drill down” from summary information to increasingly focused details.
Raw Data generation
 Raw data is unprocessed computer data. This information may be stored in a
file, or may just be a collection of numbers and characters stored
somewhere on the computer's hard disk. For example, information entered into a
database is often called raw data. The data can either be entered by a user or
generated by the computer itself. Because it has not been processed by the
computer in any way, it is considered to be "raw data." To continue the culinary
analogy, data that has been processed by the computer is sometimes referred
to as "cooked data."
 Therefore, the main difference between raw data and processed data is that raw
data is an unorganized mixture of different information, whereas processed
data consists of the relevant and valuable information already extracted from it.
 Common tools for turning raw data into information include machine learning, Excel, data mining, etc.
 Raw data typically refers to tables of data where each row contains an
observation and each column represents a variable that describes some
property of each observation. Data in this format is sometimes referred to as tidy
data, flat data, primary data, atomic data, and unit record data.
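A minimal sketch of such tidy/raw data in Python, reusing rows from the student table earlier in these notes (each row is one observation, each column one variable; the exact values are illustrative):

```python
import csv
import io

# Tidy data: one observation per row, one variable per column.
# No cell packs multiple values together.
raw = """id,branch,subject,phone
A,CSE,C,123
A,CSE,C++,246
B,CSE,C,345
"""

rows = list(csv.DictReader(io.StringIO(raw)))
print(rows[0]["subject"])  # -> C
```

Note how the multi-valued cells of the original table (student A with subjects C and C++) become two separate rows; that reshaping is what makes the data "tidy" and easy to process.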
Impact of ICT
 ICT has contributed a lot to changing our everyday life: letters to e-mail, market shopping to on-line
shopping, classroom learning to e-learning, etc. Its effects are felt in home and domestic
activities, social networking, education, health, commerce, banking, and employment.
 Daily routine Management
 Social Relationship
 Information Sharing
 Communication
 Usage of Free time
 Children’s Education
 Self Employment
 Paperless Environment
 Developing Health Literacy
 Reduced face-to-face interaction
 Social Disconnect
 Reduced physical activity/Health Problems
Applications of ICT
 Seven major ICT tools are e-group, e-mail, fax, Internet, Intranet, mobile phone, and video conference.
 ICT is used in most fields, such as E-Commerce, E-Governance, Banking, Agriculture,
Education, Medicine, Defense, Transport, etc.
Evolution of Internet
 The Internet started in the 1960s as a way for government researchers to share information.
Computers in the '60s were large and immobile and in order to make use of information
stored in any one computer, one had to either travel to the site of the computer or have
magnetic computer tapes sent through the conventional postal system.
 Another catalyst in the formation of the Internet was the heating up of the Cold War. The
Soviet Union's launch of the Sputnik satellite spurred the U.S. Defense Department to
consider ways information could still be disseminated even after a nuclear attack. This
eventually led to the formation of the ARPANET (Advanced Research Projects Agency
Network), the network that ultimately evolved into what we now know as the Internet.
ARPANET was a great success but membership was limited to certain academic and
research organizations who had contracts with the Defense Department. In response to
this, other networks were created to provide information sharing.
 January 1, 1983 is considered the official birthday of the Internet. Prior to this, the various
computer networks did not have a standard way to communicate with each other. A new
communications protocol was established called Transmission Control
Protocol/Internet Protocol (TCP/IP). This allowed different kinds of computers on different networks to "talk" to
each other. ARPANET and the Defense Data Network officially changed to the TCP/IP
standard on January 1, 1983, hence the birth of the Internet. All networks could now be
connected by a universal language.
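That "universal language" can be sketched with a minimal Python example of two programs talking over TCP/IP. Both endpoints run on the loopback interface here purely for illustration; on a real network the client would use the server's address instead of 127.0.0.1.

```python
import socket
import threading

def serve(listener):
    # Accept one connection, echo the received data back uppercased.
    conn, _ = listener.accept()
    with conn:
        conn.sendall(conn.recv(1024).upper())

listener = socket.create_server(("127.0.0.1", 0))   # port 0: pick any free port
port = listener.getsockname()[1]
threading.Thread(target=serve, args=(listener,), daemon=True).start()

with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello, arpanet")
    reply = client.recv(1024)
print(reply.decode())  # -> HELLO, ARPANET
```

The point of TCP/IP is that neither end needs to know what kind of computer or underlying network the other is using; both just exchange bytes over the standard protocol.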
 In 1973, the U.S. Defense Advanced Research Projects Agency (DARPA) initiated a research program to investigate techniques and technologies for interlinking packet
networks of various kinds. The objective was to develop communication protocols which would allow networked computers to communicate transparently across multiple,
linked packet networks. This was called the Internetting project and the system of networks which emerged from the research was known as the “Internet.” The system of
protocols which was developed over the course of this research effort became known as the TCP/IP Protocol Suite, after the two initial protocols developed: Transmission
Control Protocol (TCP) and Internet Protocol (IP).
 In 1986, the U.S. National Science Foundation (NSF) initiated the development of the NSFNET which, today, provides a major backbone communication service for the
Internet. With its 45 megabit per second facilities, the NSFNET carries on the order of 12 billion packets per month between the networks it links. The National Aeronautics and
Space Administration (NASA) and the U.S. Department of Energy contributed additional backbone facilities in the form of the NSINET and ESNET respectively. In Europe,
major international backbones such as NORDUNET and others provide connectivity to over one hundred thousand computers on a large number of networks. Commercial
network providers in the U.S. and Europe are beginning to offer Internet backbone and access support on a competitive basis to any interested parties.
 “Regional” support for the Internet is provided by various consortium networks and “local” support is provided through each of the research and educational institutions.
Within the United States, much of this support has come from the federal and state governments, but a considerable contribution has been made by industry. In Europe and
elsewhere, support arises from cooperative international efforts and through national research organizations. During the course of its evolution, particularly after 1989, the
Internet system began to integrate support for other protocol suites into its basic networking fabric. The present emphasis in the system is on multiprotocol interworking, and
in particular, with the integration of the Open Systems Interconnection (OSI) protocols into the architecture.
 Both public domain and commercial implementations of the roughly 100 protocols of TCP/IP protocol suite became available in the 1980’s. During the early 1990’s, OSI
protocol implementations also became available and, by the end of 1991, the Internet had grown to include some 5,000 networks in over three dozen countries, serving over
700,000 host computers used by over 4,000,000 people.
 A great deal of support for the Internet community has come from the U.S. Federal Government, since the Internet was originally part of a federally-funded research
program and, subsequently, has become a major part of the U.S. research infrastructure. During the late 1980’s, however, the population of Internet users and network
constituents expanded internationally and began to include commercial facilities. Indeed, the bulk of the system today is made up of private networking facilities in
educational and research institutions, businesses and in government organizations across the globe.
 The Coordinating Committee for Intercontinental Networks (CCIRN), which was organized by the U.S. Federal Networking Council (FNC) and the European Reseaux
Associees pour la Recherche Europeenne (RARE), plays an important role in the coordination of plans for government- sponsored research networking. CCIRN efforts have
been a stimulus for the support of international cooperation in the Internet environment.
Internet Technical Evolution
 Over its fifteen year history, the Internet has functioned as a collaboration among cooperating parties. Certain key functions have been critical for its operation, not the least
of which is the specification of the protocols by which the components of the system operate. These were originally developed in the DARPA research program mentioned
above, but in the last five or six years, this work has been undertaken on a wider basis with support from government agencies in many countries, industry and the academic
community. The Internet Activities Board (IAB) was created in 1983 to guide the evolution of the TCP/IP Protocol Suite and to provide research advice to the Internet
community.
 During the course of its existence, the IAB has reorganized several times. It now has two primary components: the Internet Engineering Task Force and the Internet Research
Task Force. The former has primary responsibility for further evolution of the TCP/IP protocol suite, its standardization with the concurrence of the IAB, and the integration of
other protocols into Internet operation (e.g. the Open Systems Interconnection protocols). The Internet Research Task Force continues to organize and explore advanced
concepts in networking under the guidance of the Internet Activities Board and with support from various government agencies.
 A secretariat has been created to manage the day-to-day function of the Internet Activities Board and Internet Engineering Task Force. IETF
meets three times a year in plenary and its approximately 50 working groups convene at intermediate times by electronic mail,
teleconferencing and at face-to-face meetings. The IAB meets quarterly face-to-face or by videoconference and at intervening times by
telephone, electronic mail and computer-mediated conferences.
 Two other functions are critical to IAB operation: publication of documents describing the Internet and the assignment and recording of
various identifiers needed for protocol operation. Throughout the development of the Internet, its protocols and other aspects of its operation
have been documented first in a series of documents called Internet Experiment Notes and, later, in a series of documents called Requests
for Comment (RFCs). The latter were used initially to document the protocols of the first packet switching network developed by DARPA, the
ARPANET, beginning in 1969, and have become the principal archive of information about the Internet. At present, the publication function is
provided by an RFC editor.
 The recording of identifiers is provided by the Internet Assigned Numbers Authority (IANA), which has delegated one part of this responsibility to
an Internet Registry which acts as a central repository for Internet information and which provides central allocation of network and
autonomous system identifiers, in some cases to subsidiary registries located in various countries. The Internet Registry (IR) also provides central
maintenance of the Domain Name System (DNS) root database which points to subsidiary distributed DNS servers replicated throughout the
Internet. The DNS distributed database is used, inter alia, to associate host and network names with their Internet addresses and is critical to
the operation of the higher level TCP/IP protocols including electronic mail.
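The name-to-address association the DNS provides can be seen from any host's resolver. A minimal Python sketch, resolving the local host name so that no network access is assumed; a public name such as www.example.org would instead be looked up through the distributed DNS hierarchy described above:

```python
import socket

# Ask the resolver to map a host name to an IPv4 address. "localhost"
# is resolved locally (e.g. via /etc/hosts), so no DNS traffic is needed;
# public names go through the DNS root and its subsidiary servers.
addr = socket.gethostbyname("localhost")
print(addr)  # typically 127.0.0.1
```

Higher-level protocols such as electronic mail rely on exactly this mapping before any connection is made.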
 There are a number of Network Information Centers (NICs) located throughout the Internet to serve its users with documentation, guidance,
advice and assistance. As the Internet continues to grow internationally, the need for high quality NIC functions increases. Although the initial
community of users of the Internet were drawn from the ranks of computer science and engineering, its users now comprise a wide range of
disciplines in the sciences, arts, letters, business, military and government administration.
Related Networks
 In 1980-81, two other networking projects, BITNET and CSNET, were initiated. BITNET adopted the IBM RSCS protocol suite and featured direct
leased line connections between participating sites. Most of the original BITNET connections linked IBM mainframes in university data centers.
This rapidly changed as protocol implementations became available for other machines. From the beginning, BITNET has been multi-
disciplinary in nature with users in all academic areas. It has also provided a number of unique services to its users (e.g., LISTSERV). Today,
BITNET and its parallel networks in other parts of the world (e.g., EARN in Europe) have several thousand participating sites. In recent years,
BITNET has established a backbone which uses the TCP/IP protocols with RSCS-based applications running above TCP.
 CSNET was initially funded by the National Science Foundation (NSF) to provide networking for university, industry and government computer
science research groups. CSNET used the Phonenet MMDF protocol for telephone-based electronic mail relaying and, in addition, pioneered
the first use of TCP/IP over X.25 using commercial public data networks. The CSNET name server provided an early example of a white pages
directory service and this software is still in use at numerous sites. At its peak, CSNET had approximately 200 participating sites and
international connections to approximately fifteen countries.
 In 1987, BITNET and CSNET merged to form the Corporation for Research and Educational Networking (CREN). In the Fall of 1991, CSNET
service was discontinued having fulfilled its important early role in the provision of academic networking service. A key feature of CREN is that
its operational costs are fully met through dues paid by its member organizations.
 1970 ARPANET - 15 nodes
 1972 first email
 1982 TCP/IP becomes internet standard
 Transmission Control Protocol/Internet Protocol
 1984 ARPANET - 1,000 nodes
 1986 NSF-Net backbone on ARPANET
 1987 ARPANET - 10,000 nodes
 1988 - businesses begin to connect to system for research purposes
 1989 ARPANET - 100,000 nodes
 1989 link email between CompuServe and ARPANET
 1990 ARPANET becomes the Internet
CompuServe
 1969 started in Cleveland with single computer
 1979 provided first email
 1980 started national service
 Mid-1980s largest online service
 1995 3 Million users
 1997 purchased by AOL
Prodigy
 1986 pilot in Atlanta, Hartford, San Francisco
 1988 national service launched
 1994 1st to offer WWW access
 1999 Prodigy Classic discontinued (209,000 members)
AOL (America Online)
 1991 AOL for DOS
 1993 AOL for Windows
 1997 bought CompuServe
 1999 10 Million users
 Estimated to have distributed over 1 Billion discs of over 1,000 different disk/CD styles
ARPANET
 The Advanced Research Projects Agency Network (ARPANET) was the first wide-area packet-switched
network with distributed control and one of the first networks to implement the TCP/IP protocol suite. Both
technologies became the technical foundation of the Internet. The ARPANET was established by
the Advanced Research Projects Agency (ARPA) of the United States Department of Defense.
 It was first used in 1969 and finally decommissioned in 1990. ARPANET's main use was for academic and
research purposes.
 Many of the protocols used by computer networks today were developed for ARPANET, and it is considered
the forerunner of the modern internet.
 Developments leading to ARPANET
 ARPANET and the subsequent computer networks leading to the internet were not the product of a single
individual or organization, nor were they formed at one time. Instead, the ideas and initial research work of
many people over years of time was used to form the basis of ARPANET and to build it to become the
forerunner of the internet.
 In the 1960s, computers were large mainframe systems. They were very expensive and were only owned by
large companies, universities and governments. Users would sit at dedicated terminals, such
as teletype machines, and run programs on the connected mainframe. Connections between computers
were made over dedicated links. These systems were highly centralized and fault-prone.
 This was during the height of the Cold War. The U.S. military was interested in creating
computer networks that could continue to function after having portions removed,
such as in the case of a nuclear strike. Similarly, universities were looking to develop a
network that could be fault-tolerant over unreliable connections and could be used
to share data and computing resources between users at different locations.
 In the early 1960s, Paul Baran, working for the U.S. think tank Rand Corporation,
developed the concept of distributed adaptive message block switching. This would
enable small groups of data to be sent along differing paths to the destination. This
idea eventually became packet communication that underlies almost all data
communication today. At that time, though, it was not implemented.
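The core idea of message block switching can be shown with a toy sketch (this is only an illustration of the concept, not Baran's actual design): split a message into small numbered blocks, let them travel independently and arrive out of order, and reassemble them by sequence number at the destination.

```python
# Toy packetization sketch: each block carries a sequence number so the
# receiver can rebuild the message regardless of arrival order.

def packetize(message, size=8):
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    return "".join(data for _, data in sorted(packets))

packets = packetize("distributed adaptive message block switching")
packets.reverse()                  # simulate out-of-order arrival
print(reassemble(packets))         # -> distributed adaptive message block switching
```

Because each block is self-describing, the network can route different blocks along different paths, which is what makes the scheme robust to failed links.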
 Joseph C.R. Licklider became the director of ARPA's Information Processing
Techniques Office (IPTO) in 1962. He was a major proponent of human-computer
interaction and using computers to help people make better decisions. His influence
lead ARPA to develop its network and other innovations, such as graphical user
interfaces. In 1966, Robert (Bob) Taylor became the director of IPTO. He credits the
idea of ARPANET to the fact that he had three different computer terminals
connected to three mainframe computers in his office that he would need to move
between. This led to the obvious question: Why can't one terminal be used for any
computer?
History of ARPANET
 Development of ARPANET began in 1966. Several standards were developed. Network Control Program (NCP) would handle communication
between hosts and could support the first commands, Telnet and File Transfer Protocol (FTP). It would use packet-switching technology to
communicate. Interface Message Processor was developed to pass messages between hosts. This can be considered the first
packet gateway or router. Hardware modems were designed and sent out to the participating organizations.
 The first message sent over ARPANET happened on Oct. 29, 1969. Charley Kline, who was a student at the University of California Los Angeles
(UCLA), tried to log in to the mainframe at the Stanford Research Institute (SRI). He successfully typed in the characters L and O, but the computer
crashed when he typed the G of the command LOGIN. They were able to overcome the initial crash, however, and had a successful connection
that same day.
 The first permanent connection between UCLA and SRI was put into place on Nov. 21, 1969. Two more universities joined ARPANET as founding
members on Dec. 5, 1969. These were the University of California, Santa Barbara and University of Utah School of Computing.
 ARPANET grew rapidly in the early 1970s. Many universities and government computers joined the network during this time. In 1975, ARPANET was
declared operational and was used to develop further communications technology. In time, several computers in other countries were also
added using satellite links.
 Many packet-based networks quickly came into operation after ARPANET became popular. These various networks could not communicate with
one another due to the requirements of standardized equipment in the existing networks. Therefore, TCP/IP was developed as a protocol to
enable communication between different networks. It was first put into operation in 1977.
 TCP/IP enabled an interconnected network of networks and is the foundational technology of the internet. On Jan. 1, 1983, TCP/IP replaced NCP
as the underlying packet-switching technology of ARPANET.
 Also, in 1983, ARPANET was divided into two networks between military and civilian use. The word internet was first used to describe the
combination of these two networks.
 The importance of ARPANET diminished as other networks became more dominant in the mid-1980s. The National Science Foundation Network
replaced ARPANET as the backbone of the internet in 1986. Commercial and other network providers also began operating during this time.
 ARPANET was shut down in 1989. It was finally decommissioned in 1990.
Legacy of ARPANET
 ARPANET stands as a major changing point in the development of computer technology. Many underlying internet technologies were first
developed on or for ARPANET. Telnet and FTP protocols were some of the first used on ARPANET, and they are still in use today. TCP/IP was
developed on it. The first network email was sent in 1971 over ARPANET. It also hosted what is considered the first marketing spam email in 1978.
 ARPANET also led to many other networking firsts. List servers, or listservs, became early social networks. Early voice communication protocols
were developed on it. Password protection and data encryption were developed for use over ARPANET.
USENET
1.1. Discussion groups:
• The Usenet is a huge worldwide collection of discussion groups. Each discussion group has a
name, e.g. comp.os.linux.announce, and a collection of messages. These messages, usually called articles, are posted by
readers like you and me who have access to Usenet servers, and are then stored on the Usenet servers.
• This ability to both read and write into a Usenet newsgroup makes the Usenet very different from the bulk of what people
today call ``the Internet.'' The Internet has become a colloquial term to refer to the World Wide Web, and the Web is
(largely) read-only. There are online discussion groups with Web interfaces, and there are mailing lists, but Usenet is
probably more convenient than either of these for most large discussion communities. This is because the articles get
replicated to your local Usenet server, thus allowing you to read and post articles without accessing the global Internet,
something which is of great value for those with slow Internet links. Usenet articles also conserve bandwidth because they
do not come and sit in each member's mailbox, unlike email based mailing lists. This way, twenty members of a mailing list
in one office will have twenty copies of each message copied to their mailboxes. However, with a Usenet discussion group
and a local Usenet server, there's just one copy of each article, and it does not fill up anyone's mailbox.
• Another nice feature of having your own local Usenet server is that articles stay on the server even after you've read them.
You can't accidentally delete a Usenet article the way you can delete a message from your mailbox. This way, a Usenet
server is an excellent way to archive articles of a group discussion on a local server without placing the onus of archiving
on any group member. This makes local Usenet servers very valuable as archives of internal discussion messages within
corporate Intranets, provided the article expiry configuration of the Usenet server software has been set up for sufficiently
long expiry periods.
1.2. How it works, loosely speaking:
 Usenet news works by the reader first firing up a Usenet news program, which in today's GUI world will very likely be something like
Netscape Messenger or Microsoft's Outlook Express. There are a lot of proven, well-designed character-based Usenet news readers,
but a proper review of the user agent software is outside the scope of this HOWTO, so we will just assume that you are using whatever
software you like. The reader then selects a Usenet newsgroup from the hundreds or thousands of newsgroups which are hosted by
her local server, and accesses all unread articles. These articles are displayed to her. She can then decide to respond to some of
them.
 When the reader writes an article, either in response to an existing one or as a start of a brand-new thread of discussion, her
software posts this article to the Usenet server. The article contains a list of newsgroups into which it is to be posted. Once it is
accepted by the server, it becomes available for other users to read and respond to. The article is automatically expired or deleted
by the server from its internal archives based on expiry policies set in its software; the author of the article usually can do little or
nothing to control the expiry of her articles.
 A Usenet server rarely works on its own. It forms a part of a collection of servers, which automatically exchange articles with each
other. The flow of articles from one server to another is called a newsfeed. In a simplistic case, one can imagine a worldwide network
of servers, all configured to replicate articles with each other, busily passing along copies across the network as soon as one of them
receives a new article posted by a human reader. This replication is done by powerful and fault-tolerant processes, and gives the
Usenet network its power. Your local Usenet server literally has a copy of all current articles in all relevant newsgroups.
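A Usenet article uses the same header/body format as an e-mail message, with Usenet-specific headers added; in particular, the Newsgroups header lists the groups the article is posted into. A small sketch using Python's standard e-mail parser (the article text below is invented for illustration):

```python
from email.parser import Parser

# A minimal Usenet-style article: headers, a blank line, then the body.
article_text = """\
From: reader@example.org
Newsgroups: comp.os.linux.announce,comp.os.linux.misc
Subject: Test posting

Body of the article.
"""

article = Parser().parsestr(article_text)
groups = article["Newsgroups"].split(",")
print(groups)  # -> ['comp.os.linux.announce', 'comp.os.linux.misc']
```

This is the list the server consults when it accepts a posting and files the article into each named newsgroup.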
1.3. About sizes, volumes, and so on
Any would-be Usenet server administrator or creator must read the "Periodic Posting about the basic steps involved in configuring a machine to store Usenet news," also known as the Site Setup FAQ, available from ftp://rtfm.mit.edu/pub/usenet/news.answers/usenet/site-setup or ftp://ftp.uu.net/usenet/news.answers/news/site-setup.Z. It was last updated in 1997, but trends haven't changed much since then, though absolute volume figures have.

If you want your Usenet server to be a repository for all articles in all newsgroups, you will probably not be reading this HOWTO, or even if you do, you will rapidly realise that anyone who needs to read this HOWTO may not be ready to set up such a server. This is because the volumes of articles on the Usenet have reached a point where very specialised networks, very high-end servers, and large disk arrays are required for handling such volumes. Those setups are called "carrier-class" Usenet servers, and will be discussed a bit later on in this HOWTO. Administering such an array of hardware may not be the job of the new Usenet administrator, for whom this HOWTO (and most Linux HOWTOs) are written.

Nevertheless, it may be interesting to understand what volumes we are talking about. Usenet news article volumes have been doubling every fourteen months or so, going by comments from carrier-class Usenet administrators. At the beginning of 1997, this volume was 1.2 GBytes of articles a day. Thus, the volume should have undergone roughly five doublings, or grown 32 times, by mid-2002, at the time of this writing. This gives a volume of 38.4 GBytes per day. Assume that this transfer happens using uncompressed NNTP (the norm), and add 50% extra for the overheads of NNTP, TCP, and IP. This gives a raw data transfer volume of 57.6 GBytes/day, or about 460 Gbits/day. To transfer such volumes of data in 24 hours (86,400 seconds), you'll need raw bandwidth of about 5.3 Mbits per second just to receive all these articles. You'll need more bandwidth to send out feeds to neighbouring Usenet servers, and then more again to allow your readers to access your servers and read and post articles in retail quantities. Clearly, these volume figures are outside the network bandwidths of most corporate organisations or educational institutions, and therefore only those who are in the business of offering Usenet news can afford them.

At the other end of the scale, it is perfectly feasible for a small office to subscribe to a well-trimmed subset of Usenet newsgroups and exclude most of the high-volume ones. Starcom Software, where the authors of this HOWTO work, has worked with a fairly large subset of 600 newsgroups, which is still a tiny fraction of the 15,000+ newsgroups that the carrier-class services offer. Your office or college may not even need 600 groups; our company also excluded specific high-volume but low-usefulness newsgroups like the talk, comp.binaries, and alt hierarchies. With the pruned subset, the total volume of articles per day may amount to barely a hundred MBytes or so, and can be easily handled by most small offices and educational institutions. In such situations, a single Intel Linux server can deliver excellent performance as a Usenet server.

Then there is the internal Usenet service. By internal we mean a private set of Usenet newsgroups, not a private computer network. Every company or university which runs a Usenet news service creates its own hierarchy of internal newsgroups, whose articles never leave the campus or office, and which therefore do not consume Internet bandwidth. These newsgroups are often the most hotly accessed, and will carry more internally generated traffic within your organisation than all the "public" newsgroups you may subscribe to. After all, how often does someone have something to say which is relevant to the world at large, unless he's discussing a globally relevant topic like "Unix rules!"? If such internal newsgroups are the focus of your Usenet servers, then you may find that fairly modest hardware and Internet bandwidth will suffice, depending on the size of your organisation.

The new Usenet server administrator has to undertake a sizing exercise to ensure that he does not bite off more than he, or his network resources, can chew. We hope we have provided sufficient information for him to get started with the right questions.
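The back-of-envelope bandwidth estimate above can be checked directly:

```python
# Reproducing the HOWTO's estimate: 1.2 GBytes/day in early 1997,
# doubling every ~14 months => about five doublings by mid-2002.
daily_gbytes = 1.2 * 2 ** 5                  # 38.4 GBytes of articles per day
with_overhead = daily_gbytes * 1.5           # +50% NNTP/TCP/IP overhead -> 57.6
gbits_per_day = with_overhead * 8            # ~460 Gbits/day
mbits_per_second = gbits_per_day * 1000 / 86_400
print(round(mbits_per_second, 1))            # -> 5.3
```

The same arithmetic applied to a pruned feed of roughly 0.1 GBytes/day gives well under 0.02 Mbits/s, which is why a small office subset is so much more manageable.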
DARPA
 The Defense Advanced Research Projects Agency is a research and development agency of the United
States Department of Defense responsible for the development of emerging technologies for use by the
military.
 ARPA research played a central role in launching the Information Revolution. The agency developed
and furthered much of the conceptual basis for the ARPANET—prototypical communications network
launched nearly half a century ago—and invented the digital protocols that gave birth to the Internet.
DARPA also provided many of the essential advances that made possible today’s computers and
communications systems, including seminal technological achievements that support the speech
recognition, touch-screen displays, accelerometers, and wireless capabilities at the core of today’s
smartphones and tablets. DARPA has also long been a leader in the development of artificial
intelligence, machine intelligence and semi-autonomous systems. DARPA’s efforts in this domain have
focused primarily on military operations, including command and control, but the commercial sector has
adopted and expanded upon many of the agency’s results to develop wide-spread applications in
fields as diverse as manufacturing, entertainment and education.
 In 1973, the U.S. Defense Advanced Research Projects Agency (DARPA) initiated a research program to
investigate techniques and technologies for interlinking packet networks of various kinds. The objective
was to develop communication protocols which would allow networked computers to communicate
transparently across multiple, linked packet networks. This was called the Internetting project and the
system of networks which emerged from the research was known as the “Internet.” The system of
protocols which was developed over the course of this research effort became known as the TCP/IP
Protocol Suite, after the two initial protocols developed: Transmission Control Protocol (TCP) and Internet
Protocol (IP).
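The reliable, transparent byte-stream service that TCP provides on top of IP can be seen with a few lines of Python's standard socket API; this is a minimal sketch, and the loopback address and message are purely illustrative:

```python
import socket
import threading

# A tiny TCP echo server and client on the loopback interface, illustrating
# the reliable byte-stream service TCP layers on top of IP.

def run_echo_server(server_sock):
    conn, _addr = server_sock.accept()      # wait for one client connection
    with conn:
        data = conn.recv(1024)              # read up to 1024 bytes
        conn.sendall(data)                  # echo them back unchanged

# Bind to an ephemeral port (port 0 lets the OS pick a free one).
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=run_echo_server, args=(server,))
t.start()

# Client side: connect, send, and receive over the same TCP connection.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello, internet")
    reply = client.recv(1024)

t.join()
server.close()
print(reply)  # b'hello, internet'
```

The same API works across real networks; only the address changes, which is exactly the transparency the Internetting project was after.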
World Wide Web

 The WorldWideWeb (W3) is a wide-area hypermedia information retrieval initiative aiming to give universal access to a large universe of documents.
 World Wide Web (WWW), byname the Web, is the leading information
retrieval service of the Internet (the worldwide computer network). The Web
gives users access to a vast array of documents that are connected to each
other by means of hypertext or hypermedia links—i.e., hyperlinks, electronic
connections that link related pieces of information in order to allow a user easy
access to them. Hypertext allows the user to select a word or phrase from text
and thereby access other documents that contain additional information
pertaining to that word or phrase. Hypermedia documents feature links to
images, sounds, animations, and movies. The Web operates within the Internet’s
basic client-server format; servers are computer programs that store and
transmit documents to other computers on the network when asked to, while
clients are programs that request documents from a server as the user asks for
them. Browser software allows users to view the retrieved documents.
 A hypertext document with its corresponding text and hyperlinks is written
in HyperText Markup Language (HTML) and is assigned an online address called
a Uniform Resource Locator (URL).
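The structure of a URL can be taken apart with Python's standard urllib.parse module; the address below is made up for illustration:

```python
from urllib.parse import urlparse

# Decompose an illustrative URL into the parts named in the text:
# the scheme (protocol), the host serving the document, and the path to it.
url = "https://www.example.org/docs/page.html"
parts = urlparse(url)

print(parts.scheme)  # https
print(parts.netloc)  # www.example.org
print(parts.path)    # /docs/page.html
```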
 The development of the World Wide Web was begun in 1989 by Tim Berners-Lee and his colleagues
at CERN, an international scientific organization based in Geneva, Switzerland. They created
a protocol, HyperText Transfer Protocol (HTTP), which standardized communication between servers and
clients. Their text-based Web browser was made available for general release in January 1992.
 The World Wide Web gained rapid acceptance with the creation of a Web browser called Mosaic,
which was developed in the United States by Marc Andreessen and others at the National Center for
Supercomputing Applications at the University of Illinois and was released in September 1993. Mosaic
allowed people using the Web to use the same sort of “point-and-click” graphical manipulations that
had been available in personal computers for some years. In April 1994 Andreessen
cofounded Netscape Communications Corporation, whose Netscape Navigator became the dominant
Web browser soon after its release in December 1994. BookLink Technologies’ InternetWorks, the first
browser with tabs, in which a user could visit another Web site without opening an entirely new window,
debuted that same year. By the mid-1990s the World Wide Web had millions of active users.
 The software giant Microsoft Corporation became interested in supporting Internet applications on
personal computers and developed its own Web browser (based initially on Mosaic), Internet
Explorer (IE), in 1995 as an add-on to the Windows 95 operating system. IE was integrated into the
Windows operating system in 1996 (that is, it came “bundled” ready-to-use within the operating system of
personal computers), which had the effect of reducing competition from other Internet browser
manufacturers, such as Netscape. IE soon became the most popular Web browser.
 Apple’s Safari was released in 2003 as the default browser on Macintosh personal computers and later
on iPhones (2007) and iPads (2010). Safari 2.0 (2005) was the first browser with a privacy mode, Private
Browsing, in which the application would not save Web sites in its history, downloaded files in its cache, or
personal information entered on Web pages.
 The first serious challenger to IE’s dominance was Mozilla’s Firefox, released in 2004 and designed to
address issues with speed and security that had plagued IE. In 2008 Google launched Chrome, the first
browser with isolated tabs, which meant that when one tab crashed, other tabs and the whole browser
would still function. By 2013 Chrome had become the dominant browser, surpassing IE and Firefox in
popularity. Microsoft discontinued IE and replaced it with Edge in 2015.
 In the early 21st century, smartphones became more computer-like, and more-advanced services, such
as Internet access, became possible. Web usage on smartphones steadily increased, and in 2016 it
accounted for more than half of Web browsing.
Security

 Device/Host Level security:


-OS Security
-Firmware/Hardware security
-Application Security
-Data Security

 Network Level security


-Application Security
-Protocol Security
-All layers of the OSI/TCP-IP model
-Data Security
Network Security
Network security is the process of taking preventative measures to protect the underlying
networking infrastructure from unauthorized access, misuse, malfunction, modification, destruction
or improper disclosure.

The primary goals of network security are Confidentiality, Integrity, and Availability. These three
pillars of network security are often represented as the CIA triangle.
 Confidentiality − The function of confidentiality is to protect precious business data from
unauthorized persons. Confidentiality part of network security makes sure that the data is
available only to the intended and authorized persons.
 Integrity − This goal means maintaining and assuring the accuracy and consistency of data.
The function of integrity is to make sure that the data is reliable and is not changed by
unauthorized persons.
 Availability − The function of availability in Network Security is to make sure that the data,
network resources/services are continuously available to the legitimate users, whenever they
require it.
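The integrity goal is often enforced with cryptographic hashes: store a digest of the data, and later verify that recomputing it gives the same value. A minimal sketch using Python's standard hashlib module (the message contents are made up for illustration):

```python
import hashlib

def digest(data: bytes) -> str:
    """Return the SHA-256 hex digest of data."""
    return hashlib.sha256(data).hexdigest()

original = b"transfer 100 to account 42"
stored_digest = digest(original)           # computed when the data was saved

# Later: integrity holds only if the recomputed digest matches.
unmodified_ok = digest(original) == stored_digest          # True

tampered = b"transfer 900 to account 42"   # an attacker altered the amount
tamper_detected = digest(tampered) != stored_digest        # True
print(unmodified_ok, tamper_detected)  # True True
```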
Network Security Attacks
A network attack is an attempt to gain unauthorized access to an organization’s network, with the objective of
stealing data or performing other malicious activity. There are two main types of network attacks:

Passive: Attackers gain access to a network and can monitor or steal sensitive information, but without
making any change to the data, leaving it intact.
Active: Attackers not only gain unauthorized access but also modify data, either deleting,
encrypting or otherwise harming it.
Some Network Attacks

1. Malware/Ransomware
2. Botnets
3. Computer Viruses and Worms
4. Phishing Attacks
5. DDoS (Distributed Denial of Service)
6. SQL Injection
7. Social Engineering Attacks
8. Man-in-the-Middle Attacks
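Of these, SQL injection is easy to demonstrate concretely. A minimal sketch using Python's built-in sqlite3 module, with a made-up users table, shows why parameterized queries are the standard defence:

```python
import sqlite3

# In-memory database with one user table (table and values are made up).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, secret TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

malicious = "nobody' OR '1'='1"  # classic injection payload

# Unsafe: string concatenation lets the payload rewrite the query,
# so it returns every row despite the bogus username.
unsafe = db.execute(
    "SELECT * FROM users WHERE name = '" + malicious + "'"
).fetchall()

# Safe: a parameterized query treats the payload as a literal string,
# so it matches no user at all.
safe = db.execute(
    "SELECT * FROM users WHERE name = ?", (malicious,)
).fetchall()

print(len(unsafe), len(safe))  # 1 0
```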
Goals of Security

• Prevention
• Detection
• Recovery
Network Security Measures
1. Use strong passwords and multi-factor authentication
2. Access Control Policies
3. Put up a firewall
4. Use security software- anti-spyware, anti-malware and anti-virus
5. Update programs and systems regularly
6. Monitor for intrusion or IDS/IPS
7. Email security
8. Endpoint security
9. Network segmentation
10. Security information and event management (SIEM)
11. Virtual private network (VPN)
12. Web security
13. Wireless security
14. Raise awareness
15. Log management
1. Use strong passwords and multi-factor authentication:
Strong passwords are vital to good online security. Make your password difficult to guess by:
 using a combination of capital and lower-case letters, numbers and symbols
 making it between eight and 12 characters long
 avoiding the use of personal data
 changing it regularly
 never using it for multiple accounts
 using two-factor authentication
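The rules above can be turned into a simple checker. A minimal sketch in Python (the 8 to 12 character bound follows the list above; real policies vary and should also use multi-factor authentication):

```python
import re

def is_strong(password: str) -> bool:
    """Check a password against the listed rules: length bound plus
    at least one lower-case letter, capital, number, and symbol."""
    if not (8 <= len(password) <= 12):
        return False
    required = [
        r"[a-z]",         # lower-case letter
        r"[A-Z]",         # capital letter
        r"[0-9]",         # number
        r"[^a-zA-Z0-9]",  # symbol
    ]
    return all(re.search(pattern, password) for pattern in required)

print(is_strong("Tr0ub4d&r"))  # True: mixes all four character classes
print(is_strong("password"))   # False: no capitals, numbers or symbols
```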

2. Access Control Policies


Make sure that individuals can only access data and services for which they are authorised.
For example, you can:
 control physical access to premises and computer networks
 prevent access by unauthorised users
 limit access to data or services through application controls
 restrict what can be copied from the system and saved to storage devices
 limit sending and receiving of certain types of email attachments
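At the application level, such controls often reduce to a role-to-permission lookup. A minimal role-based sketch (the roles and actions here are illustrative):

```python
# Each role maps to the set of actions its members may perform.
PERMISSIONS = {
    "admin":  {"read", "write", "delete"},
    "editor": {"read", "write"},
    "viewer": {"read"},
}

def is_authorised(role: str, action: str) -> bool:
    """Return True only if the role's policy grants the action;
    unknown roles are granted nothing (fail closed)."""
    return action in PERMISSIONS.get(role, set())

print(is_authorised("viewer", "read"))    # True
print(is_authorised("viewer", "delete"))  # False
print(is_authorised("guest", "read"))     # False: unknown role, fail closed
```

Failing closed for unknown roles mirrors the principle that access is denied unless explicitly granted.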
3. Put up a firewall
-Firewalls are effectively gatekeepers between your computer and the
internet, and one of the major barriers to prevent the spread of cyber
threats such as viruses and malware. Make sure that you set up your firewall
devices properly, and check them regularly to ensure they have the latest
software/firmware updates installed.
-Firewalls function much like gates that can be used to secure the borders
between your network and the internet. Firewalls are used to manage
network traffic, allowing authorized traffic through while blocking access to
non-authorized traffic.
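The allow/block behaviour described above can be sketched as a toy stateless packet filter; real firewalls also match on addresses, protocols and connection state, and the ports and rules here are illustrative:

```python
# Ordered rule list: the first matching rule wins, and anything
# unmatched falls through to a default-deny.
RULES = [
    ("allow", 443),   # permit HTTPS
    ("allow", 22),    # permit SSH
    ("deny", 23),     # explicitly block Telnet
]

def filter_packet(dst_port: int) -> str:
    """Decide a packet's fate from its destination port alone."""
    for action, port in RULES:
        if port == dst_port:
            return action
    return "deny"     # default-deny: drop anything not explicitly allowed

print(filter_packet(443))   # allow
print(filter_packet(23))    # deny
print(filter_packet(8080))  # deny: no rule matches, default applies
```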

4. Use security software- anti-spyware, anti-malware and anti-virus


-You should use security software, such as anti-spyware, anti-malware and anti-virus
programs, to help detect and remove malicious code if it slips into your network.
-Malware, in the form of viruses, trojans, worms, keyloggers, spyware, etc., is designed to
spread through computer systems and infect networks. Anti-malware tools are a kind of
network security software designed to identify dangerous programs and prevent them from
spreading. Anti-malware and antivirus software may also be able to help resolve malware
infections, minimizing the damage to the network.
5. Update programs and systems regularly:
Updates contain vital security upgrades that help protect against known bugs and vulnerabilities.
Make sure that you keep your software and devices up-to-date to avoid falling prey to criminals.

6. Monitor for intrusion or IDS/IPS:


-You can use intrusion detectors to monitor systems for unusual network activity. If a detection
system suspects a potential security breach, it can generate an alarm, such as an email alert, based
upon the type of activity it has identified.
-It can be difficult to identify anomalies in your network without a baseline understanding of
how that network should be operating. Network anomaly detection engines (ADE) allow
you to analyze your network, so that when breaches occur, you’ll be alerted to them
quickly enough to be able to respond.
-Intrusion prevention systems constantly scan and analyze network traffic/packets, so that
different types of attacks can be identified and responded to quickly. These systems often keep a
database of known attack methods, so as to be able to recognize threats immediately.
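The baseline idea behind anomaly detection engines can be sketched with a simple 3-sigma threshold over observed traffic; the requests-per-minute figures below are made up:

```python
from statistics import mean, stdev

# Baseline of "normal" requests per minute observed on the network.
baseline = [100, 104, 98, 102, 97, 101, 103, 99]

# Simple 3-sigma rule: flag anything well above normal variation.
threshold = mean(baseline) + 3 * stdev(baseline)

def is_anomalous(requests_per_minute: float) -> bool:
    """Alert when traffic exceeds the learned baseline threshold."""
    return requests_per_minute > threshold

print(is_anomalous(105))   # False: within normal variation
print(is_anomalous(5000))  # True: possible flood/DDoS, raise an alert
```

Production systems combine such statistical baselines with the signature databases of known attacks mentioned above.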
7. Email security:
Email security is focused on shoring up human-related security weaknesses. Via phishing strategies
(which are often very complex and convincing), attackers persuade email recipients to share sensitive
information via desktop or mobile device, or inadvertently download malware into the targeted network.
Email security helps identify dangerous emails and can also be used to block attacks and prevent the
sharing of vital data.

8. Endpoint security:

-Endpoint security refers to securing endpoints, or end-user devices like desktops, laptops, and mobile
devices. Endpoints serve as points of access to an enterprise network and create points of entry that can
be exploited by malicious actors.
-Endpoint security software protects these points of entry from risky activity and/or malicious attack.
When companies can ensure endpoint compliance with data security standards, they can maintain
greater control over the growing number and type of access points to the network.
9. Network segmentation
There are many kinds of network traffic, each associated with different security risks. Network
segmentation allows you to grant the right access to the right traffic, while restricting traffic
from suspicious sources.

10. Security information and event management (SIEM)

Sometimes simply pulling together the right information from so many different tools and
resources can be prohibitively difficult — particularly when time is an issue. SIEM tools and
software give responders the data they need to act quickly.

11. Virtual private network (VPN)

VPN tools are used to authenticate communication between secure networks and an
endpoint device. Remote-access VPNs generally use IPsec or Secure Sockets Layer (SSL)
for authentication, creating an encrypted line to block other parties from eavesdropping.
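The authenticated, encrypted channel such tools rely on can be seen in the client-side defaults of Python's standard ssl module; this sketch only builds the TLS context and makes no network connection:

```python
import ssl

# Build a client-side TLS context with secure defaults: server certificates
# must verify against trusted CAs, and hostnames must match the certificate.
ctx = ssl.create_default_context()

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True: unverified peers rejected
print(ctx.check_hostname)                    # True: certificate must match host

# To use it, wrap a TCP socket before any data is exchanged, e.g.:
#   with socket.create_connection((host, 443)) as raw:
#       with ctx.wrap_socket(raw, server_hostname=host) as tls:
#           ...  # all traffic on `tls` is now encrypted
```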
12. Web security:
Including tools, hardware, policies and more, web security is a blanket term to describe the
network security measures businesses take to ensure safe web use when connected to an
internal network. This helps prevent web-based threats from using browsers as access points
to get into the network.

13. Wireless security:


Generally speaking, wireless networks are less secure than traditional networks. Thus, strict
wireless security measures are necessary to ensure that threat actors aren’t gaining access.

14. Raise awareness


Your employees have a responsibility to help keep your business secure. Make sure that they
understand their role and any relevant policies and procedures, and provide them with
regular cyber security awareness and training.

15. Log management:


Log management is a security control which addresses all system and network logs.
Here’s a high-level overview of how logs work: each event in a network generates data, and
that information then makes its way into the logs, the records produced by operating
systems, applications and other devices. Logs are crucial to security visibility. If organizations fail
to collect, store, and analyze those records, they could open themselves to digital attacks.
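A small example of the kind of analysis a log-management pipeline runs: counting failed logins per source IP from syslog-style lines (the log lines below are made up):

```python
import re
from collections import Counter

# Illustrative auth-log lines, modeled on syslog-style sshd entries.
log_lines = [
    "Jan 10 10:01:02 host sshd: Failed password for root from 10.0.0.5",
    "Jan 10 10:01:04 host sshd: Failed password for root from 10.0.0.5",
    "Jan 10 10:01:07 host sshd: Failed password for admin from 10.0.0.5",
    "Jan 10 10:02:00 host sshd: Accepted password for alice from 10.0.0.9",
]

# Count failed logins per source IP to surface brute-force attempts.
failures = Counter()
for line in log_lines:
    match = re.search(r"Failed password for \S+ from (\S+)", line)
    if match:
        failures[match.group(1)] += 1

print(failures.most_common(1))  # [('10.0.0.5', 3)]
```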
Social media: Facebook, Signal, Twitter, ResearchGate, Google Scholar,
WhatsApp, LinkedIn
E-banking: Paytm, Google Pay, net banking
Open access research platforms: ResearchGate, Google Scholar, IEEE,
ACM, Springer
Email service providers: Gmail, Rediff, Hotmail
E-commerce: Amazon, Myntra, Flipkart
Some tools

 Wireshark
 Metasploit
 CVE
 Nmap
 Acunetix web vulnerability scanner
 Google dorking
Assignment Presentation (4 March)

 Group 1: Facebook, Twitter, LinkedIn, YouTube


 Group 2: Paytm, GPay, PhonePe
 Group 3: ResearchGate, Google Scholar, IEEE
 Group 4: Gmail, Rediff, Yahoo Mail
 Group 5: Amazon, Myntra, Flipkart
Thank you for your Time & Support
