
UNIT 1

Introduction and Web Development Strategies

History of the World Wide Web


The World Wide Web ("WWW", "W3" or simply "the Web") is a
global information medium that users can access via computers connected to
the Internet. The term is often mistakenly used as a synonym for the Internet, but the
Web is a service that operates over the Internet, just as email and Usenet do.
The history of the Internet and the history of hypertext date back significantly further
than that of the World Wide Web.
Tim Berners-Lee invented the World Wide Web while working at CERN in 1989.
He proposed a "universal linked information system" using several concepts and
technologies, the most fundamental of which was the connections that existed
between information.
He developed the first web server, the first web browser, and a document
formatting protocol, called Hypertext Markup Language (HTML). After
publishing the markup language in 1991, and releasing the browser source code for
public use in 1993, many other web browsers were soon developed. Marc
Andreessen's Mosaic (and its successor, Netscape Navigator) was particularly
easy to use and install, and is often credited with sparking the Internet boom
of the 1990s. It was a
graphical browser which ran on several popular office and home computers, bringing
multimedia content to non-technical users by including images and text on the same
page.
Websites for use by the general public began to emerge in 1993–94. This spurred
competition in server and browser software, highlighted in the browser wars,
which were initially dominated by Netscape Navigator and Internet Explorer. Following the
complete removal of commercial restrictions on Internet use by 1995,
commercialization of the Web amidst macroeconomic factors led to the dot-com
boom and bust in the late 1990s and early 2000s.
The features of HTML evolved over time, leading to HTML 2.0 in 1995,
HTML 3.2 and HTML 4 in 1997, and HTML5 in 2014. The language was extended
with advanced formatting in Cascading Style Sheets (CSS) and
with programming capability by JavaScript. AJAX programming delivered
dynamic content to users, which sparked a new era in Web design, dubbed Web 2.0.
The use of social media, becoming commonplace in the 2010s, allowed users to
compose multimedia content without programming skills, making the Web ubiquitous
in everyday life.
1989–1991: Origins
CERN
The NeXT Computer used by Tim Berners-Lee at CERN became the first web
server. The Web was born in the corridor on the ground floor of building
No. 1 at CERN.

While working at CERN, Tim Berners-Lee became
frustrated with the inefficiencies and difficulties posed by finding information stored
on different computers.[11] On 12 March 1989, he submitted a memorandum, titled
"Information Management: A Proposal", to the management at CERN. The proposal
used the term "web" and was based on "a large hypertext database with typed links".
It described a system called "Mesh" that referenced ENQUIRE, the database and
software project he had built in 1980, with a more elaborate information management
system based on links embedded as text: "Imagine, then, the references in this
document all being associated with the network address of the thing to which they
referred, so that while reading this document, you could skip to them with a click of
the mouse." Such a system, he explained, could be referred to using one of the
existing meanings of the word hypertext, a term that he says was coined in the 1950s.
Berners-Lee notes the possibility of multimedia documents that include graphics,
speech and video, which he terms hypermedia.
Although the proposal attracted little interest, Berners-Lee was encouraged by his
manager, Mike Sendall, to begin implementing his system on a newly
acquired NeXT workstation. He considered several names, including Information
Mesh, The Information Mine or Mine of Information, but settled on World Wide Web.
Berners-Lee found an enthusiastic supporter in his colleague and fellow hypertext
enthusiast Robert Cailliau who began to promote the proposed system throughout
CERN. Berners-Lee and Cailliau pitched Berners-Lee's ideas to the European
Conference on Hypertext Technology in September 1990, but found no vendors who
could appreciate his vision.
Berners-Lee's breakthrough was to marry hypertext to the Internet. In his
book Weaving The Web, he explains that he had repeatedly suggested to members of
both technical communities that a marriage between the two technologies was
possible. But, when no one took up his invitation, he finally assumed the project
himself. In the process, he developed three essential technologies:

a system of globally unique identifiers for resources on the Web and elsewhere,
the universal document identifier (UDI), later known as uniform resource
locator (URL);
the publishing language Hypertext Markup Language (HTML);
the Hypertext Transfer Protocol (HTTP).
With help from Cailliau he published a more formal proposal on 12 November 1990
to build a "hypertext project" called World Wide Web (abbreviated "W3") as a "web"
of "hypertext documents" to be viewed by "browsers" using a client–server
architecture.
The proposal was modelled after the Standard Generalized Markup
Language (SGML) reader Dynatext by Electronic Book Technology, a spin-off from
the Institute for Research in Information and Scholarship at Brown University. The
Dynatext system, licensed by CERN, was considered too expensive and had an
inappropriate licensing policy for use in the general high energy physics community,
namely a fee for each document and each document alteration.
At this point HTML and HTTP had already been in development for about two
months and the first web server was about a month from completing its first
successful test. Berners-Lee's proposal estimated that a read-only Web would be
developed within three months and that it would take six months to achieve "the
creation of new links and new material by readers, [so that] authorship becomes
universal" as well as "the automatic notification of a reader when new material of
interest to him/her has become available".
By December 1990, Berners-Lee and his team had built all the tools necessary
for a working Web: the HyperText Transfer Protocol (HTTP), the HyperText Markup
Language (HTML), the first web browser (named WorldWideWeb, which was also
a web editor), and the first web server (later known as CERN httpd). The first
website (http://info.cern.ch), containing the first web pages, which described
the project itself, was published on 20 December 1990.

1991–1994: The Web goes public, early growth


Initial launch
In January 1991, the first web servers outside CERN were switched on. On 6 August
1991, Berners-Lee published a short summary of the World Wide Web project on
the newsgroup alt.hypertext, inviting collaborators.
Paul Kunz from the Stanford Linear Accelerator Center (SLAC) visited CERN in
September 1991, and was captivated by the Web. He brought the NeXT software back
to SLAC, where librarian Louise Addis adapted it for the VM/CMS operating system
on the IBM mainframe as a way to host the SPIRES-HEP database and display
SLAC's catalog of online documents. This was the first web server outside of Europe
and the first in North America.
The World Wide Web had several differences from other hypertext systems available
at the time. The Web required only unidirectional links rather than bidirectional ones,
making it possible for someone to link to another resource without action by the
owner of that resource. It also significantly reduced the difficulty of implementing
web servers and browsers (in comparison to earlier systems), but in turn, presented
the chronic problem of link rot.

1994–2004: Open standards, going global


The rate of web site deployment increased sharply around the world, and fostered
development of international standards for protocols and content formatting. Berners-
Lee continued to stay involved in guiding web standards, such as the markup
languages to compose web pages, and he advocated his vision of a Semantic
Web (sometimes known as Web 3.0) based around machine-readability and
interoperability standards.
World Wide Web Conference
In May 1994, the first International WWW Conference, organized by Robert Cailliau,
was held at CERN; the conference has been held every year since.

World Wide Web Consortium


The World Wide Web Consortium (W3C) was founded by Tim Berners-Lee after he
left the European Organization for Nuclear Research (CERN) in September/October
1994 in order to create open standards for the Web. It was founded at
the Massachusetts Institute of Technology Laboratory for Computer Science
(MIT/LCS) with support from the Defense Advanced Research Projects
Agency (DARPA), which had pioneered the Internet. A year later, a second site was
founded at INRIA (a French national computer research lab) with support from
the European Commission; and in 1996, a third continental site was created in Japan
at Keio University.
The W3C published the standard for HTML 4 in 1997, which included Cascading
Style Sheets (CSS), giving designers more control over the appearance of web pages
without the need for additional HTML tags.
Web Development Strategies
Seven Effective Web Development Strategies

Seven web development strategies stand out as especially important:

Responsive Web Design: Make your site look good on everything, from
computers to phones.

Performance Optimization: Make your site fast so people don't get bored
waiting.

User-Centered Design: Design your site for people, not robots.

Security Measures: Keep your site safe from sneaky online troublemakers.

Cross-Browser Compatibility: Ensure your site works on all web browsers.

Mobile-First Development: Think about phones when building your site.

Content Management and SEO: Write good stuff and help people find it.

Search engine optimization (SEO) is the practice of orienting your website to rank
higher on a search engine results page (SERP) so that you receive more traffic.
The most common example of on-page SEO is optimizing a piece of content to a
specific keyword. For example, if you're publishing a blog post about making
your own ice cream, your keyword might be “homemade ice cream.” You'd
include that keyword in your post's title, slug, meta description, headers, and
body.
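The keyword check described above can be sketched as a toy Python function. This is illustrative only: `keyword_coverage` and the sample page are made-up names and data, and real SEO tooling weighs far more signals than keyword presence.

```python
def keyword_coverage(page, keyword):
    """Return which on-page fields contain the keyword (case-insensitive).
    Hyphens are treated as spaces so a slug can match a multi-word keyword."""
    kw = keyword.lower()
    return {field: kw in str(text).lower().replace("-", " ")
            for field, text in page.items()}

# Hypothetical page for the "homemade ice cream" example above.
page = {
    "title": "Homemade Ice Cream: A Beginner's Guide",
    "slug": "homemade-ice-cream",
    "meta_description": "Learn how to make homemade ice cream in five steps.",
    "headers": "Why homemade ice cream beats store-bought",
    "body": "Making homemade ice cream is easier than you might think...",
}

coverage = keyword_coverage(page, "homemade ice cream")
print(coverage)   # every field should report True for this sample page
```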

Our web strategy and planning service includes:


(i) Site architecture is the map of your website. No matter what you call it—content
planning, information architecture, site map—the site architecture is literally the
backbone of a good website. At this stage you are not thinking about design or copy.
Instead, you are carefully considering your goals for the website and what type of
content you'll need to meet those goals. There is software out there to help you create
a site map, but little technology is needed for this stage. In fact, a lot of people like to
use index cards or large flow charts for brainstorming.
(ii) Wireframes are a sketch of your web page. A wireframe allows you to take the
carefully considered site architecture and put it into the basic framework of a website.
It helps you start to consider the flow for your user and assign appropriate prominence
to content. A well-designed wireframe will become the guide for the design and
content phases to follow. Taking the time to create wireframes will save you time in
the development process.
(iii) Style tiles help clients get a feel for design elements before full compositions are
created. This step helps streamline the website design process. Style tiles can be used
when a wireframe might not be detailed enough, but a full design mock-up would be
too much. The purpose of style tiles is not to represent the final version of a website,
but to set its overall look and feel.
(iv) Audience personas are your initial definition of each target market and, as such,
will guide your decisions about features, functionality and message points. Using past
market research, client demographics and the knowledge of your sales and marketing
staff, we’ll help you create personalities that will keep the project focused and geared
towards your audience.
Services
discovery & consultation
site architecture
wireframing
style tiles
audience personas

Protocols governing the web


A protocol is a set of rules. Protocols allow two computers to communicate over
media such as wireless or hardwired technologies.

Protocol Stack

When computers communicate with each other, there needs to be a common set of
rules and instructions that each computer follows. Protocols are, in effect,
the language of computers.
Top 9 HTTP Methods

The HTTP method defines the action the client wants to take on the requested
resource at the given URI. HTTP request methods are usually referred to as verbs,
although they can also be nouns. Each HTTP method implements different semantics,
but some properties are shared among them: for example, a method can be safe,
idempotent, or cacheable.

GET: Retrieve information from the server. Should not modify the data on the
server. It can be cached and bookmarked, and may remain in the browser history.

HEAD: Similar to GET, except it transfers the status line and headers only.
Should not modify the data on the server. It cannot be bookmarked and does not
remain in the browser history.

POST: Send data to the server, including images, JSON strings, file uploads,
etc. It cannot be cached or bookmarked and is not stored in the browser history.

PUT: Replace or update an existing resource on the server. May change server
state. It cannot be cached or bookmarked and is not stored in the browser
history.

PATCH: Partially modify the specified resource on the server. It is faster and
requires fewer resources than the PUT method. It cannot be cached or bookmarked
and is not stored in the browser history.

DELETE: Delete a resource from the server. May change server state. It cannot
be cached or bookmarked and is not stored in the browser history.

OPTIONS: Used by browsers for CORS operations. Describes the communication
options available for the requested resource. Does not change data on the
server. It cannot be cached or bookmarked and is not stored in the browser
history.

CONNECT: Establishes two-way communication with the server by creating an HTTP
tunnel through a proxy server.

TRACE: Designed for diagnostic purposes. When used, the web server sends back
to the client the exact request that was received.
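The shared properties mentioned above (safe, idempotent, cacheable) can be tabulated in code. A minimal Python sketch, following the conventional classifications of RFC 9110; the table name and helper functions are illustrative, and "cacheable" here follows the simplified view in the table above (POST is treated as non-cacheable):

```python
# Per-method properties: (safe, idempotent, cacheable).
HTTP_METHODS = {
    "GET":     (True,  True,  True),
    "HEAD":    (True,  True,  True),
    "POST":    (False, False, False),  # simplified: treated as non-cacheable
    "PUT":     (False, True,  False),
    "PATCH":   (False, False, False),
    "DELETE":  (False, True,  False),
    "OPTIONS": (True,  True,  False),
    "CONNECT": (False, False, False),
    "TRACE":   (True,  True,  False),
}

def is_safe(method):
    """Safe methods should not modify data on the server."""
    return HTTP_METHODS[method.upper()][0]

def is_idempotent(method):
    """Idempotent methods give the same result when repeated."""
    return HTTP_METHODS[method.upper()][1]

print(is_safe("GET"), is_idempotent("PUT"), is_safe("POST"))
```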

Some types of protocols:

Hypertext Transfer Protocol (HTTP):
This protocol is used to access, send, and receive Hypertext Markup Language
(HTML) files on the Internet.

Simple Mail Transfer Protocol (SMTP):
This protocol is used for transferring e-mail between computers.

File Transfer Protocol (FTP):
FTP is used to upload files to a server and download files from a server.

Transmission Control Protocol (TCP):
This protocol ensures the delivery of information packets across the network.

Internet Protocol (IP):
This protocol is responsible for logical addressing, called IP addressing,
used to route information between networks.

Below is a list of protocols used for the world wide web:


ARP: Address Resolution Protocol: ARP is a network protocol that translates
between a computer's IP address and its MAC address. It connects an
ever-changing Internet Protocol (IP) address to a fixed physical machine
address, also known as a media access control (MAC) address, in a local-area
network (LAN).

This mapping procedure is important because the lengths of the IP and MAC
addresses differ, and a translation is needed so that the systems can
recognize one another. The most widely used IP version today is IPv4, in which
an address is 32 bits long; MAC addresses are 48 bits long. ARP translates
between the 32-bit and 48-bit addresses.
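The 32-bit versus 48-bit mismatch is easy to see in code. A small Python illustration using the standard library; the addresses are made up, and real ARP resolution happens inside the OS kernel, not in application code:

```python
import ipaddress

ip = ipaddress.IPv4Address("192.168.1.10")   # example IPv4 address (made up)
mac = "3c:22:fb:aa:10:05"                    # example MAC address (made up)

ip_int = int(ip)                      # an IPv4 address fits in 32 bits
mac_int = int(mac.replace(":", ""), 16)  # a MAC address fits in 48 bits

assert ip_int < 2 ** 32
assert mac_int < 2 ** 48

print(f"IPv4 {ip} -> {ip_int} (32-bit)")
print(f"MAC  {mac} -> {mac_int} (48-bit)")
```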

There is a networking model known as the Open Systems Interconnection (OSI)
model. First developed in the late 1970s, the OSI model uses layers to give IT teams a
visualization of what is going on with a particular networking system. This can be
helpful in determining which layer affects which application, device, or software
installed on the network, and further, which IT or engineering professional is
responsible for managing that layer.

The MAC address operates at the data link layer, which establishes and
terminates a connection between two physically connected devices so that data
transfer can take place. The IP address operates at the network layer, the
layer responsible for forwarding packets of data through different routers.
ARP works between these two layers.

MAC address is a unique identifier of a network device at the data link layer. It is a
48-bit hardware number that works at the Media Access Control sublayer of the Data
Link Layer

DHCP: Dynamic Host Configuration Protocol: a protocol that provides quick,
automatic, and central management for the distribution of IP addresses within
a network.
FTP: File Transfer Protocol
HTTP: Hypertext Transfer Protocol
IMAP: Internet Message Access Protocol
ICMP: Internet Control Message Protocol
IRDP: ICMP Router Discovery Protocol
IP: Internet Protocol
IRC: Internet Relay Chat Protocol
POP3: Post Office Protocol version 3
SMTP: Simple Mail Transfer Protocol
SSL: Secure Sockets Layer: a security protocol used to establish a secure and
encrypted connection between a client and a server over the internet or any
other network.
SSH: Secure Shell: a cryptographic network protocol used to access network
devices and servers over the internet.
TCP: Transmission Control Protocol
TELNET: TCP/IP Terminal Emulation Protocol
UDP: User Datagram Protocol
What is the difference between SSH and SSL?
The key difference is that SSH is used to create a secure tunnel to another
computer, through which you can issue commands, transfer data, and so on. SSL,
on the other hand, is used for securely transferring data between two parties;
it does not let you issue commands the way SSH does.

Web Design Process


Web designers often think about the web design process with a focus on technical
matters such as wireframes, code, and content management. But great design isn’t
about how you integrate the social media buttons or even slick visuals. Great design is
actually about having a website creation process that aligns with an overarching
strategy.

Well-designed websites offer much more than just aesthetics. They attract visitors and
help people understand the product, company, and branding through a variety of
indicators, encompassing visuals, text, and interactions. That means every element of
your site needs to work towards a defined goal.

But how do you achieve that harmonious synthesis of elements? Through a holistic
web design process that takes both form and function into account.

For me, designing a website takes seven steps:

1. Goal identification: Where I work with the client to determine what goals the
new website needs to fulfill. I.e., what its purpose is.
2. Scope definition: Once we know the site's goals, we can define the scope of
the project. I.e., what web pages and features the site requires to fulfill the
goal, and the timeline for building those out.
3. Sitemap and wireframe creation: With the scope well-defined, we can start
digging into the sitemap, defining how the content and features we defined in
scope definition will interrelate.
4. Content creation: Now that we have a bigger picture of the site in mind, we
can start creating content for the individual pages, always keeping search
engine optimization (SEO) in mind to help keep pages focused on a single
topic. It's vital that you have real content to work with for our next stage:
5. Visual elements: With the site architecture and some content in place, we can
start working on the visual brand. Depending on the client, this may already be
well-defined, but you might also be defining the visual style from the ground
up. Tools like style tiles, moodboards, and element collages can help with this
process.
6. Testing: By now, you've got all your pages and defined how they display to
the site visitor, so it's time to make sure it all works. Combine manual
browsing of the site on a variety of devices with automated site crawlers to
identify everything from user experience issues to simple broken links.
7. Launch: Once everything's working beautifully, it's time to plan and execute
your site launch! This should include planning both launch timing and
communication strategies — i.e., when will you launch and how will you let
the world know? After that, it's time to break out the bubbly.
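The "automated site crawlers" in the testing step can be sketched in miniature: extract every link from a page's HTML so each can later be checked for breakage. This uses only the Python standard library; `LinkExtractor` is a hypothetical name, and a real crawler would also fetch each URL and check its status code.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href of every <a> tag encountered in the HTML."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# Made-up page fragment to crawl.
html = """
<nav><a href="/">Home</a><a href="/about">About</a></nav>
<p>See our <a href="/pricing">pricing</a> page.</p>
"""
extractor = LinkExtractor()
extractor.feed(html)
print(extractor.links)   # every href found on the page
```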

What is client server architecture?


Before we explain client-server architecture and you start reading words such
as servers, services, networks, data, and files, and begin feeling overwhelmed
with jargon, let us first understand this architecture in layperson's terms.

The notion of client-server architecture can be understood by the analogy of
ordering a pizza for delivery. You call the store to order a pizza; someone
picks up the call, takes your order, and then delivers it. Simple, right? This
analogy captures the fundamental principle of client-server architecture.

Simply put, two parties are involved:

A server provides the requested services.

Clients request those services.

Client-server architecture is a computing model in which the server hosts,
delivers, and manages most of the resources and services requested by the
client. It is also known as the networking computing model or client-server
network, as all requests and services are delivered over a network. In the
client-server model, systems are connected over a network and resources are
shared among the different computers.

Typically, client-server architecture is arranged in a way that clients are often situated
at workstations or on personal computers, while servers are located elsewhere on the
network, usually on more powerful machines. Such a model is especially beneficial
when the clients and server perform routine tasks. For example, in hospital data
processing, a client computer can be busy running an application program for entering
patient information, meanwhile, the server computer can be running another program
to fetch and manage the database in which the information is permanently stored.
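The request/response pattern described above can be sketched with Python's standard socket module, running entirely on localhost. This is a minimal sketch: `serve_once` and the message are illustrative, and a real server would handle many concurrent clients.

```python
import socket
import threading

def serve_once(server_sock):
    """Accept a single client, read its request, and answer it."""
    conn, _ = server_sock.accept()
    with conn:
        request = conn.recv(1024)
        conn.sendall(b"served: " + request)

# Server side: bind to a free port on localhost and wait for one client.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve_once, args=(server,), daemon=True).start()

# Client side: connect, send a request, receive the response.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"patient record #42")
reply = client.recv(1024)
client.close()
server.close()
print(reply.decode())
```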

Client-Server architecture example

Here are some of the client-server model architecture examples from our daily life.
Hope it helps you to understand the concept better.

Mail servers

Email servers are used for sending and receiving emails. Various software
packages handle email.

File servers

File servers act as a centralized location for files. One of the daily life examples to
understand this is the files that we store in Google Docs. The cloud services for
Microsoft Office and Google Docs can be accessed from your devices; the files that
you save from your computer can be accessed from your phone. So, the centrally
stored files can be accessed by multiple users.

Web servers

Web servers are high-performance computers that host different websites. The server
site data is requested by the client through high-speed internet.

Components of Client-Server architecture:

Essentially, three components are required to make client-server architecture
work: workstations, servers, and networking devices. Let us now discuss them
in detail:

Workstations

Workstations are also called client computers. Workstations work as
subordinates to servers and send them requests to access shared files and
databases. The server answers these requests and acts as a central repository
of files, programs, databases, and management policies. Workstations are
governed by server-defined policies.

Servers

Servers are defined as fast-processing devices that act as centralized
repositories of network files, programs, databases, and policies. Servers have
huge storage space and robust memory to deal with multiple requests arriving
simultaneously from
various workstations. Servers can perform many roles, such as mail server, database
server, file server, and domain controller, in client-server architecture at the same
time.

Networking devices

Now that we know about the roles that workstations and servers play, let us learn
about what connects them, networking devices. Networking devices are a medium
that connects workstations and servers in a client-server architecture. Many
networking devices are used to perform various operations across the network. For
example, a hub is used for connecting a server to various workstations. Repeaters are
used to effectively transfer data between two devices. Bridges are used to
isolate network segments.


Types of Client-Server Architecture

Client-server architecture is organized into tiers.

1-tier architecture

The architecture in this category incorporates all settings, including
configuration settings and business logic, within a single device. Although
the wide range of services provided by the 1-tier architecture establishes it
as a dependable resource, managing such an architecture proves challenging.
This difficulty primarily arises from the variability of the data, often
leading to duplicated effort. The 1-tier architecture comprises multiple
layers, such as the presentation layer, business layer, and data layer, which
are unified through a specialized software package. The data residing within
this layer is typically stored either in local systems or on a shared drive.


2-tier architecture

In this architecture, the client side stores the user interface and the
server houses the database, while either the client side or the server side
manages the database logic and the business logic.
The 2-tier architecture outperforms the 1-tier architecture because there is
no intermediary between the client and the server. Its primary purpose is to
eliminate client confusion; one popular instance is the online ticket
reservation system.

3-tier architecture

Unlike 2-tier architecture, which has no intermediary, in 3-tier client-server
architecture middleware lies between the client and the server. If the client
places a request to fetch specific information from the server, the request
will first be received by the middleware. It will then be dispatched to the
server for further action. The same pattern is followed when the server sends
a response to the client. The framework of 3-tier architecture is divided into
three main layers: the presentation layer, the application layer, and the
database tier.

All three layers are controlled at different ends. While the presentation layer is
controlled at the client’s device, the middleware and the server handle the application
layer and the database tier respectively. Due to the presence of a third layer that
provides data control, 3-tier architecture is more secure, keeps the database
structure hidden from the client, and provides data integrity.
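The three tiers can be sketched as isolated Python layers: the presentation layer only talks to the application layer (the middleware), which alone touches the data tier, so the client never queries the database directly. The function names and the in-memory "database" are hypothetical stand-ins for a real UI, middleware service, and DBMS.

```python
# Data tier: storage and retrieval only (an in-memory stand-in for a DB).
_DATABASE = {"alice": {"balance": 120}, "bob": {"balance": 45}}

def data_tier_get(user):
    return _DATABASE.get(user)

# Application tier (middleware): business rules live here.
def application_tier_balance(user):
    record = data_tier_get(user)
    if record is None:
        return "unknown user"
    return f"{user} has a balance of {record['balance']}"

# Presentation tier: what the client's device renders.
def presentation_tier(user):
    return f"[screen] {application_tier_balance(user)}"

print(presentation_tier("alice"))
```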

N-tier architecture

N-tier architecture is also called multi-tier architecture. It is the scaled
form of the other three types of architecture. This architecture has a
provision for locating each function as an isolated layer, including the
presentation, application-processing, and data-management functionalities.
Internet and its History:
In its infancy, the Internet was conceived by the U.S. Department of Defense
as a way to protect government communication systems in the event of a
military strike. The original network, dubbed ARPANET (for the Advanced
Research Projects Agency that developed it), evolved into a communication
channel among contractors, military personnel, and university researchers who
were contributing to ARPA projects.

The network employed a set of standard protocols to create an effective way
for these people to communicate and share data with each other. ARPANET's
popularity continued to spread among researchers, and in the 1980s the
National Science Foundation linked several high-speed computers and took
charge of what came to be known as the Internet. By the late 1980s, thousands
of cooperating networks were participating in the Internet. The NREN (National
Research and Education Network) took up the initiative to develop and maintain
high-speed networks for research and education and to investigate commercial
uses of the Internet.

Major Internet tools and services

Internet is a worldwide collection of networks. The Internet has different tools and
services that are provided:

E-mail
Voice mail
FTP
WWW
E-Commerce
Chat
Search Engine
Electronic mail (e-mail): Messages can be sent electronically over a network.
To send or receive e-mail, the user must have an e-mail address, which takes
the form:

username@location

Username: the recipient's e-mail name.
@: the character used to separate the e-mail name and the location.
Location: the place (the electronic post office) where the recipient's mail is
delivered and stored.
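The username@location split can be illustrated in a few lines of Python. `split_address` is a hypothetical helper; full RFC 5322 address parsing is far more involved than this sketch.

```python
def split_address(address):
    """Split an e-mail address into (username, location).
    rpartition splits on the final "@", so a quoted local part that
    itself contains "@" is not mis-split."""
    username, sep, location = address.rpartition("@")
    if not sep or not username or not location:
        raise ValueError(f"not a valid address: {address!r}")
    return username, location

print(split_address("ada.lovelace@example.org"))
```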
Voice mail: Initially only text mails were being sent. If the voice information is to be
sent or received, then the user has to send it as an attachment file. But nowadays, the
new technology allows us to send and receive the voice data directly through the
Internet as voice mail. The only requirement is that the computer should have
a multimedia facility and voice-mail software.
FTP (File Transfer Protocol): A fast application-level TCP/IP protocol widely
used for transferring both text-based and binary files to and from remote
systems over the Internet. Most people use an FTP program for downloading
software from the Internet.
World Wide Web (WWW): On the Internet, different types of computers are
connected to each other, and these computers may run different operating
systems. When data is transferred between two computers with different
operating systems, both systems must understand the format of the data being
transferred. The WWW provides interactive documents and the software to access
the data on any computer.
E-Commerce: Web technologies play a very important role in business. Websites
are created to conduct business, and online trading, also called e-commerce,
is now a very important feature of the Internet. Geographical boundaries have
faded because of the Internet and e-commerce.
Chat: An Internet application. The user logs in to a chat room, gains access
to a particular room, finds the other users connected there, and starts
chatting with them.
Search Engine: Used to search for required information on the Internet by
finding home pages or websites dedicated to a particular subject. Search
engines differ in the criteria they use for indexing pages and returning
results: the size of the index, review of web pages, link priorities, meta
tags, and the importance of pages all shape a search engine's approach.
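The indexing idea behind search engines can be sketched as a toy inverted index: map each word to the set of pages containing it, so a query becomes a fast lookup instead of a scan of every page. The pages and helper names below are made up, and production engines add ranking, stemming, and much more.

```python
from collections import defaultdict

def build_index(pages):
    """Map each word to the set of page URLs that contain it."""
    index = defaultdict(set)
    for url, text in pages.items():
        for word in text.lower().split():
            index[word].add(url)
    return index

def search(index, query):
    """Return the pages containing every word of the query (AND semantics)."""
    results = [index.get(w, set()) for w in query.lower().split()]
    return set.intersection(*results) if results else set()

# Hypothetical mini-web of three pages.
pages = {
    "/ice-cream": "how to make homemade ice cream",
    "/sorbet": "how to make sorbet without cream",
    "/faq": "frequently asked questions",
}
index = build_index(pages)
print(sorted(search(index, "make cream")))
```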
