
Systems of the Knowledge-Based Society

Critical Review of Learning Resources

Name: Alan McNamara

Student Number: 59406344

Module Code: EE233

Date of Submission: 28/4/11

Plagiarism Declaration:
I declare that this material, which I now submit for assessment, is entirely my own work
and has not been taken from the work of others, save and to the extent that such work
has been cited and acknowledged within the text of my work. I understand that
plagiarism, collusion, and copying are grave and serious offences in the university and
accept the penalties that would be imposed should I engage in plagiarism, collusion, or
copying. I have read and understood the Assignment Regulations set out in the module
documentation. I have identified and included the source of all facts, ideas, opinions,
viewpoints of others in the assignment references. Direct quotations from books, journal
articles, internet sources, module text, or any other source whatsoever are
acknowledged and the sources cited are identified in the assignment references. This
assignment, or any part of it, has not been previously submitted by me or any other
person for assessment on this or any other course of study. I have read and understood
the referencing guidelines recommended in the briefing for this assignment.
Introduction:
In this report, I set out both to give a concise summary of each of the listed resources
and to critically evaluate them, having consulted a number of other sources on the same
subject matter. The overall aim of my research was, in turn, to come to a conclusion as to
what a “knowledge-based society” means to me. Research into the term defines it as “the
type of society that is needed to compete and succeed in the changing economic and
political dynamics of the modern world. It refers to societies that are well educated, and
who therefore rely on the knowledge of their citizens to drive the innovation,
entrepreneurship and dynamism of that society’s economy”. To me, though, a
knowledge-based society is one driven by knowledge, and by the desire for knowledge,
which allows us to remain at the forefront of global issues such as politics and economics.
This knowledge allows us to keep moving forward as new technological advancements are
made every day, while old technology becomes obsolete at an alarming rate. This, to me, is
incredibly exciting: new innovations have a huge impact on our lives and, in some cases,
change them forever.

Resource 1: How Internet Infrastructure Works


Summary:

This first resource gives quite an in-depth insight into how the internet works. In doing so, it
also gives a good account of the components needed in order for the global connection of
networks, which form the single entity of the internet. The article initially gives us a general
background to the internet, focusing on the notion that it is an entity that is, incredibly, not
owned by anybody, but instead regulated by the Internet Society, which was founded in
1992. As the article progresses, the reader is given an increasingly detailed look at how
different components combine in order for the internet to carry out the tasks we take for
granted every day. Firstly, the article focuses on the functions of internet routers, which are
directly responsible for sending information from one computer to another through a series
of pathways or networks, and which also prevent traffic from spilling over from one network
to another, meaning they play an essential part in the overall infrastructure of the internet.
According to the article, the backbone of the infrastructure of the internet is provided by
fiber optic trunk lines, with the first being developed by the National Science Foundation
(NSF). Nowadays, many companies operate their own high-capacity backbones, allowing
them to communicate freely with anyone else in the world. We are then given quite a
detailed look at how URLs allow us to connect to IP addresses simply using a website name
(for example howstuffworks.com), and how this is facilitated by domain name servers or
DNS. Finally, the resource provides an insight into the importance of ports in the
infrastructure of the internet, as internet users connect to a different port number for each
service a server provides.
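
To make the idea of ports more concrete, here is a minimal Python sketch of my own (not taken from the article) that opens a connection to a web server on port 80, the well-known port for HTTP; a different service, such as HTTPS on port 443, would be reached through a different port at the same address:

```python
import socket

# Connect to a web server on port 80, the well-known port for HTTP.
# The same machine could offer other services (e.g. HTTPS) on other
# ports (e.g. 443) at the same IP address.
host = "www.example.com"

with socket.create_connection((host, 80), timeout=5) as conn:
    ip, port = conn.getpeername()
    print(f"Connected to {host} at {ip} on port {port}")
```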

Critical Review:
Initially, I felt the article was a little basic and lacking in detail; in hindsight, though, I think it
serves as a good introduction to the information that follows, and gives a decent
background to the reader regarding the origins of the internet. The information that follows
was, in my opinion, well structured and easy to understand, hence satisfying the required
criteria for providing the reader with a basic explanation of a potentially over-complicated
subject. The focus shifts to the importance of routers, and their ability to send a message
from one computer to another through a series of pathways in a fraction of a second, even
if the other computer is on the other side of the world. However, I stumbled upon a slight
mistake in this section, which is probably a direct result of the article's aim to simplify the
matter. The article states that one of the jobs of a router is to “make sure that
information does make it to the intended destination”, however, additional research has led
me to believe that this is in fact handled by higher level protocols such as the Transmission
Control Protocol (TCP). This seems to be the only discrepancy of note in the article though,
as we are given a good, detailed description of how the backbone of the internet is formed,
and finally an interesting example of how the modern internet protocol (IP) address is
simplified into decimal form for humans, but is communicated by computers through the
use of binary numbers.
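
As a small illustration of this last point (my own sketch, not code from the article), the snippet below converts a dotted-decimal IPv4 address into the 32-bit binary form that computers actually exchange:

```python
# Convert a dotted-decimal IPv4 address into the binary form computers
# use. The address below is just an example value.
def ip_to_binary(address: str) -> str:
    return ".".join(format(int(octet), "08b") for octet in address.split("."))

print(ip_to_binary("216.27.61.137"))
# -> 11011000.00011011.00111101.10001001
```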

Following on from this, I found the simplified information on how the domain name system
(DNS) operates extremely helpful; it condensed a large amount of complicated
information very effectively. The howstuffworks description firstly gives an insight into the
evolution of connecting to other computers, from having to know each individual IP address
you wished to connect to, through to the DNS we have today, whereby text names are
mapped to addresses automatically, a system first created at the University of Wisconsin. It then gives
solid information on how URLs tie in with this introduction of DNS, and gives an interesting
example of how when a user types “howstuffworks.com” into a browser the DNS sets about
connecting them to the desired website, which I found benefited the article greatly.
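
This name-to-address lookup is easy to reproduce; the one-liner below (my own experiment, using Python's standard library rather than anything from the article) asks the system's DNS resolver for the address behind a text name, exactly the step a browser performs before connecting:

```python
import socket

# Resolve a text name to an IP address via the system's DNS resolver.
# The address returned will vary depending on where and when you run it.
print(socket.gethostbyname("howstuffworks.com"))
```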

Resource 2: The World Wide Web in Plain English


Summary:

This video resource provides the viewer with a brief and basic overview of how we connect
to websites via browsers. Although its treatment of how the web works is as basic as
possible, it still provides a decent, illustrated look at how we view files in web browsers:
we input the URL of the website we wish to visit, the corresponding information is
transferred from servers in the form of binary numbers, and the webpage is then displayed
for the visitor in their browser. It also goes into a bit of detail regarding the
navigation from one server to another through the use of links on web pages, but touches
on very little else of note in its relatively short running time.
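
To ground the video's description in something concrete, here is a minimal sketch of my own (using only Python's standard library; the video itself contains no code) of the request a browser makes once a URL has been entered:

```python
import http.client

# The essence of what a browser does with a URL: connect to the named
# server, request the page, and read back the HTML to be rendered.
conn = http.client.HTTPSConnection("www.example.com")
conn.request("GET", "/")
response = conn.getresponse()
print(response.status, response.reason)  # e.g. "200 OK"
html = response.read().decode()          # the file the browser would display
conn.close()
```
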
Critical Review:

Since the obvious appeal of this particular resource is to provide a simplified description of
how the world wide web works, it is difficult to be too critical of it. However, the lack of
detail is difficult to excuse; this is partially down to the fact that I researched this resource
directly after the much more detailed “howstuffworks” article on the same subject. Even so,
most of the material covered in the video would be common knowledge in this day of high
computer literacy. I, for one, learned next to nothing from this resource, which would lead
me to question its validity as a so-called “learning resource”. Having said this, it may prove a
useful tool for those with a very limited knowledge of how the web works, and its concise
and matter-of-fact tone serves to inform without confusing the viewer. I feel a little more
detail would hugely benefit this resource, even just by expanding upon how our internet
connection is provided through telephone lines and satellites, as opposed to simply stating
the fact and moving on to how we receive “packets of code” through web browsers. I also
located a similar video which sets out to inform on the same subject, and does so in much
more detail. Despite not being much longer in running time, it is far more informative on
topics such as how URLs work, how HTML is used by both the server and the browser, and
how files are transferred from servers to browsers.

Resource 3: How Internet Search Works


Summary:

This article sets out to provide the reader with a comprehensive look at how search engines
operate. It begins with a fairly basic introduction before describing how search engines
employ “spiders” to crawl the web and return search results for users, with different types
of spiders being used by different search engines in order to return different types of results,
e.g. quicker, or more detailed. We are then introduced to the idea of “meta tags”, which are
pieces of code that are part of the HTML code of websites, but are becoming less and less
common amongst major search engines, with Google’s spiders now only using “Google Meta
Tags” when searching for websites. Finally, we are told how the search engine builds an
index and handles a search query, before we are given a brief insight into the future of search.
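
To illustrate the spider-and-index idea in the summary above, here is a toy Python sketch of my own (a drastic simplification, not the code any real search engine uses): it fetches a page, records each word against the page's URL in an inverted index, and returns the links it finds for further crawling:

```python
import re
import urllib.request
from collections import defaultdict

# Toy "spider": fetch one page, add its words to an inverted index
# (word -> set of URLs containing it), and return the links found.
index = defaultdict(set)

def crawl(url):
    html = urllib.request.urlopen(url, timeout=5).read().decode(errors="ignore")
    for word in re.findall(r"[a-z]+", html.lower()):
        index[word].add(url)
    return re.findall(r'href="(https?://[^"]+)"', html)

links = crawl("https://www.example.com/")
print(sorted(index["example"]))  # URLs where the word "example" appears
```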

Critical Review:

Overall, I found this to be a useful learning resource. It benefits greatly from its
introduction, which lays a good basis for the rest of the article to build upon, as it ensures
the reader understands the basic principles of the article from the outset. The next section
on how different search engines employ different strategies in order to return search results
was, in my opinion, very interesting. Although it provides the reader with a good insight into
how these different strategies, such as the use of insignificant words, or lack thereof,
determine what results a user will receive, I found it to be a little speculative. For example,
when discussing web crawlers that analyse a relatively low amount of information in order
to return the quickest possible results, the article states “Lycos is said to use this approach”,
which is obviously not a definitive statement, and also fails to provide a source or any basis
for the assumption, which I found to be quite detrimental to the article.
Independent research on this topic returned some beneficial results. According to a
different source, “The Lycos web crawler does not weigh META tags heavily. Indexing is
based on an algorithm that takes a look at components of the URL name, Meta title, text
body headings and subheadings, how frequently words appear, where these words appear
in relationship to one another, as well as a document abstract”. Although this is similar
information to that provided by the original “howstuffworks” article, it is much more
detailed and helpful as a learning resource, in my opinion.
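
As a rough, hypothetical illustration of the kind of weighting the quoted description hints at (my own toy sketch; the real Lycos algorithm is not public), a scoring function might weight matches in the title more heavily than matches in the body:

```python
# Toy relevance score loosely inspired by the quoted description:
# title matches count more than body matches, and frequency matters.
# Real engines combine far more signals than this.
def score(query, title, body):
    total = 0.0
    title_words = title.lower().split()
    body_words = body.lower().split()
    for term in query.lower().split():
        total += 3.0 * title_words.count(term)  # title weighted higher
        total += 1.0 * body_words.count(term)
    return total

print(score("internet search",
            "How Internet Search Works",
            "Search engines crawl the internet to build an index."))  # 8.0
```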

Resource 4: How Search Engine Optimization Works


Summary:

This resource gives a brief and concise overview of search engine optimization (SEO),
how it works, and the techniques that can be used in order to improve a website’s position
on a search engine results page (SERP). In doing so, it gives the reader a brief background
into the need for webmasters to use SEO, such as increased traffic, which leads to increased
profits through advertisements and other forms of revenue. It also deals with how
webmasters can manipulate the SEO system through a series of white hat (legitimate)
techniques and black hat (illegitimate) techniques. Finally, the reader is made aware of the
obstacles that hinder SEO, namely the need to satisfy both the visitor to a particular
website and the search engine’s “spiders”.

Critical Review:

I found this article to be quite informative and comprehensive on a topic that is far from
straightforward. The fascinating thing about SEO is that many of the “signals” used by search
engines such as Google and Yahoo are unknown, but heavily speculated upon. Despite this, I felt
that the “howstuffworks” article provided a good overall insight into the world of SEO, along
with the “white hat” and “black hat” techniques being employed by webmasters in order to
get onto that coveted first page of search results. While some techniques may be obvious,
like putting keywords in the title of the page, others, such as link analysis (how many other
websites link to the page in question), would not have occurred to me previously. The article
also does a good job of handling the illegitimate, or “black hat”, techniques used to optimise
a website’s performance on a SERP. Keyword stuffing is one such technique, in which keywords
are printed numerous times at the bottom of a webpage, or blended into the background where
a site visitor won’t see them but a search engine spider or crawler, which reads the HTML
code, will; the extra keywords then inflate the page ranking. While this may seem beneficial initially,
search engines are well aware of tactics such as these and will eventually ban such websites. This
renders black hat techniques unviable in the long term.
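
It is easy to see why such tactics are simple for search engines to detect; the crude density check below (my own hypothetical sketch, not any engine's actual filter, with an arbitrary 5% threshold) flags pages where one word dominates the text:

```python
import re

# Crude keyword-density check: flag a page if any single word makes up
# an implausibly large share of its text. The 5% threshold is arbitrary,
# chosen purely for illustration.
def looks_stuffed(html, threshold=0.05):
    words = re.findall(r"[a-z]+", html.lower())
    if not words:
        return False
    top = max(set(words), key=words.count)
    return words.count(top) / len(words) > threshold

page = "<p>cheap flights cheap flights cheap flights book now</p>"
print(looks_stuffed(page))  # True: "cheap" and "flights" dominate
```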

It is hard to criticise this article for being a little vague in places, as it is easy to understand
the reason for this. The exact “signals” taken into account for SEO are still kept well under
wraps by the search engines, and while this continues we can expect our knowledge of SEO
to remain limited. However, one thing that is certain, and that is backed up by a number of
different sources, is that the most important factor of all is to ensure your website has the
best possible content available on the internet for a given subject.

Resource 5: Web Analytics


Summary:

Web analytics is “the measurement, collection, analysis and reporting of internet data for
purposes of understanding and optimizing web usage”. In this detailed Wikipedia article, we
are initially given a broad overview of the main reasons for the use of web analytics, and the
benefits behind it, before being introduced to the technical details, and how it works.
According to the article, there are two types of web analytics: off-site and on-site. Off-site
analytics concerns the measurement of a website’s potential audience, share of voice, and buzz
or comments on the internet as a whole, while on-site analytics deals with what visitors do once
they are actually on your website, i.e. which pages garner higher levels of traffic, etc. Web
analytics arose through web server log file analysis, as it was discovered that data stored by
web servers could be read and then analysed through the use of a program, which would then
provide information on the popularity of a certain website. Page tagging, a second data
collection service, was then developed as an alternative to log file analysis due to concerns
over the accuracy of the log file analysis in the presence of caching, which is a mechanism
for the temporary storage of information on the web, and also the desire to outsource web
analytics to a third party. The article then summarises the advantages of both methods
of carrying out web analytics, with the main benefit of logfile analysis
being the easy availability of data, while page tagging’s ability to account for cached page
requests is a huge advantage, as cached pages can account for up to one-third of all page
requests. Following on from this, lesser-used methods of web analytics are
briefly described, such as hybrid methods, which use a combination of both logfile analysis
and page tagging, click analytics, which determines how successful a website is based on
where users of the site are clicking, and lastly, customer lifecycle analytics, which attempts
to connect all the data points together in order to build a picture of each visitor’s
preferences. Finally, we are given an account of different factors which can lead to
confusion in web analytics. The first and most common of these for inexperienced users of
web analytics is the “hotel problem”, which arises when the sum of unique visitors counted
for each day in a period does not equal the number of unique visitors counted by the web
analytics software for the period as a whole. Another common problem listed is that new
visitors plus repeat visitors do not necessarily equal total visitors. This is because if a person
visits a website for the first time on a given day and then revisits later that day, they are
counted as both a new visitor and a repeat visitor, leading to inaccuracies in the count.
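
The “hotel problem” is easiest to see with a small worked example (my own sketch with made-up visit data, not figures from the article): summing each day's unique visitors overstates the period's unique visitors whenever somebody returns on a later day:

```python
# Made-up visit data: user IDs seen on each of two days.
visits = {
    "Monday":  ["alice", "bob", "carol"],
    "Tuesday": ["alice", "bob"],
}

# Adding up each day's unique visitors gives 5...
daily_sum = sum(len(set(users)) for users in visits.values())

# ...but only 3 distinct people actually visited over the period.
period_unique = len({user for users in visits.values() for user in users})

print(daily_sum, period_unique)  # 5 3
```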

Critical Review:

I found this article to be incredibly useful as a learning resource. Not only is it detailed,
providing the reader with a good account of many different aspects of web analytics, it is
also very well laid out and easy to navigate, thanks to the links at the top of the page which
made it easy to locate the information I needed. The resource gave a good background to
the topic of web analytics, before giving quite a comprehensive overview of the different
methods of carrying it out, advantages of these methods, and also problems associated with
web analytics as a whole. The only real criticism I had of the article as a learning resource
was the noticeable lack of a list of disadvantages for both logfile analysis and page tagging,
as it chooses solely to focus on the advantages of each which makes it difficult for the user
to determine which of the two methods is the better or more accurate form of carrying out
web analytics. For this reason, I had to resort to other sources in order to learn more about
the disadvantages of each method, and why one would be chosen over the other depending
on what kind of information is desired. For example, log file analysis does not facilitate
event tracking, and large amounts of disk space are required in order to store its
history. As for page tagging, it is unable to track completed, partial, or aborted downloads,
and deleted or rejected cookies can lead to inaccurate information. However, from the
information in the Wikipedia article, it is clear to see that where one method of web
analytics fails, the other one succeeds, with both having their obvious advantages and
disadvantages. This has led me to believe that using both of these methods in conjunction
with each other would give webmasters the most accurate results, as they would
counter each other’s weaknesses and provide the best possible information. Another
criticism I have of the resource is that it refrains from providing examples of what types of
companies use log file analysis or page tagging. From more independent research into the
topic, I learned that page tagging is used primarily by outsourced, third-party services such as
Google Analytics, while log file analysis is typically preferred by standalone vendors and
takes a lot more resources to maintain.

Resource 6: “Little Brother” by Cory Doctorow


Summary:

Little Brother follows the story of the 17-year-old, diminutive, technology-loving Marcus
Yallow. From the outset we are explicitly shown how technology plays a huge role in
Marcus’ life, as he skips school at the start of the novel in order to participate in “Harajuku
Fun Madness”, which is an internet-based scavenger hunt invented by him and his friends.
While he is participating in this, a terrorist attack occurs on San Francisco’s Bay Bridge, which
eventually leads to him and his friends being detained by the Department of Homeland
Security (DHS) for interrogation in the immediate aftermath of the attack. Following three
days of interrogation, Marcus and two of the three friends he began the game with are
released, with the fourth, Darryl, nowhere to be seen. Upon returning home, Marcus
finds a society totally changed from the one he left, one of unashamed vigilance and
surveillance. However, this is where one of Marcus’s strongest traits comes into play: his
self-confidence and cockiness. He uses his knowledge of technology in order to stay below
the radar and remain in contact with his group of friends and sets about plotting his revenge
upon the DHS for what he views as a humiliating and unnecessary detention, and also for his
friend Darryl, who was stabbed in the chaos that ensued after the terrorist attack. What
ensues is a tale of a small group of activists standing up for what they believe in, and who
are willing to do whatever it takes in order to accomplish their goals in the face of adversity.
The group of friends, led by Marcus, use their love and knowledge of technology in order to
create modified machinery that will allow them to stay out of trouble with the DHS, and
ultimately fight back against them. This becomes increasingly important as the story
develops due to Marcus’ growing feeling of paranoia, which becomes a main theme of the
novel. In the end, a win-win situation of sorts comes about. Marcus gets his revenge upon
the DHS and also gets his friend Darryl back, while the DHS fines Marcus heavily and also
manages to expose the underground network he assembles over the course of the story.

Critical Review:

Little Brother most definitely qualifies as a “coming of age” tale, as the events in the story
lead to Marcus standing up for himself and his close friends through the use of the tools he
trusts the most, his modified technology. The character development was, in my opinion,
done very well, as we see Marcus come out of his shell progressively as the story continues,
and also see how emotions such as paranoia and fear begin to torment him. There is no
doubting that technology plays a huge role in this story: it provides Marcus with an
overwhelming sense of self-confidence which then pervades him as a whole, leading him to
tackle the DHS and ultimately succeed with the love interest in the story, Van. Perhaps
the most interesting aspect of the story was, in my opinion, how the author uses both
existing and non-existing technology together in order to create a story that is both exciting
and interesting. A perfect example of this is the way in which Marcus creates his own Linux
setup, which he calls ParanoidLinux, and runs it on his Xbox Universal to create the
ParanoidXbox, which he uses to communicate with his friends. This shows how the author has
used existing technology and expanded upon it in order to facilitate the story. Little Brother
can almost be seen as a manual-of-sorts for some types of technology, such as RFID (radio
frequency identification) cloners, cryptography and Bayesian maths, as we are given
detailed insights into these topics. However, I did feel at times that the detailed
explanations of certain technologies drastically slowed the pace of the book and made it a
little tedious; on the other hand, they ensured that the book could be as technical as it
wanted by informing the reader of the things that needed to be known in order to follow
the story completely.

Overall, I found Little Brother to be a very good book, utilising technology brilliantly in order
to create a complex and engaging world that many other books can only dream about. The
character development is also excellent, as we are treated to a story of how Marcus has to
overcome his apprehensive tendencies in order to carry out his revenge on the DHS. A line
that really resonated with me was “the best part of all this is how it made me feel: in
control. My technology was working for me”, as it summarises a good deal of the book in
few words, how Marcus felt in control of everything in his own little world of modified
technology.

Conclusion
Upon the completion of this assignment, I most definitely feel it has allowed me to express
what I have learned through the module, and that I have successfully achieved the intended
learning outcomes. Each of the six resources I have summarised and reviewed has given
me an insight, one way or another, into different aspects of the course, and I will definitely
take away much of what I have learned, as it has given me detailed information on
technologies that play a critical part in my life, and in the lives of countless others around the
world. Another thing I have learned from this assignment is to ensure that any resource
used is backed up by another, as I noticed that many resources contain
discrepancies and/or are significantly complemented by additional independent research, if
even just to obtain a contrasting opinion on a certain subject.
