Emerging Tech or Trends or Legal Issues in World of Tech in 2008 To 2010 (Pankil Patel-42)


Project Report

on
Emerging
tech/trends/legal
issues in world of tech
in 2008 to 2010

Prepared by:

PANKIL PATEL

(MEB-3, Roll No- 42)

Submitted to:

PAWAN DUGGAL

Contents

 ‘Green IT’ – the next burning issue for business

 Unified Communications

 Business Process Modeling

 Metadata Management

 Virtualization 2.0

 Mashup & Composite Apps

 Web Platform & WOA

 Computing Fabric

 Real World Web

 Social Software

 JPMorgan Predicts 2008 Will Be “Nothing But Net”

 World Wide Web: Land of Free Stuff

Summary

This report covers the emerging technologies, trends and legal issues in the world of tech from 2008
to 2010 listed above – technologies that will change the world and the way IT is used. Most
people are not yet aware of these emerging technologies, but they are a fact of our changing lives,
and we will all come to accept and use them.

Green IT. The focus of Green IT that came to the forefront in 2007 will accelerate and expand in
2008. Consider potential regulations and have alternative plans for data center and capacity
growth. Regulations are multiplying and have the potential to seriously constrain companies in
building data centers, as the impact on power grids, carbon emissions from increased use and
other environmental impacts are under scrutiny. Some companies are emphasizing their social
responsibility behavior, which might result in vendor preferences and policies that affect IT
decisions. Scheduling decisions for workloads on servers will begin to consider power efficiency
as a key placement attribute.

Unified Communications. Today, 20 percent of the installed base with PBX has migrated to IP
telephony, but more than 80 percent are already doing trials of some form. Gartner analysts
expect the next three years to be the point at which the majority of companies implement this, the
first major change in voice communications since the digital PBX and cellular phone changes in
the 1970s and 1980s.

Business Process Modeling. Top-level process services must be defined jointly by a set of roles
(which include enterprise architects, senior developers, process architects and/or process
analysts). Some of those roles sit in a service oriented architecture center of excellence, some in
a process center of excellence and some in both. The strategic imperative for 2008 is to bring
these groups together. Gartner expects BPM suites to fill a critical role as a complement to SOA
development.

Metadata Management. Through 2010, organizations implementing both customer data
integration and product information management will link these master
data management initiatives as part of an overall enterprise information management (EIM)
strategy. Metadata management is a critical part of a company’s information infrastructure. It
enables optimization, abstraction and semantic reconciliation of metadata to support reuse,
consistency, integrity and shareability. Metadata management also extends into SOA projects
with service registries and application development repositories. Metadata also plays a role in
operations management with CMDB initiatives.

Virtualization 2.0. Virtualization technologies can improve IT resource utilization and increase
the flexibility needed to adapt to changing requirements and workloads. However, by
themselves, virtualization technologies are simply enablers that help broader improvements in
infrastructure cost reduction, flexibility and resiliency. With the addition of automation
technologies – with service-level, policy-based active management – resource efficiency can
improve dramatically, flexibility can become automatic based on requirements, and services can

be managed holistically, ensuring high levels of resiliency. Virtualization plus service-level,
policy-based automation constitutes a real-time infrastructure (RTI).

Mashup & Composite Apps. By 2010, Web mashups will be the dominant model (80 percent)
for the creation of composite enterprise applications. Mashup technologies will evolve
significantly over the next five years, and application leaders must take this evolution into
account when evaluating the impact of mashups and in formulating an enterprise mashup
strategy.

Web Platform & WOA. Software as a service (SaaS) is becoming a viable option in more
markets and companies must evaluate where service based delivery may provide value in 2008-
2010. Meanwhile, Web platforms are emerging which provide service-based access to
infrastructure services, information, applications, and business processes through Web based
“cloud computing” environments. Companies must also look beyond SaaS to examine how Web
platforms will impact their business in 3-5 years.

Computing Fabric. A computing fabric is the evolution of server design beyond the interim
stage, blade servers, that exists today. The next step in this progression is the introduction of
technology to allow several blades to be merged operationally over the fabric, operating as a
larger single system image that is the sum of the components from those blades. The fabric-based
server of the future will treat memory, processors, and I/O cards as components in a pool,
combining and recombining them into particular arrangements to suit the owner’s needs. For
example, a large server can be created by combining 32 processors and a number of memory
modules from the pool, operating together over the fabric to appear to an operating system as a
single fixed server.

Real World Web. The term “real world Web” is informal, referring to places where information
from the Web is applied to the particular location, activity or context in the real world. It is
intended to augment the reality that a user faces, not to replace it as in virtual worlds. It is used in
real-time based on the real world situation, not prepared in advance for consumption at specific
times or researched after the events have occurred. For example, in navigation, a printed list of
directions from the Web does not react to changes, but a GPS navigation unit provides real-time
directions that react to events and movements; the latter case is akin to the real-world Web of
augmented reality. Now is the time to seek out new applications, new revenue streams and
improvements to business processes that can come from augmenting the world at the right time,
place or situation.

Social Software. Through 2010, the enterprise Web 2.0 product environment will experience
considerable flux with continued product innovation and new entrants, including start-ups, large
vendors and traditional collaboration vendors. Expect significant consolidation as competitors
strive to deliver robust Web 2.0 offerings to the enterprise. Nevertheless, social software
technologies will increasingly be brought into the enterprise to augment traditional collaboration.

“These 10 opportunities should be considered in conjunction with many proven, fully matured
technologies, as well as others that did not make this list, but can provide value for many
companies,” said Carl Claunch, vice president and distinguished analyst at Gartner.

“For example, real-time enterprises providing advanced devices for a mobile workforce will
consider next-generation smart phones to be a key technology, in addition to the value that this
list might offer.”

‘Green IT’ – the next burning issue for business

Executive summary

It is becoming widely understood that the way in which we are behaving as a society is
environmentally unsustainable, causing irreparable damage to our planet. Rising energy prices,
together with government-imposed levies on carbon production, are increasingly impacting on
the cost of doing business, making many current business practices economically unsustainable.

It is becoming progressively more important for all businesses to act (and to be seen to act) in an
environmentally responsible manner, both to fulfill their legal and moral obligations and to
enhance the brand and improve corporate image.

Companies are competing in an increasingly ‘green’ market, and must avoid the real and growing
financial penalties that are increasingly being levied against carbon production.

IT has a large part to play in all this. With the increasing drive towards centralized mega data
centers alongside the huge growth in power-hungry blade technologies in some companies, and
with a shift to an equally power-hungry distributed architecture in others, the IT function of
business is driving an exponential increase in demand for energy, and, along with it, is having to
bear the associated cost increases.

The problem

Rising energy costs will have an impact on all businesses, and all businesses will increasingly be
judged according to their environmental credentials, by legislators, customers and shareholders.
This won’t just affect the obvious, traditionally power-hungry ‘smoke-belching’ manufacturing
and heavy engineering industries, and the power generators. The IT industry is more vulnerable
than most – it has sometimes been a reckless and profligate consumer of energy.

Development and improvements in technology have largely been achieved without regard to
energy consumption.

The impact

Rising energy costs and increasing environmental damage can only become more important
issues, politically and economically. They will continue to drive significant increases in the cost
of living, and will continue to drive up the cost of doing business. This will make it imperative
for businesses to operate as green entities, risking massive and expensive change.

Cost and environmental concern will continue to force us away from the ‘dirtiest’ forms of
energy (coal/oil), though all of the alternatives are problematic. We may find ourselves facing a
greater reliance on gas, which is economically unstable and whose supply is potentially insecure,
or at least unreliable. It may force greater investment in nuclear power, which is unpopular and
expensive, and it may lead to a massive growth of intrusive alternative energy infrastructure –
including huge wind farms, or the equipment needed to exploit tidal energy.

Solving the related problems of rising energy costs and environmental damage will be extremely
painful and costly, and those perceived as being responsible will be increasingly expected to
shoulder the biggest burden of the cost and blame. It may even prove impossible to reduce the
growth in carbon emissions sufficiently to avoid environmental catastrophe.

Some believe that the spotlight may increasingly point towards IT as an area to make major
energy savings, and some even predict that IT may even become tomorrow’s 4x4/SUV, or
aviation – the next big target for the environmental lobby, and the next thing to lose public
support/consent.

The solution

A fresh approach to IT and power is now needed, putting power consumption at the fore in all
aspects of IT – from basic hardware design to architectural standards, from bolt-on point
solutions to bottom-up infrastructure build.

IBM has a real appreciation of the issues, thanks to its size, experience and expertise, and can
help its customers to avoid the dozens of ‘wrong ways’ of doing things, by helping to identify the
most appropriate solutions.

There is a real, economic imperative to change arising now, and it is not just a matter of making
gestures simply to improve a company’s environmental credentials.

The cost of power

The whole topic of energy consumption is gaining increased prominence in Western Europe as a
consequence of rising energy prices, and as a result of a growing focus on global warming and
the environment.

The company bottom line

Energy prices rose during 2005 for a third consecutive year, driven by war in the Middle East,
‘tight’ capacity, extreme weather and a focus on energy among investors. The price of a barrel of
Brent Crude reached US$50 for the first time (after a 40% increase since 2004), and UK and US
natural gas prices also hit record highs. Global energy supplies were maintained even in the face
of continuing conflict in the Middle East and despite the disruptive effects of the hurricanes that
hit the US Gulf Coast, but concerns as to the security of energy supplies have increased, not least
after Russia interrupted natural gas supplies to the Ukraine.

Energy costs for UK businesses have increased by 57% during the last 12 months, and now form
a really significant element of operational expenses, often greater than IT equipment
depreciation, and sometimes greater than real-estate costs. Energy costs form a growing
proportion of IT costs, which are increasing (despite the headline reductions in some hardware
prices). “For every dollar spent on IT equipment, $3 to $4 is spent on operating it through life,”
according to Andrew Fanara, team leader for the US Environmental Protection Agency’s Energy
Star program.

With shrinking reserves and growing demand, there can be little doubt that energy prices will
continue to grow, and inflation is likely to accelerate. The continued lack of sources of
alternative, clean, green and renewable energy (which remains expensive and statistically
insignificant) means that the penalties associated with environmental impact (such as carbon
taxes) will continue to increase.

The environmental bottom line

According to BP’s 2005 ‘Statistical Review of World Energy’, the world still has some 40 years
of oil reserves if demand remains static, though proven reserves are still growing, albeit slowly.
The bulk of reserves are located in the Middle East (61.9%), with 22% in Saudi Arabia, but with

significant reserves in Iran (11.5%) and Iraq (9.6%). Russia and Kazakhstan are together
responsible for another 9.5%, leading to real concerns about the long-term security of supply. Oil
consumption increased by just 1.3% in 2005.

The same source predicts 65 years of natural gas reserves, with 26.6% in Russia, 14.9% in Iran
and 14.3% in Qatar. No other country has more than 4% of global reserves, making security of
supply even more of a concern. Gas consumption increased by 2.3% during 2005.

The effect on the environment is potentially disastrous, with rising oil and gas prices now
triggering a real switch to coal – the dirtiest and most polluting energy source. Though European
and Eurasian coal consumption rose by just 0.4%, coal was again the world’s fastest growing
fuel, with the growth in consumption reaching 5%, double the 10-year average. Perhaps most
worryingly, coal consumption rose fastest in Asia, especially in China (10.9%), and India (4.8%).
The latter two economies now consume more than twice as much coal as the US (where the
consumption of coal rose by 1.9%), accounting for almost half (47%) of the global total. Growth
in Thailand (12%), Turkey (14%), Pakistan (14.8%) and the Philippines (17.7%) was even more
rapid. Even in the UK, coal has enjoyed a minor renaissance, with the re-opening of the Hatfield
Colliery in South Yorkshire, which had closed in 2004. Coal can thus be seen to be fuelling the
growth of the world’s most dynamic economies. Moreover, with some 155 years of reserves, at
current rates of use, coal will continue to be important for decades to come.

The growing importance of coal will only focus attention on energy consumption among the
public, customers and shareholders, all of whom are becoming increasingly environmentally
aware. Reducing energy consumption will become an environmental imperative, as well as an
economic necessity.

Energy consumption increasingly has a real effect on an organization’s reputation and corporate
image. Though there are sources of clean, green and renewable energy, these remain expensive
and statistically insignificant, and it is still impractical (if not actually impossible) for a major
energy consumer to limit itself to using renewable energy.

It is widely assumed that a typical computer draws about 0.65 kilowatts (kW) when in use,
0.35kW in standby and 0.03kW in hibernate mode.

Assuming that the computer spends 220 working days with 12 hours in operational mode
(1,716kWh) and 12 hours in standby mode (924kWh), and spends 24 hours in hibernate mode for the
remaining 145 days (104kWh), it will consume around 2,145kWh of electricity per year.

According to UK government figures, 1kWh produces 0.51kg of carbon dioxide (CO2), and
1,960kWh produces 1 tonne of CO2. This makes allowance for the fact that with current nuclear
capacity (which is reducing) some 15% of electricity is generated without producing any CO2.

This means that a single PC in office mode costs an insignificant amount to run (£16.00 per
annum), but generates 1.094 tonnes of CO2 per annum – equivalent to the CO2 produced by a single
passenger flying from London to Cairo. Spread this across a distributed desktop environment of
2,000 PCs and you have an annual carbon footprint of 2,188 tonnes of CO2.
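As a rough check on the method behind these figures, the sketch below multiplies power draw by hours of use per mode and converts the result with the 0.51kg/kWh factor quoted above. The duty cycle and power draws are the ones assumed in the text; the electricity tariff behind the £16.00 figure is not stated, so running cost is not computed.

```python
# Sketch of the per-PC energy and CO2 arithmetic described above.
# Power draws, duty cycle and the 0.51 kg CO2/kWh factor come from the text;
# no electricity tariff is given, so cost is left out.

POWER_KW = {"operational": 0.65, "standby": 0.35, "hibernate": 0.03}
CO2_KG_PER_KWH = 0.51   # UK government conversion factor quoted above

def annual_energy_kwh(working_days=220, other_days=145):
    return (POWER_KW["operational"] * 12 * working_days   # ~1,716 kWh
            + POWER_KW["standby"] * 12 * working_days     # ~924 kWh
            + POWER_KW["hibernate"] * 24 * other_days)    # ~104 kWh

def annual_co2_tonnes(kwh):
    return kwh * CO2_KG_PER_KWH / 1000.0

kwh = annual_energy_kwh()
print(f"Energy per PC: {kwh:,.0f} kWh/year")
print(f"CO2 per PC:    {annual_co2_tonnes(kwh):.2f} tonnes/year")
```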

A history – and the future – of increasing power consumption

Many of today’s motor cars and car engines are increasingly poorly suited to today’s demand for
economy and fuel efficiency, having been designed when oil prices were low and when
performance, space and comfort were the most important design drivers. Each new car model
since the Model T was therefore designed to out-perform its predecessors. Only now are fuel
economy and environmental ‘friendliness’ becoming more important than speed and
horsepower.

The situation is similar in the IT industry, which has seen a concentration on processing power
and storage capacity, while power consumption has been ignored. As in the automotive industry,
energy consumption was regarded as being much less important than performance.

As manufacturers competed to create ever-faster processors, smaller and smaller transistors
(running hotter and consuming more electricity) were used to form the basis of each new
generation of processors. Increased operating temperatures added to the consumption of power,
requiring more and more cooling fans.

Modern IT systems provide more computing power per unit of energy (kWh) and thus reduce
energy consumption per unit of computing power. Despite this, they are actually responsible for

an overall increase in energy consumption, and for an increase in the cost of energy as a
proportion of IT costs.

This is because users are not simply using the same amount of computing power as before, while
using the new technology to reduce their power consumption (or operating temperatures), nor are
they using technology to leverage savings in energy costs or in CO2 production. Instead, users
are taking and using the increased computing power offered by modern systems.

New software in particular is devouring more and more power every year. Some software
requires almost constant access to the hard drive, draining power much more rapidly than
previous packages did.

The advent of faster, smaller chips has also allowed manufacturers to produce smaller, stackable
and rackable servers, allowing greater computing power to be brought to bear (and often shoe-
horned into smaller spaces) but with no reduction in overall energy consumption, and often with
a much greater requirement for cooling.

Despite the trend towards server virtualization and consolidation in some companies, business
demand for IT services is increasing, and many companies are still expanding their data centers,
while the number of servers in such data centers is still increasing annually by about 18%.

While the growth in demand for energy did slow down in 2005 (going from a 4.4% rise to just
2.7%, globally) and though the demand for energy actually fell in the USA, the International
Energy Agency has predicted that the world will need 60% more energy by 2030 than it does
today.

“A typical 10,000-square-foot data centre consumes more electricity than 8,000 60-watt
light bulbs. That represents six to 10 times the power needed to operate a typical office
building at peak demand, according to scientists at Lawrence Berkeley National
Laboratory. Given that most data centers run 24x7x365, the companies that own them
could end up paying millions of dollars this year just to keep their computers turned on.”

Forrester Research estimates that data centers require 0.5 to 1 watt of cooling power for
each watt of server power used, and that a typical x86 server consumes between 30% and
40% of its maximum power when idle.
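To see how those ratios translate into facility load, here is a minimal sketch. The server count, the 400W nameplate rating and the utilization level are invented for illustration; the idle-draw and cooling ratios are the Forrester figures quoted above.

```python
# Rough facility-power estimate using the ratios quoted above.
# Server count, nameplate power and utilization are illustrative assumptions;
# the 0.5-1.0 cooling ratio and 30-40% idle draw are the cited Forrester figures.

def facility_power_kw(servers, max_watts_each, busy_fraction,
                      idle_draw=0.35, cooling_ratio=0.75):
    """Estimate total IT-plus-cooling power for a group of servers, in kW."""
    busy = servers * busy_fraction * max_watts_each
    idle = servers * (1 - busy_fraction) * max_watts_each * idle_draw
    it_load_w = busy + idle
    cooling_w = it_load_w * cooling_ratio   # 0.5-1.0 W of cooling per W of IT load
    return (it_load_w + cooling_w) / 1000.0

# e.g. 100 servers rated at 400 W, with 30% of them busy at any moment
print(f"{facility_power_kw(100, 400, 0.30):.1f} kW")
```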

Back to the data centre – the hot problem

In many companies, there has been a shift away from dedicated data centers, as part of an
attempt to provide all IT requirements by using smaller boxes within the office environment.
Many have found this solution too expensive, experiencing a higher net spend on staff as well as
higher support costs. Energy consumption of distributed IT environments is difficult to
audit, but some have also noted a progressive increase in power consumption with the move
from centralized to decentralized, then to distributed architecture, and finally to mobility-based
computing.

Even where distributed computing remains dominant, the problems of escalating energy prices
and environmental concerns are present, albeit at a lower order of magnitude than in the data
centre environment, and even though the problems are rather more diffuse and more difficult to
solve.

Some analysts believe that there is already a trend away from distributed computing back to the
data centre, with consolidation and centralization on the rise again. Within a data centre/server
environment, technological improvement is driving requirements for greater energy into the
building, for increased floor area and for increased cooling capacity.

This may be counter-intuitive, since the emergence of blade servers superficially promised to
allow the more efficient use of data centre floor space, by packing more high-performance
servers into a single rack.

However, this increase in computing power and server numbers for a given floor area multiplies
cooling problems, since air is an inefficient medium for cooling computers and empty space alone
is insufficient to give adequate cooling. Air conditioning and other cooling techniques are
required to keep temperatures in check. A typical 1980s server could be cooled quite easily, but
though a modern server takes up much less floor space, it is more difficult to cool, and requires

more space around it. Though it will require less power per unit of computing power, its overall
energy requirement will be considerably higher, and the need for improved cooling will further
increase energy requirements – and environmental impact, of course. Analysts at Gartner
recently suggested that by the end of 2008, 50% of the data centers would not have enough
power to meet the power and cooling requirements of the new equipment used in high-density
server environments.

The new systems are more compact and of higher density, and can call for more localized power
and cooling than will typically be found in an existing data centre environment. A blade server
system set up in a single rack can easily weigh more than a tonne, and can in theory call for more
than 30kW of power – more than 10 times what would have been required a few years ago.

According to Sun Microsystems engineers, a typical rack of servers installed in data centers just
two years ago might have consumed a modest 2kW of power while producing 40 watts of heat
per square foot. Newer, high-density racks, expected to be in use by the end of the decade, could
easily consume as much as 25kW and give off as much as 500 watts of heat per square foot. The
energy consumed by fans, pumps and other cooling components already accounts for some 60-
70% of the total energy consumption in the data centre, and Gartner predicts that energy costs
will become the second highest cost in 70% of the world’s data centers by 2009, trailing
staff/personnel costs, but well ahead of the cost of the IT hardware.

It is now believed that in most data centers, particularly those located in single-story industrial-
type buildings, electrical costs are already more than two to three times greater than real-estate
costs, and many existing data centre buildings may be physically incapable of providing the
higher levels of power and cooling that are now required.

Because IT equipment is usually depreciated every two to three years, investment in new
hardware is relatively easy, whereas new data centre equipment (including air conditioning,
universal power supplies and generators) are more usually depreciated over 20 years, making
new investment more difficult. Investing in new buildings may be even more problematic.
It is thus difficult and costly to build your way out of power consumption and heat problems.

The increasing drive toward server consolidation in an effort to improve operating costs and
operational efficiency is further aggravating the problems of increasing energy consumption, and
increased heat generation. Thus, data centre managers must focus on the electrical and cooling
issue as never before.

There are cheap, quick-fix ‘point’ solutions that provide ‘strap-on’ cooling by retrofitting blowers
and/or water-cooling systems. Installing water jackets on the server racks allows one to build a
much smaller, denser and more efficient data centre. But although liquid cooling is more efficient
than air-conditioning, it is still a short-term, stop-gap answer.

Much greater efficiencies and greater cost savings can be leveraged by addressing the underlying
problem and by using longer-term solutions. This is likely to entail redesigning and
reconfiguring the data centre, however, which obviously requires more long-term investment and
a fresh approach to IT, with power consumption at front of mind.

An IBM pSeries 575 weighs as much as a family car (1,367kg) and draws 32kW of power –
enough to power a 2,500 square foot home!

Estimates suggest that data centers waste 875,000,000kWh of energy per year – this is
equivalent to 436,000,000kg of CO2 emissions annually.

Strategies for change

The whole purpose of IT is to make businesses more productive and efficient, and to save money.
Businesses are competitive bodies, used to having to ‘do more with less’ in order to remain
competitive. They will have to learn to use less electricity in just the same way, using green
(sustainable) computing to save money. This will demand major changes in IT user behaviors
and policies.

As energy and infrastructure costs continue to increase exponentially, and as environmental
considerations become more prevalent, there is a real need for a power-based IT optimization
strategy, bringing power right to the fore of IT policy, thereby impacting the end-to-end
architecture, hardware and software, and all of the processes undertaken day-to-day to support
a company’s workflow.
This could force the adoption of new infrastructure, and will increasingly inform decision
making when new platforms are procured, or when decisions are made about IT strategies –
whether to centralize or whether to adopt a more distributed architecture and so on. Other
companies will have to take more modest steps, simply making sure that desktop PCs, monitors
and printers are turned off at night, and/or using more effective power-saving modes on unused
equipment. Others will opt to use more energy-efficient components, such as LCDs rather than
CRT monitors when buying new hardware.

New dual-core processors are faster than traditional chips and yet use less energy, and the latest
generation of dual-core processors (exemplified by Intel’s new ‘Woodcrest’) promise to
consume about one third less power than their predecessors while offering up to 80% better
performance.

Other IT users may need to investigate the use of DC power. Most energy suppliers provide AC
power because it is easier to transport over long distances, although most PCs and servers run on
DC, so that the AC current from the utility has to be converted to DC before it reaches the
hardware, with inevitable losses of energy in conversion.

Some companies may benefit from moving away from distributed computing based on individual
desktop PCs to small, thin client server architecture. It has been suggested that a 10-user system
could save about 3,200kWh per year in direct electricity costs (while further energy savings,
equivalent to about 11 tonnes of CO2 per year, would be made in manufacturing). The total
production and operating savings over the three-year life span of a 10-user system would be
more than 33 tonnes of CO2.

In an existing server environment, there are significant cost savings associated with any
reductions in cooling requirements, and keeping server rooms and computer workspaces at the
right temperature is critical.

Virtualization and server consolidation can allow users to ‘do more with less’, allowing one large
server to replace several smaller machines. This can reduce the power required and the overall
heat produced. By reducing the number of servers in use, users can simplify their IT
infrastructure, and reduce the power and cooling requirements. When Dayton, Ohio overhauled

its IT infrastructure, replacing a network of 80 archaic terminals and numerous ad hoc PCs with
thin clients for 60% of the staff and PCs for the rest, the city saw a corresponding drop in energy
used. The switch saved the city US$700,000 annually from reduced data and software
administration expenses, and especially from lower client maintenance costs, with a US$60,000-
$90,000 reduction in electricity costs. There is also a corresponding reduction in carbon
footprint.

Fortunately, business is getting outside support as it struggles towards greener computing. The
US Environmental Protection Agency’s Energy Star program is already promoting more
energy-efficient IT infrastructures and policies, while IBM, Hewlett-Packard, Sun Microsystems
and AMD have joined forces to launch the Green Grid environmental lobby, aimed at reducing
energy consumption at computer data centers by encouraging and improving power-saving
measures.

Unified Communications

Unified Communications (UC) is a commonly used term for the integration of disparate
communications systems, media, devices and applications. This potentially includes the
integration of fixed and mobile voice, e-mail, instant messaging, desktop and advanced business
applications, Internet Protocol (IP) PBX, voice over IP (VoIP), presence, voice-mail, fax, audio,
video and web conferencing, unified messaging, unified voicemail, and whiteboarding into a single
environment offering the user a more complete but simpler experience.

Gartner states: "The largest single value of UC is its ability to reduce 'human latency' in business
processes."

Definition

One of the most significant innovations of the past several years in communications technology,
unified communications is more simply and generally defined as the ability to redirect and
deliver, in real-time, e-mail, video, voice or text communication from a variety of systems to the
device nearest to the intended recipient. For those who have trouble
differentiating between Unified Communications and IP Telephony, UC is essentially a
horizontal solution set within IP Telephony that is focused on simplifying the user's
communication experience.

History

The history of Unified Communications is tied to the evolution of the supporting technology.
Unified Communications relies on the Internet Protocol (IP), which also supports e-mail and the
World-Wide Web. Previously, telephony used a different protocol, not integrated with data
communications, called TDM (Time Division Multiplex). But telephony began evolving toward
employing software and servers, and toward using IP in order to function in a whole new way.
Voice-over-internet-protocol, or voice-over-IP, (VOIP), provides digital telephone service over IP
networks, including the Internet, instead of traditional switched telephone networks. With this
shift in mode of delivery, which took place over the course of the past ten years or so, unified
communications, with all its real-time capabilities and uses, became possible.

The difference between unified communications and unified messaging

Unified communications is sometimes confused with unified messaging, but it is distinct.


Unified communications refers to a real-time delivery of communications based on the preferred
method and location of the recipient; unified messaging systems cull messages from several
sources (such as email, voice mail and faxes), but holds those messages for retrieval at a later
time.
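As a rough illustration of that distinction, the sketch below routes a message immediately to wherever the recipient is currently present (unified communications) and, separately, stores one for later retrieval (unified messaging). The names, devices and presence states are all hypothetical.

```python
# Minimal sketch contrasting real-time UC routing with store-and-retrieve
# unified messaging. Recipients, devices and presence states are illustrative.

presence = {"alice": "mobile"}   # where the recipient is currently reachable
mailboxes = {"alice": []}        # unified messaging store

def uc_deliver(recipient, message):
    """Unified communications: deliver now, to the nearest/active device."""
    device = presence.get(recipient, "voicemail")
    print(f"Delivering to {recipient}'s {device}: {message}")

def um_store(recipient, message):
    """Unified messaging: hold the message for later retrieval."""
    mailboxes[recipient].append(message)

uc_deliver("alice", "Conference bridge starting now")
um_store("alice", "Fax received from supplier")
print(mailboxes["alice"])        # retrieved later, at the user's convenience
```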

Business Process Modeling

The term process model is used in different contexts. For example, in Business process
modeling the enterprise process model is often referred to as the business process model. Process
models are core concepts in the discipline of Process Engineering.

[Figure: Abstraction level for processes]

Process models are processes of the same nature that are classified together into a model. Thus, a
process model is a description of a process at the type level. Since the process model is at the
type level, a process is an instantiation of it. The same process model is used repeatedly for the
development of many applications and thus, has many instantiations. One possible use of a
process model is to prescribe how things must/should/could be done in contrast to the process
itself which is really what happens. A process model is roughly an anticipation of what the
process will look like. What the process shall be will be determined during actual system
development.
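A minimal sketch of the type/instance relationship described above: the process model is defined once at the type level, and each concrete process is an instantiation of it. The class names and the order-handling steps are invented for illustration.

```python
# Sketch of a process model (type level) versus a process (an instantiation).
# Names and steps are illustrative, not taken from any particular methodology.

class ProcessModel:
    """Type level: prescribes the ordered steps a process should follow."""
    def __init__(self, name, steps):
        self.name, self.steps = name, list(steps)

    def instantiate(self, case_id):
        return Process(self, case_id)

class Process:
    """Instance level: what actually happens for one concrete case."""
    def __init__(self, model, case_id):
        self.model, self.case_id, self.completed = model, case_id, []

    def perform(self, step):
        assert step in self.model.steps, "departure from the prescribed model"
        self.completed.append(step)

order_handling = ProcessModel("Order handling", ["receive", "approve", "ship"])
case_42 = order_handling.instantiate("case-42")   # one of many instantiations
case_42.perform("receive")
```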

Process model goals

The goals of a process model are:

 Descriptive

o Track what actually happens during a process.

o Takes the point of view of an external observer who looks at the way a process
has been performed and determines the improvements that have to be made to
make it perform more effectively or efficiently.

 Prescriptive

o Defines the desired processes and how they should/could/might be performed.

o Lays down rules, guidelines, and behavior patterns which, if followed, would lead
to the desired process performance. They can range from strict enforcement to
flexible guidance.

 Explanatory

o Provides explanations about the rationale of processes.

o Explore and evaluate the several possible courses of action based on rational
arguments.

o Establish an explicit link between processes and the requirements that the model
needs to fulfill.

o Pre-defines points at which data can be extracted for reporting purposes.

Purpose

From a theoretical point of view, the Meta-Process Modeling explains the key concepts needed
to describe what happens in the development process, on what, when it happens, and why. From
an operational point of view, the Meta-Process Modeling is aimed at providing guidance for
method engineers and application developers.

The activity of modeling a business process usually predicates a need to change processes or
identify issues to be corrected. This transformation may or may not require IT involvement,
although that is a common driver for the need to model a business process. Change management
programs are required to put the processes into practice. With advances in technology from
larger platform vendors, the vision of business process models (BPM) becoming fully executable
(and capable of round-trip engineering) is coming closer to reality every day. Supporting
technologies include Unified Modeling Language (UML), model-driven architecture, and
service-oriented architecture.

Process Modeling addresses the process aspects of an Enterprise Business Architecture, leading to
an all-encompassing Enterprise Architecture. The relationships of business processes in the
context of the rest of the enterprise systems, data, organizational structure, strategies, etc. create
greater capabilities in analyzing and planning a change. One real-world example is in corporate
mergers and acquisitions; understanding the processes in both companies in detail allows
management to identify redundancies, resulting in a smoother merger.

Process Modeling has always been a key aspect of business process reengineering, and
continuous improvement approaches seen in Six Sigma.

Classification of process models

Classification by coverage

There are four types of coverage where the term process model has been defined differently:

 Activity-oriented: related set of activities conducted for the specific purpose of product
definition; a set of partially ordered steps intended to reach a goal.

 Product-oriented: series of activities that cause successive product transformations to reach
the desired product.

 Decision-oriented: set of related decisions conducted for the specific purpose of product
definition.

 Context-oriented: sequence of contexts causing successive product transformations under the
influence of a decision taken in a context.

Classification by alignment

Processes can be of different kinds. These definitions “correspond to the various ways in which a
process can be modeled”.

 Strategic processes

o investigate alternative ways of doing a thing and eventually produce a plan for
doing it

o are often creative and require human co-operation; thus, alternative generation
and selection from an alternative are very critical activities

 Tactical processes

o help in the achievement of a plan

o are more concerned with the tactics to be adopted for actual plan achievement
than with the development of a plan of achievement

 Implementation processes

o are the lowest level processes

o are directly concerned with the details of the what and how of plan
implementation

Classification by granularity

Granularity refers to the detail level of the process model and affects the kind of guidance,
explanation and trace that can be provided. Coarse granularity limits these to a rather coarse level
of detail, whereas fine granularity provides more detailed capability. The nature of granularity
needed is dependent on the situation at hand.

Project managers, customer representatives, and general, top-level or middle management require
a rather large-grained process description, as they want to gain an overview of time, budget and
resource planning for their decisions. In contrast, software engineers, users, testers, analysts or
software system architects will prefer a fine-grained process model, for the details of the model
provide them with instructions and important execution dependencies, such as the dependencies
between people.

While notations for fine-grained models exist, most traditional process models are large-grained
descriptions. Process models should, ideally, provide a wide range of granularity (e.g. Process
Weaver).

Classification by flexibility

[Figure: Flexibility of method construction approaches]

It was found that while process models were prescriptive, in actual practice departures from the
prescription can occur. Thus, frameworks for adopting methods evolved so that systems
development methods match specific organizational situations and thereby improve their
usefulness. The development of such frameworks is also called Situational Method Engineering.

Method construction approaches can be organized in a spectrum ranging from 'low' flexibility to
'high'. Lying at the 'low' end of this spectrum are rigid methods, whereas at the 'high' end there
is modular method construction. Rigid methods are completely pre-defined and leave little
scope for adapting them to the situation at hand. On the other hand, modular methods can be
modified and augmented to fit a given situation. Selecting a rigid method allows each project to
choose its method from a panel of rigid, pre-defined methods, whereas selecting a path within a
method consists of choosing the appropriate path for the situation at hand. Finally, selecting and
tuning a method allows each project to select methods from different approaches and tune them
to the project's needs.

Business process modeling

[Figure: Zachman Framework perspectives of process focus]

Business Process Modeling (also known as Business Process Discovery, BPD) is the activity of
representing both the current ("as is") and future ("to be") processes of an enterprise, so that the
current process may be analyzed and improved. BPM is typically performed by business analysts
and managers who are seeking to improve process efficiency and quality. The process
improvements identified by BPM may or may not require IT involvement, although that is a
common driver for the need to model a business process, by creating a process master.

Change management programs are typically involved to put the improved business processes
into practice. With advances in technology from large platform vendors, the vision of BPM
models becoming fully executable (and capable of simulations and round-trip engineering) is
coming closer to reality every day.

Business Process Modeling plays an important role in the business process management (BPM)
discipline. Since both Business Process Modeling and Business Process Management share the
same acronym (BPM), these activities are sometimes confused with each other.

Modeling language standards that are used for BPM include Business Process Modeling
Notation (BPMN), Business Process Execution Language (BPEL), Unified Modeling Language
(UML), Object Process Methodology (OPM), and Web Services Choreography Description
Language (WS-CDL). Other technologies related to business process modeling include model-
driven architecture and service-oriented architecture.
BPM addresses the process aspects of an Enterprise Business Architecture, leading to an all-
encompassing Enterprise Architecture. The relationships of business processes in the context of
the rest of the enterprise systems (e.g., data architecture, organizational structure, strategies, etc.)
create greater capabilities when analyzing and planning enterprise changes. For example, during
a corporate merger it is important to understand the processes of both companies in detail so that
management can correctly and efficiently identify and eliminate redundancies in operations.
Business Process Modeling has always been a key aspect of business process reengineering
(BPR) and continuous improvement approaches, such as Six Sigma. BPM tools such as K2
[black pearl], Axway, Lombardi, Holosofx, Holocentric Modeler and TIBCO are used in order
to represent a business process, to run a simulation of the process and for communication
purposes.

Techniques

There are different styles for representing processes: "scripts," "programs," and "hypertext."
Process scripts are interactively used by humans, whereas process programs are enacted by a
machine. Scripts support non-determinism, whereas process programs can, at best, support
process deviation under pre-defined constraints. The hypertext style of process representation is a
network of links between the different aspects of a process, such as product parts, decisions,
arguments, issues, etc. Scripts and programs are two styles which may be applicable to
prescriptive purposes whereas hypertext is well suited to descriptive and explanatory purposes.
Strict enforcement of the prescriptive purpose can clearly be represented in process programs
whereas flexible guidance requires the process model to be represented in process scripts.
Descriptive and explanatory purposes require the establishment of relationships between
different elements of a process trace. These relationships are well articulated as hypertext links.

Process representation styles

 Scripts: interactively used by humans; support non-determinism; applicable to prescriptive
purposes (flexible guidance).

 Programs: enacted by a machine; support, at best, process deviation under pre-defined
constraints; applicable to prescriptive purposes (strict enforcement).

 Hypertext: a network of links between the different aspects of a process, such as product parts,
decisions, arguments, issues, etc.; applicable to descriptive and explanatory purposes.

Traditionally, informal notations such as natural languages or diagrams with informal semantics
have been used as process models underlying information systems. In software engineering,
more formal process models have been used.

Metadata Management
Metadata management involves storing information about other information. With different
types of media being used, references to the location of the data allow management of diverse
repositories. URLs, images, video, etc. may be referenced from a triples table of object, attribute
and value.

Such a table can be generated, for example, by Java from a MySQL table using an HTML template.
Where several knowledge domains are involved, the boundaries of the metadata for each must be
managed, since a general ontology is of little use to experts in one field whose language is
knowledge-domain specific.
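As a minimal sketch of such an object-attribute-value (triples) table, the example below uses Python with sqlite3 rather than the Java/MySQL combination mentioned above, and the sample objects, attributes and URLs are invented.

```python
# Illustrative object-attribute-value (triples) store for heterogeneous media.
# sqlite3 is used only to keep the sketch self-contained; the report mentions
# Java and MySQL. All sample rows are placeholders.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE metadata (
        object    TEXT,   -- identifier of the item being described
        attribute TEXT,   -- e.g. 'location', 'format', 'domain'
        value     TEXT    -- e.g. a URL, a MIME type or a domain-specific term
    )""")
conn.executemany(
    "INSERT INTO metadata VALUES (?, ?, ?)",
    [("video-001", "location", "http://example.com/media/video-001.mp4"),
     ("video-001", "format",   "video/mp4"),
     ("image-042", "location", "http://example.com/images/042.png")])

# Resolve the stored location of every referenced object.
for row in conn.execute(
        "SELECT object, value FROM metadata WHERE attribute = 'location'"):
    print(row)
```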

Virtualization 2.0
Virtualization technologies can improve IT resource utilization and increase the flexibility
needed to adapt to changing requirements and workloads. However, by themselves, virtualization
technologies are simply enablers that help broader improvements in infrastructure cost reduction,
flexibility and resiliency.

With the addition of automation technologies - with service-level, policy-based active
management - resource efficiency can improve dramatically, flexibility can become automatic
based on requirements, and services can be managed holistically, ensuring high levels of
resiliency. Virtualization plus service-level, policy-based automation constitutes a real-time
infrastructure (RTI).

NEW YORK CITY—In the coming 18 months, virtualization, already a mainstream tool for IT
administrators looking to consolidate applications within a data center, will continue to be
adopted within enterprises as companies expand the technology to plan for business continuity
and create high-availability servers.

In addition, virtualization will become more available in desktops and within mobile devices,
such as cell phones. This flexibility will allow businesses to reduce costs and increase security at
the PC level.

These were some of the main topics of discussion at the IDC Virtualization Forum here Feb. 6,
which attempted to look at the future of virtualization and where the technology is going in the
next 18 to 24 months.

While virtualization does have benefits, serious issues concerning security—both at the level of
the hypervisor and within the network—still remain. Enterprises and their IT departments will
also have to contend with how to allocate costs within a virtual environment and deal with issues
concerning the licensing of proprietary products within a virtual world, such as an Oracle
database or a Microsoft operating system.

Since it was first introduced, virtualization—the ability to run multiple applications and
operating systems on a single server—was mainly seen as a way to consolidate workloads within
a data center into fewer and fewer servers.

Up until just a few years ago, most of virtualization had been used to move critical workloads
such as applications and operating systems from legacy systems onto new servers with better
processing power.

The technology has also become one of several tools used to reduce the cost of cooling and
power within a data center.

In the next 18 months, virtualization will enter a "2.0" phase, said John Humphreys, IDC's
program director for enterprise virtualization. More and more, virtualization will be a sought-
after tool for companies looking to plan for business continuity and disaster recovery.

Indeed, about 50 percent of virtualization will involve HADR (high-availability disaster
recovery), Humphreys said.

Within the data center, virtualization will enable high availability within servers to avoid
downtime and allow servers to switch workloads in case of system failure.

In terms of desktop virtualization, Humphreys said several IT departments have started to
experiment with it, which could lead the way to more use of thin-client PCs within businesses.

At the show, participants heard from two IT administrators who used virtualization with success
in the data center and are now beginning to experiment with desktop virtualization.

"Right now, you see some early end users who see the benefits in terms of reduced cost, better
availability and increased security," Humphreys said.

There is also a push within the industry to use virtualization to better improve mobility, both with
notebooks that use virtualization software and with other mobile devices, like smart phones.

"Right now we see mobility as a lot of untapped potential," said Parag Patel, the senior director
of storage alliances for VMware, a prominent virtualization software vendor.

VMware customers, Patel said, are looking for "infrastructure wide" virtualization tools that not
only encompass servers, but PCs, storage and networking devices, and provide easy and
centralized manageability.

"Customers are looking for virtualization across many platforms," Patel said.

With as much potential as virtualization has—IDC predicts that the number of installed servers
worldwide will hit 45 million by 2010, which offers a massive opportunity for consolidation—
there is one issue that has not been adequately addressed: security.

Vendors and their customers are looking for protection against vulnerabilities within the
hypervisor, when workloads are moved either from physical machines to virtual ones, or from
one virtual machine to another. Just before the forum started, IBM announced a new security
tool called Secure Hypervisor that addresses some of those issues.

There is also a need to secure networks within enterprises that are now moving physical loads to
virtual loads and back again.

Still, the potential of virtualization, at this point, may outweigh some of the risks. Humphreys
said that in a standard enterprise, the administrator-to-server ratio could be 20 to one. In a
virtualized environment, there is the potential for one administrator to monitor 200 servers.

"This is allowing companies to free up capital or free up the number of people and reallocate
them," Humphreys said. "This could have a huge impact on a company."

Mashup & Composite Apps
The first is from Gregor Hohpe, a software architect with Google and co-author of Enterprise
Integration Patterns. If you want to get into the nitty gritty of Web services and SOA, Hohpe is
your guy. He’s got an easy-going writing style, but he explores the how-to of the technology,
rather than the more strategic overview.

This week, he wrote about his experience at Mashup Camp in Silicon Valley. He offers an
insider’s view into mashups, noting that there’s some unresolved personality conflict in the
mashup “community” over whether it wants to be a bit anti-establishment or apply for venture
capital funding.
One picture that emerged from Mashup Camp is that mashups are definitely being used for
integration. Workers from IBM and SnapLogic demonstrated mashup data processing tools that
Hohpe reports shared similarities with early EAI tools.

Hohpe participated in a discussion group on enterprise mashups. The group determined there’s
not a clear-cut difference between mashups and composite apps, so they compiled a list to help
distinguish the two. I know I’ve either read or been told recently that composite apps and
mashups are the same thing, so I was particularly interested in how the group managed to define
each.

Here’s how the group separated mashups from composite applications, taken straight from
Hohpe’s article:

• Mashups: REST/XML, ad-hoc, bottom-up, easy to change, low expectations, built by user

• Composite Apps: SOA/WS-*, planned, top-down, more static, (too) high expectations, built by IT
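To make the "ad-hoc, built by user" end of that spectrum concrete, here is a minimal sketch of a REST-style mashup; the endpoint URLs and the JSON field names are purely hypothetical placeholders.

```python
# Hypothetical ad-hoc mashup: pull JSON from two REST sources and join them.
# Both URLs and all field names are placeholders, not real services.
import json
import urllib.request

def fetch_json(url):
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def mash_up():
    offices = fetch_json("http://example.com/api/offices")   # [{"city": ..., "staff": ...}]
    weather = fetch_json("http://example.com/api/weather")   # [{"city": ..., "temp_c": ...}]
    temps = {w["city"]: w["temp_c"] for w in weather}
    # Combine the two feeds on the shared 'city' key - the essence of a mashup.
    return [{"city": o["city"], "staff": o["staff"], "temp_c": temps.get(o["city"])}
            for o in offices]

if __name__ == "__main__":
    print(mash_up())
```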

(We’ll see if it sticks. What usually happens is about half the people who follow this sort of thing
adopt the first definition as the only definition, but others redefine the terms and the definition
evolves. That results in endless pedantic discussions about who’s right and - usually - people
calling me out for being a clueless journalist when I don’t use their preferred definition. But I
digress.)

There’s a lot more to the post, including a glimpse of what vendors and developers are doing
with mashups, Hohpe’s own mashup tutorials and experiments.

If mashups aren’t your thing, the second item is this article about an intriguing SOA-related
announcement.

Micro Focus is now offering SOA Express, a product that will take existing code — say, code in
Cobol — and convert it into services.

It’s important to note that, despite the name, this is not a boxed solution for SOA. Instead, it’s
designed to let development teams focus on creating new services instead of converting old code.

The senior director for product management at Micro Focus is quoted as saying that in one case,
SOA Express reduced the conversion time from six weeks to two hours.

Web Platform & WHOA


Software as a service (SaaS) is becoming a viable option in more markets and companies must
evaluate where service based delivery may provide value in 2008-2010.

Meanwhile Web platforms are emerging which provide service-based access to infrastructure
services, information, applications, and business processes through Web based "cloud
computing" environments.

Companies must also look beyond SaaS to examine how Web platforms will impact their
business in three to five years.

The WHOA Framework stands for a collection of utility classes aiming to cover most aspects of
modern application development practice, as well as a set of commonly used program
patterns and behaviours. It attempts to create a low-level tier above the application platform
actually in use, providing a unified API for any higher tier of the architecture design. It is the
cornerstone for concrete applications/frameworks developed upon it. Such developments are called
services, hence the title WHOA Framework - Family of Services.

Vivien Application Server is one of the "services" of the WHOA Framework. In reality it is an
application server designed and built using the framework, in most cases adding another abstraction
tier above the original classes (e.g. streaming technology). As an application server, it provides
commonly needed functionality such as object serialization/storage and communication, and
seamless work with streams including web services, files, databases, memory and low-level
sockets. It specializes mostly in web application development, creating new design-pattern
mechanisms and controls.

Vivien AS encapsulates an n-tier architecture, strictly dividing the separate levels of access to actual
code and providing a degree of fine-tuned control over application development rarely seen in other
products.

Computing Fabric
A computing fabric is the evolution of server design beyond the interim stage, blade servers, that
exists today.

The next step in this progression is the introduction of technology to allow several blades to be
merged operationally over the fabric, operating as a larger single system image that is the sum of
the components from those blades.

The fabric-based server of the future will treat memory, processors, and I/O cards as components
in a pool, combining and recombining them into particular arrangements to suit the owner's
needs.

For example, a large server can be created by combining 32 processors and a number of memory
modules from the pool, operating together over the fabric to appear to an operating system as a
single fixed server.
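The pooling idea can be sketched roughly as follows; the pool sizes, class names and component counts are illustrative only, not taken from any particular fabric product.

```python
# Rough sketch of composing a logical server from a shared pool of components,
# in the spirit of the fabric-based server described above. Names, class design
# and counts are illustrative.

class ResourcePool:
    def __init__(self, processors, memory_modules_gb):
        self.processors = processors                    # free CPUs in the fabric
        self.memory_modules = list(memory_modules_gb)   # free memory modules (size in GB)

    def allocate(self, cpus, modules):
        assert cpus <= self.processors and modules <= len(self.memory_modules)
        self.processors -= cpus
        memory = [self.memory_modules.pop() for _ in range(modules)]
        return LogicalServer(cpus, memory)

class LogicalServer:
    """Appears to an operating system as a single fixed server."""
    def __init__(self, cpus, memory):
        self.cpus, self.memory_gb = cpus, sum(memory)

pool = ResourcePool(processors=128, memory_modules_gb=[8] * 64)
big_box = pool.allocate(cpus=32, modules=16)             # the 32-processor example above
print(big_box.cpus, "CPUs,", big_box.memory_gb, "GB")    # 32 CPUs, 128 GB
```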

The Grid and Computing Fabrics

All in all, a Computing Fabric can be thought of as Grid architecture applied across a variety of
levels of scale (from a network on a chip to global networks) and offering greater configurability.

[Table: levels of scale addressed by Computing Fabrics, Grids, Clusters and RapidIO, from the
functional unit and the processor (network-on-chip, system-on-chip) through subsystem, system
and enterprise network/LAN up to the global network. Computing Fabrics apply across the full
range of scales, whereas the Grid applies at the enterprise and global network levels.]

Grid Gotchas

Erick Von Schweber's inline comments on Peter Coffee's excellent analysis of The Paradox of
Grid Computing.

Software Services Grid Workshop

A collection of groups including the Object Management Group, World Wide Web Consortium,
and the Global Grid Forum recently held the "Software Services Grid Workshop" in Boston. This
event was organized by Erick Von Schweber and Bob Marcus.

The Global Grid Forum 2001

"Towards a Model Driven Semantic Web"

Erick's presentation (in his position as CTO of Cacheon Inc.) to the Global Grid Forum
summarizing and extending the findings of the Software Services Grid Workshop.

Distinguishing between The Grid and Computing Fabrics

(i) a Computing Fabric denotes an architectural pattern applicable at multiple levels of scale,
from global networks to networks on a chip (which are beginning to appear) whereas the Grid is
an architecture specifically applied to campus through global networks (a consequence is that a
Computing Fabric may be denser than a Grid as Rob noted),

(ii) a Computing Fabric admits to irregularity, where some regions of connected nodes may be
very closely coupled and present a strong single system image while other regions may be very
loosely coupled letting distribution concerns show through (the Grid by comparison is uniform),
and finally
and finally

(iii) a Computing Fabric is dynamically reconfigurable and fluid whereas a Grid may be rigid or
require periods of downtime for repair, upgrade, or reconfiguration.


Links between the Grid and Computing Fabrics

It's interesting that, not long after the 1998 cover story, SGI found their
collaboration with Microsoft to be stalling, and began seeding Linux with several of their
proprietary "fabric-like" technologies. Globus, Legion, and related open source efforts have
certainly moved ahead with a speed that makes commercial efforts seem tortoise-like by
comparison.

The links between the Grid and Computing Fabrics go back to the beginning - the Grid Forum
(now Global Grid Forum) began with a Birds of a Feather session at SC98 immediately
preceding our Computing Fabrics BOF in the very same room.

Real World Web


The term "real world Web" is informal, referring to places where information from the Web is
applied to the particular location, activity or context in the real world.

It is intended to augment the reality that a user faces, not to replace it as in virtual worlds.

It is used in real-time based on the real world situation, not prepared in advance for consumption
at specific times or researched after the events have occurred.

For example, in navigation a printed list of directions from the Web does not react to changes, but a
GPS navigation unit provides real-time directions that react to events and movements; the latter
case is akin to the real-world Web of augmented reality.

Now is the time to seek out new applications, new revenue streams and improvements to
business processes that can come from augmenting the world at the right time, place or situation.

Social networks, Web 2.0 and the narrowing gap between the Internet and the real world top the
list of technologies to watch out for according to research heavyweight Gartner. The company
has just released its “2006 Emerging Technologies Hype Cycle” in which it assesses 36 key
technologies over the coming ten years.

The three key areas to watch out for, according to the report, are Web 2.0, the Real World Web,
and Applications Architecture.

Web 2.0 represents a broad collection of recent trends in Internet technologies and business
models, in particular elements such as user-created content and collaboration that draw often
disparate Internet-based tools together to create new offerings such as mashups.

Within Web 2.0, Social Network Analysis (SNA) is rated by Gartner as high impact and capable
of reaching maturity in less than two years. SNA is the use of information and knowledge from
many people and their personal networks. It involves collecting massive amounts of data from
multiple sources, analysing the data to identify relationships and mining it for new information.
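
As a small, hypothetical illustration of the analysis step (real SNA work runs over far larger, multi-source datasets), the sketch below builds a tiny interaction graph and ranks people by how many direct relationships they have:

```typescript
// Tiny social network analysis sketch: rank people by number of direct relationships
// (degree centrality). The interaction data is invented for the example.
type Edge = [string, string]; // one observed interaction between two people

const interactions: Edge[] = [
  ["alice", "bob"],
  ["alice", "carol"],
  ["bob", "carol"],
  ["carol", "dave"],
];

function degreeCentrality(edges: Edge[]): Map<string, number> {
  const degrees = new Map<string, number>();
  for (const [a, b] of edges) {
    degrees.set(a, (degrees.get(a) ?? 0) + 1);
    degrees.set(b, (degrees.get(b) ?? 0) + 1);
  }
  return degrees;
}

// Most-connected people first.
const ranked = [...degreeCentrality(interactions).entries()].sort((x, y) => y[1] - x[1]);
console.log(ranked); // [["carol", 3], ["alice", 2], ["bob", 2], ["dave", 1]]
```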

Similarly, Ajax, or Asynchronous JavaScript and XML, is also rated as high impact and capable
of reaching maturity in less than two years. Ajax is a collection of techniques and technologies,
including JavaScript and XML, that developers can use to build better and more interactive
applications for the Web.

“A narrow-scope use of Ajax can have a limited impact in terms of making a difficult-to-use Web
application somewhat less difficult. However,” Gartner said, “even this limited impact is worth
it, and users will appreciate incremental improvements in the usability of applications. High
levels of impact and business value can only be achieved when the development process
encompasses innovations in usability and reliance on complementary server-side processing (as
is done in Google Maps).”
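
A minimal Ajax-style sketch for the browser follows; the /api/search endpoint and the element ids are made up for illustration, and the modern fetch() call stands in for the XMLHttpRequest object behind the original technique. The point is that only a fragment of the page is refreshed, with the heavier processing left to the server as in the Google Maps example:

```typescript
// Minimal Ajax-style sketch for the browser. The /api/search endpoint and the
// "results"/"query" element ids are hypothetical; fetch() stands in for the
// original XMLHttpRequest behind the Ajax technique.
async function updateResults(query: string): Promise<void> {
  const target = document.getElementById("results");
  if (!target) return;

  try {
    const response = await fetch(`/api/search?q=${encodeURIComponent(query)}`);
    const items: { title: string }[] = await response.json();
    // Only this fragment of the page changes; there is no full-page reload.
    target.textContent = items.map((item) => item.title).join(", ");
  } catch {
    target.textContent = "Search is temporarily unavailable.";
  }
}

// Refresh the results as the user types, leaving the heavy lifting to the server.
document.getElementById("query")?.addEventListener("input", (event) => {
  void updateResults((event.target as HTMLInputElement).value);
});
```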

Collective intelligence, rated as transformational by Gartner, is expected to reach mainstream
adoption in five to ten years. Collective intelligence is an approach to producing intellectual
content (such as code, documents, indexing and decisions) that results from individuals working
together with no centralised authority. This is seen as a more cost-efficient way of producing
content, metadata, software and certain services.

Mashups are rated as moderate by Gartner and are expected to hit mainstream adoption in less
than two years. A “mashup” is a lightweight integration of multi-sourced applications or content
into a single new product. Because mashups leverage data and services from public Web sites
and Web applications, they’re lightweight in implementation and built with a minimal amount of
code. But, said Gartner, while they are often quick and cheap to build, they also rely on
external sources, which makes them vulnerable to failure in any one of those sources.
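
A hedged sketch of the pattern follows (the two feed URLs are placeholders, not real services): two public sources are fetched and joined into one new view, which is also where the fragility noted by Gartner comes from, since either source can fail.

```typescript
// Lightweight mashup sketch: combine two independent sources into one new view.
// Both feed URLs are placeholders; a failure in either source breaks the result.
interface Listing { city: string; price: number }
interface Weather { city: string; tempC: number }

async function buildMashup(): Promise<Array<Listing & { tempC?: number }>> {
  const [listingsRes, weatherRes] = await Promise.all([
    fetch("https://example.com/api/listings"), // hypothetical source 1
    fetch("https://example.com/api/weather"),  // hypothetical source 2
  ]);
  const listings: Listing[] = await listingsRes.json();
  const weather: Weather[] = await weatherRes.json();

  // Join the two datasets on city to produce the new, combined product.
  const tempByCity = new Map(weather.map((w): [string, number] => [w.city, w.tempC]));
  return listings.map((l) => ({ ...l, tempC: tempByCity.get(l.city) }));
}

buildMashup()
  .then((rows) => console.log(rows))
  .catch((err) => console.error("A source failed, so the mashup failed:", err));
```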

The second key area Gartner identifies is the Real World Web, a notion of real world objects that
will contain local processing and networking abilities. With the falling size and cost of
microprocessors they will also be able to interact with their surroundings through sensing and
networking capabilities, eliminating the boundaries between the Web and the real world.

Within this group Gartner lists location-aware technologies and applications as key
developments. These include the use of GPS (global positioning system), assisted GPS (A-GPS),
Enhanced Observed Time Difference (EOTD), enhanced GPS (E-GPS), and other technologies
in the cellular network and handset to locate a mobile user.

Sensor mesh networks, which are ad hoc networks formed by dynamic meshes of peer nodes,
each of which includes simple networking, computing and sensing capabilities, are also high on
Gartner’s list of trends.

The third area to take note of according to Gartner is Applications Architecture which includes
the emerging trends of event-driven architecture, model-driven architecture and the semantic
web.

One of the key features highlighted in the 2006 Hype Cycle, said Gartner’s Jackie Fenn, is the
growing consumerisation of IT. “Many of the Web 2.0 phenomenon have already reshaped the
Web in the consumer world”, said Fenn. “Companies need to establish how to incorporate
consumer technologies in a secure and effective manner.”

“Be selectively aggressive — identify which technologies could benefit your business, and
evaluate them earlier in the Hype Cycle”, said Fenn. “For technologies that will have a lower
impact on your business, let others learn the difficult lessons, and adopt the technologies when
they are more mature.”

Social Software
Through 2010, the enterprise Web 2.0 product environment will experience considerable flux
with continued product innovation and new entrants, including start-ups, large vendors and
traditional collaboration vendors.

Expect significant consolidation as competitors strive to deliver robust Web 2.0 offerings to the
enterprise. Nevertheless social software technologies will increasingly be brought into the
enterprise to augment traditional collaboration.

Description

SocialSoftware is a label for software that supports group interaction, including:

- WebLogs,
- Wikis,
- MultiUserDungeons,
- MOOs (e.g. LambdaMoo),
- InstantMessaging,
- InternetRelayChat, and other communications systems that host many-to-many interactions,
- [CollaborativeEditor],
- Collaborative Filtering Technology, like Amazon's recommendation software and EbayDotCom (a simplified sketch of this recommendation idea follows below), and
- SocialBookmarking, SocialAnnotation, SocialCollaboration (DiiGo, TrailFire, WikAlong).

Loosely speaking, it is an attempt to distill the commonality between OnlineCommunities,
Computer-Supported Collaborative Work, and newer classes of software like http://Meetup.com
(support for real world gatherings), http://UncleRoyAllAroundYou.com (a game that bridges
online and offline social space), and http://Bass-Station.net/ (a Wifi-enabled boombox that
creates emergent playlists).
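
As a much-simplified, hypothetical sketch of the recommendation idea behind the collaborative filtering entry in the list above (real systems such as Amazon's use far richer signals), items can be suggested simply because they co-occur in other people's purchase histories:

```typescript
// Much-simplified item-to-item collaborative filtering: recommend items that other
// buyers of the same item also bought. The purchase histories are invented.
const purchases: Record<string, string[]> = {
  ann: ["book", "lamp"],
  ben: ["book", "kettle"],
  cho: ["book", "kettle", "lamp"],
  dee: ["kettle", "mug"],
};

function alsoBought(item: string): [string, number][] {
  const counts = new Map<string, number>();
  for (const basket of Object.values(purchases)) {
    if (!basket.includes(item)) continue;
    for (const other of basket) {
      if (other === item) continue;
      counts.set(other, (counts.get(other) ?? 0) + 1);
    }
  }
  // Most frequently co-purchased items first.
  return [...counts.entries()].sort((a, b) => b[1] - a[1]);
}

console.log(alsoBought("book")); // [["lamp", 2], ["kettle", 2]]
```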

Unsurprisingly, many people have differing opinions about what a good definition is. The
answers given range from the simple ("social software enables human-human communication")
to Shirky's axiomatic approach:

1. Social software treats triads of people differently than pairs.

2. Social software treats groups as first-class objects in the system.

Some refined the first axiom, demanding that "social" implies three or more people, despite the
resulting loss of PeerToPeer from the definition. Some have regurgitated ReedsLaw as definitive,
stating that social software gets better with more people, benefiting from network effects much
like eBay benefits from a large audience. Others have argued the PostWELL opinion that in fact
many things do not improve with exponential growth, as UseNet and Lotus Notes claimed but
failed to do, and as mailing lists emphatically do not.

However, many have realized that in fact these definitions aren't the right answer, weighting the
"software" part more than the "social." MattJones observed that:

 Usenet and groupware apps were designed to scale from a technical and business
point of view, not from a social point of view. That's why they sucked, because
they didn't look at how humans work on social scale. That's what's new now I
think, is that we're looking much more to the real world being helped by software
than software simulating a perfect system that we adapt to.

And Meg Pickard [resurrected] that out-of-fashion word, community:

 [GBlogs] was a way of making it simpler to contact and identify each other. The
community - all the conversations, the portals and the gizmos grew organically
from the community - not the other way around.

 In 2000, the mailing list started because a blogger from the Netherlands was
coming over during the summer . . . Rather than firing mails all over the shop,
thirteen people set up a mailing list, and . . . From there, it grew. The portal was
created around the community, rather than the other way around. At no point did
anyone sit down and decide to create a community. The community was already
there.

This gets closer to a meaningful definition. Throughout history, people have had the same
discussion about the latest technology or process that stitched people together in a new way. It is
not essentially new that there is technology bringing people together. Consider a world before a
road network was created, and then consider the world after the road network was well established.
The world must have changed a lot.

The Road Network formed in most places at that time by simply smoothing over the traditional
paths people had been travelling already, say by cutting down the vegetation to widen the path,
and then smoothing the ground underneath. The analogy of path building has been taken to the
Internet already a la PathsInHypermedia. Here is another way of spinning it.

For the past several thousand years, despite great turmoil, human nature has not changed much.
The human desire to socialize continues to drive much of everything on this planet, including
technological achievement. For example, the road network was driven by a need for groups of
humans to be in closer contact with each other. This gives a mundane definition for social
software, but maybe the most accurate. Social software is simply software that humans create to
ease contacting each other. Importantly, the software doesn't control the connection, just like the
road network doesn't control how or why merchants in different towns trade with each other. The
road just facilitates the connection, but it is smaller than the social relationship. It's this basic
ontology that allows MeetUp?

Now some may respond that we don't all have a say in how this software works on us, like
Amazon's software. True, but we long ago chose to give up the direct control of such things to a
corporate economy, but in a way we control that too by allowing such things to continue. It's still
very much the case that such things only exist because we drive them to exist. Perhaps though
it's worth separating the cases out. Say on one hand we have communication networks we've
built to be closer to each other and on the other hand we have software to optimize those
networks. In the case of Amazon, the software optimizes the "path" between customer and online
retailer. In the case of mapping (cf. AtlasOfCyberspace) and InformationVisualization software,
the software optimizes people's orientation.

Simply put, a definition heavily weighted on the human side with only a light touch from the
technical side has the most mojo, as the fundamental imperative of this software comes from
human nature, not from the technical sphere.

A different approach to the definition of Social Software was taken by Tom Coates in a post
about [cyborgisation and augmentation], when he argued that a useful working definition might
simply be the augmentation of humans' socialising and networking abilities by software,
complete with ways of compensating for the overloads this might engender. A further addition to
this theoretical framework might be to add an extra part to the formula, suggesting that social
software consists not only of the facilitation of human beings' networking behaviour, but also of
their collaborative behaviour, with social software then being a way of directly or indirectly
facilitating the creation of things...

A final fun definition is [What kind of social software are you?].

History

Near the end of 2002, the term "social software" was gaining ground due mostly to the efforts of
ClayShirky, the [iSociety] project, and [The O'Reilly Emerging Technology Conference
2003]. Shirky held a widely publicized "Social Software Summit", which can best be
summarized by its description:

"Every time social software improves, it is followed by changes in the way groups work
and socialize. One consistently surprising aspect of social software is that it is impossible
to predict in advance all of the social dynamics it will create. Recognizing this, the Social
Software Summit seeks to bring together a small group of practitioners and theorists to
share experiences in writing social software or thinking about its effects."

From a similar event a few weeks earlier in London, TomCoates? noted that "The panel was
essentially about the next level of community software and community sites online. Interestingly,
though, the word 'community' was almost totally unused through the whole occasion. Perhaps for
reasons I don't as yet understand, that word has become suddenly unfashionable. Instead we were
talking about 'social software'."

Another major development in the term's popularity came when MattJones sparked a long
discussion on his weblog in an attempt to define the beast, entitled [Defining social software].

One might surmise the term will be in fashion with the WebLogs at least until the Emerging
Technology conference, and maybe a few months after, when it stands in danger of being
forgotten in the typical blog way, a la PeerToPeer.

JPMorgan Predicts 2008 Will Be “Nothing But Net”


JPMorgan’s Internet analyst Imran Khan and his team released a massive 312-page report this
morning titled Nothing But Net that paints a bullish picture for the major Internet stocks
(Google, Amazon, Yahoo, eBay, Expedia, Salesforce.com, Omniture, ValueClick, Monster.com,
Orbitz, Priceline, CNET, etc.). Some key takeaways:

—Noting that, in 2007, Internet stocks delivered a 14 percent return versus 5 percent for the
S&P 500, JPMorgan expects 34 percent earnings growth in 2008 for the Internet stocks it covers
versus 8 percent earnings growth for the S&P 500.

—In general, as broadband penetration continues to rise, so do e-commerce revenues.

—But advertising revenues actually outpace the adoption of broadband.

—Free cash flow at large Internet companies will keep going up, fueling M&A and share
buybacks. JPMorgan estimates that free cash flow among just five of the top Internet companies
(Google, Yahoo, Amazon, eBay, and Expedia) will rise from $8.8 billion last year to $12.5 billion
in 2008. That is a lot of money for Web 2.0 acquisitions. Top acquirers Yahoo and Google, for
instance, each spend about a third of their free cash flow on acquisitions.

—Search advertising will continue to dominate, rising from $22 billion globally last year to $50
billion in 2010. The report also charts JPMorgan’s forecast for the U.S. search advertising
market; the firm expects global search revenues to rise 38 percent in 2008 to $30.5 billion.

—A companion forecast covers the U.S. graphical advertising market: average CPMs for online
ads, which bottomed in 2007 at $3.31, will start to rise again.

—As global GDP continues to grow faster than U.S. GDP (3.9 percent versus 2.2 percent in
2007), Internet companies with global reach will benefit. Amazon, eBay, and Google all get
about half their revenues from international markets. Yahoo gets only a quarter of its revenues
from abroad.

World Wide Web: Land of Free Stuff
I’ll never forget the exhilaration I felt logging onto pioneering Internet service provider
Prodigy nearly 15 years ago. In the Jurassic period of the World Wide Web, surfing was
painstakingly slow and there was very little compelling content to be found. Still, there was an
irresistible quality to everything before my eyes: Aside from phone bills and the $15 monthly
connection fee, it took me to a virtual world where almost everything was exquisitely free.

The Internet has evolved beyond that prehistoric phase, and these days no longer do I think of
everyday online features like e-mail, news feeds, and instant messaging, as things I am getting
for free—I expect them as a condition of basic service. But I still have that wanderlust, and I
continue to search for other free goodies online that would cost me cold hard cash if I walked
into a store.

This year, e-commerce is projected to be a $259 billion business, up 18% from 2006, according
to market researcher Forrester Research (FORR). That's a mind-numbing figure, but it doesn't
mean everything online has a price tag. The list of free things you can get is nearly as extensive
as the Internet itself, and includes everything from circus tickets to booze to golf lessons,
gift cards, pets, even a college education.

World at Your Fingertips

What have I found? I've collected 101 of the very best freebies—including enough free software
to run your own business or become a YouTube video mogul—all without putting my hand in my
pocket. Of course, I did have to give something, even if it wasn't money. In some cases, I had to
click on an ad or watch a video. In others, information about me—such as how I spend my time
online, or how I spend my money—was so valuable it entitled me to free products.

There are plenty of freebies to go around. I still watch TV, but I stopped paying for cable and get
a lot more of my entertainment needs filled from the Internet. Radio sites like Pandora and TV
sites like Joost serve more content that's customized to my tastes, with some ads on the side. If
I'm feeling more adventurous I head over to WWITV, where TV channels from all over the world
are streamed in real time. I don't know how much it would cost to get news broadcasts from Fiji
on my home TV set, but my hunch is that it would take a very large satellite dish.
Handing out free product samples is not a novel form of advertising, but on the Web it's super
easy to find companies willing to give you enough products to stock every room of your house.
Finding a free sample of your favorite fragrance or shampoo is as easy as punching the name into
Google (GOOG). Dig a little deeper and you can find lots of samples including coffee, hot sauce,
vodka, and treats for your furry friend. And thanks to Trojan (CHD), everyone is entitled to one
free condom per year. All you have to do is go to their Web site and fill out a form.

Recognizing Your Limits

Marketers aren't the only ones feeding my love of free things on the Web. A public community of
like-minded Web users is the driving force behind the reference site Wikipedia, and the same
principle is at work on a host of open-source software programs that are in many cases as useful
as their pricey counterparts. While the basic version of Microsoft's (MSFT) popular Office suite
runs as high as $400, there's no charge for Sun Microsystems' (JAVA) OpenOffice.org, a suite of
office productivity software that includes word processing, a spreadsheet application, a
presentation tool, and more. It's not Microsoft Office, but it's close enough for me.

Many tech companies also give out samples of their applications—or give you access with some
limits on their usability. Grisoft sells one of the most popular antivirus products, AVG Anti-Virus,
for $53 and up, but the free version is just as effective if you can do without firewalls, spam-
blockers, and technical support. I sometimes use voice-over-Internet-protocol service Skype
(EBAY) to make free long-distance calls to another Skype user, but I've never once ponied up for
the for-pay extras, like calls to mobile phones and landlines.

I've also discovered if you want people to do good, make it easy—and free. Local communities
that want to reduce environmental waste while finding some cool reusable items for free have
started Freecycle.org, a swapping site. Just find the group closest to where you live and post
things you want to get rid of or take off the hands of others. Recent items up for grabs in my area
of Brooklyn, N.Y.: a typewriter, an aquarium, and a cat named Molly.

Online Learning

I'm still paying off student loans, but they might have been avoided altogether had I attended
college for free, online. Since 2002, the Massachusetts Institute of Technology has offered study
materials including video lectures, notes, and exams on its MIT OpenCourseWare site, a
program that sponsoring organizations kicked in $29 million to underwrite. You can't earn
a degree, but you can find material for just about every one of MIT's 1,800 courses online;
similar programs have been rolled out at around 160 schools around the world.

As a taxpayer, I'm also entitled to my share of online freebies from Uncle Sam. Every year, I
make a point to check my credit rating at AnnualCreditReport.com, a site and service created as a
result of the Fair and Accurate Credit Transactions Act of 2003. And at the Web site of the
Government Printing Office, a public agency created in 1860 to give all citizens access to
government publications, I can browse Congressional bills, read every State of the Union
Address since 1992, and check in on the country's budget.

What does the future hold for the free Web? I posed this question to Lawrence Lessig, author of
Free Culture and founder of Creative Commons, a nonprofit organization dedicated to increasing
the availability of free creative work. He believes the "loss leader" model, of marketers making
one product or service free or cheap to drive sales to other offerings—such as Apple (AAPL)
using its iTunes music service to move iPods—will be around for years to come. But he sees
promise in online communities such as open source software developers, who are able to
produce a free product that supports the community and not the profits of a corporation. "There's
a very big growth in the number of people who believe things should be absolutely free [on the
Web]," he says.

