
Refresh The Net: Why The Internet Needs A Makeover?

Welcome to Refresh The Net

We are still using the Internet infrastructure we created in the last boom. Our 21st century
business data is running on 20th century technology. It is time we refreshed our ideas and
our platforms to give us the new building blocks of innovation. New technologies and ideas —
cloud computing, virtualization, SaaS and ‘infrastructure on demand’ — hold promise for
entrepreneurs and herald a new era of business. Refresh the Net, brought to you by PEER 1,
examines the future of Internet infrastructure and the ideas that will be discussed at GigaOM’s
Structure 08 conference.

Om Malik
Founder and Editor-in-Chief, GigaOM

TABLE OF CONTENTS
The Geography of Internet Infrastructure

Why Google Needs Its Own Power Plant

Supercomputers, Hadoop, MapReduce and the Return to a Few Big Computers

Defogging Cloud Computing: A Taxonomy

Web 2.0, Please Meet Your Host, the Internet

When Is the Right Time to Launch Your Own Cloud?

Why Cloud Computing Needs Security

10 Reasons Enterprises Aren’t Ready to Trust the Cloud

The Craft: Automation and Scaling Infrastructure

Is Infrastructure the New Marketing Medium?

Achieving Equality is Critical to the Future of the Internet

Does the Internet Need More Roads or Better Traffic Signals?

About GigaOM
The Geography of Internet Infrastructure

By Rich Miller

As computing moves into the cloud, the geography of Internet infrastructure is in flux. Data centers
historically have been clustered in major Internet markets, including Silicon Valley, New York, Northern
Virginia and Dallas. That map is changing rapidly, as data centers are springing up in central parts of
the country — and often in rural areas not previously known as technology hubs.

The price of power and the availability of grid capacity are driving many site location decisions, resulting
in huge projects in places such as Council Bluffs, Iowa, and Quincy, Wash. Do these out-of-the-way
locales represent the future of the data center industry?

Yes and no. Data-center site location has become more complex, reflecting a segmentation driven by
several classes of end-users:

• Search engines and cloud-computing server farms seek out cheap, clean power — lots of it.
• Corporate data centers are guided by the geography of business continuity and the need
to move people and data.
• Web hosts and co-location providers stay close to the customers, usually in major markets.
• Content delivery networks (CDNs) and companies in VoIP and video locate in peering
hubs and carrier hotels.

Colo providers and CDNs will continue to focus on the major Internet markets. But search engines
and corporate data centers present a huge opportunity for regions that until recently would not have
been considered as candidates for data center development. As a result, cities once dismissed as
second-tier destinations are now seeing vibrant growth of mission-critical facilities.

The biggest cloud builders, Google and Microsoft, require huge amounts of cheap power and open
land. Microsoft initiated the siting trend with its decision to build a 470,000-square-foot data center
in Quincy, which offers unusually cheap hydropower from local dams. Google has also pursued a
rural strategy, announcing
But search engines and corporate data centers present $600-million data-center
a huge opportunity for regions that until recently projects in North Carolina,
would not have been considered as candidates for South Carolina, Oklahoma
and Iowa in 2007.
data center development.

Enterprise data-center customers are also giving greater weight to power pricing, but their decisions are
also influenced by disaster recovery strategies and the availability of a skilled IT workforce. This has
led many large enterprise companies to consider new markets for data centers, especially since the
9/11 terrorist attacks underscored the need for back-up data centers outside of New York and
Washington. A study of data center costs by The Boyd Group has highlighted the affordability of
markets such as Sioux Falls, S.D., and Tulsa, Okla.

Among the biggest winners in the battle for enterprise data centers have been Austin and San Antonio.
Austin won a $450-million Citigroup data center, and two large HP data centers. In San Antonio,
Microsoft’s announcement of a $550-million data center has been followed by new data-center projects
by the NSA, Stream Realty, Christus Health Systems and Power Loft.

In all cases we see that energy costs, environmental impact, and social and economic factors affect
the location of data centers powering both the enterprise and the cloud. As cloud computing gains
mind share and market share, it will continue to remake the geography of Internet infrastructure. The
massive scalability requirements of cloud platforms will drive construction of ever-larger data centers,
offering a physical symbol that, while the Internet is everywhere, it lives in a data center — perhaps
one near you.

Rich Miller is the editor of Data Center Knowledge, which provides daily news and analysis about the data
center industry.

Why Google Needs Its Own Power Plant

By Surj Patel

Indexing the world’s information and making it accessible takes a lot of people, a lot of machines and
a lot of energy.

I was talking to a good friend recently and reported some hearsay about how a server now costs more
to power over its useful life than it costs to buy. I found that amazing, but his response was even more astounding.
“Well, we should put them in poor people’s houses to give them heat,” he quipped.

It sounds dumb at first, but really, it’s pure genius. If that much energy is being used, and half of
that energy is used for cooling, we could put those servers to work as electric heaters. The “host
families” could also get some broadband access, and institutions would save on data center build-outs.
It’s a shame that our culture and the technical practicalities of distributed computing make the
idea impractical.

But it got me thinking. How much energy really is burned in those big data centers? What follows is
guesstimation and inference based on popular opinion and, er, Google search returns. (I may
appear to pick on Google, but it’s just because it happens to be a convenient example…)

• Google is rumored to have anywhere between half a million and 1 million machines in data
centers around the world. I am assuming it is the largest single-purpose commercial
installation that we know about. (Let’s not think about the government’s data demands
for now.)
• Each machine consumes about 500 watts of power, including cooling systems.
• Energy overhead for networking and other support structures is nominal, so I’ll ignore it
for my guesstimate.

So, let’s take the worst case here: 1,000,000 machines drawing 500 watts each = half a gigawatt
of continuous power.

Wow. That’s a lot. In Google’s own words, that’s about half of what a city the size of San Francisco
needs.
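To make the back-of-envelope arithmetic explicit, here is a tiny sketch in Python. The machine count and per-machine wattage are the guesses above, not confirmed figures.

```python
# Back-of-envelope estimate of Google's data center power draw.
# Inputs are the guesses from the text above, not confirmed figures.
machines = 1_000_000        # worst-case rumored machine count
watts_per_machine = 500     # per-server draw, including cooling

total_watts = machines * watts_per_machine
print(f"Continuous draw: {total_watts / 1e9:.1f} GW")    # ~0.5 GW

# Energy consumed over a year at that constant draw:
hours_per_year = 24 * 365
twh_per_year = total_watts * hours_per_year / 1e12
print(f"Annual energy: {twh_per_year:.1f} TWh")          # ~4.4 TWh
```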

That poses a worrying thought: Information on the web is increasing in an exponential manner, and Google will increase its
capacity to meet demand. Even doubling that energy use would require the kind of power produced
by a mid-size nuclear reactor. Now, my calculations are probably a little over-dramatic, but the
quantities are of the right order of magnitude, and kind of astonishing when you think about it. And Google is not
the only one. The cloud computing craze has to be powered somehow, and the cloud’s power will
come from a huge collection of these data centers.

We could “Google” less and index only some of the world’s information — but heading back to ignorance
doesn’t seem to be the right path to me. Instead, what we need is to rethink the “faster, better by
throwing in more power” mentality of processor design and think around the physics of current computing
and energy supply. It’s no easy feat. Perhaps we need algorithmic innovation as well (Green
PageRank, anyone?). Google is stepping up to the mark with its investment programs in sustainable clean
energy and by building closer to energy supplies.

Collectively, chip designers, programmers, users, policy makers and academics will have to create a
gestalt of contributions that run leaner, cleaner and cooler. Perhaps they should look at Google’s
lead as a start.

Surj Patel is the Vice President of Development and Formats at Giga Omni Media. Previously, Surj held
positions with the BBC, Orange, Glubble and Values Of N, and started and ran an award-winning
design business in the UK with partners.

Supercomputers, Hadoop, MapReduce and
the Return to a Few Big Computers

By Alistair Croll

Yahoo announced yesterday it would collaborate with CRL to make supercomputing resources available
to researchers in India. The announcement comes on the heels of Yahoo’s Feb. 19 claim to have the world’s
largest Hadoop-based application now that it’s moved the search webmap to the Hadoop framework.

There are a number of Big Computing problems today. In addition to Internet search, cryptography,
genomics, meteorology and financial modeling all require huge computing resources. In contrast to
purpose-built mainframes like IBM’s Blue Gene, many of today’s biggest computers layer a framework
atop commodity machines.

Google has MapReduce and the Google File System. Yahoo now uses Apache Hadoop. The SETI@Home
screensaver was a sort of supercomputer. And hacker botnets, such as Storm, may have millions of
innocent nodes ready to work on large tasks. Big Computing is still big — it’s just built from lots of
cheap pieces.

But supercomputing is heating up, driven by two related trends: On-demand computing makes it easy
to build a supercomputer, if only for a short while; and Software-as-a-Service means fewer instances
of applications serving millions of users from a few machines. What happens next is simple economics.

Frameworks like Hadoop scale extremely well. But they still need computers. With services like Amazon’s
EC2 and S3, however, those computers can be rented by the minute for large tasks. Derek Gottfrid of
the New York Times used Hadoop and Amazon to create 11 million PDF documents. Combine on-
demand computing with a framework to scale applications and you get true utility computing. With Sun,
IBM, Savvis and others introducing on-demand offerings, we’ll soon see everyone from enterprises
to startups to individual hackers buying computing instead of computers.
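For readers who haven’t seen the pattern, here is a minimal, single-machine sketch of the MapReduce idea in Python: the map step emits key-value pairs, the shuffle groups them by key, and the reduce step aggregates each group. Hadoop distributes exactly this flow across many cheap machines; this only illustrates the model, not Hadoop’s API.

```python
# Minimal single-machine sketch of the MapReduce pattern (word count).
# Hadoop runs the same three phases -- map, shuffle, reduce -- across
# a cluster of commodity machines; this only illustrates the model.
from collections import defaultdict

documents = ["the cloud is big", "big computing is back", "the cloud scales"]

# Map: emit (key, value) pairs from each input record.
mapped = [(word, 1) for doc in documents for word in doc.split()]

# Shuffle: group all values by key.
grouped = defaultdict(list)
for word, count in mapped:
    grouped[word].append(count)

# Reduce: aggregate each group independently (hence easy to parallelize).
word_counts = {word: sum(counts) for word, counts in grouped.items()}
print(word_counts)   # e.g. {'the': 2, 'cloud': 2, 'is': 2, 'big': 2, ...}
```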

At the same time, Software-as-a-Service models are thriving. Companies like Salesforce.com, Rightnow and
Taleo replaced enterprise applications with web-based alternatives and took away deployment and man-
agement headaches in the process. To stay alive, traditional software companies (think Oracle and Microsoft)
need to change their licensing models from per-processor to per-seat or per-task. Once they do this,
simple economies of scale dictate that they’ll run these applications in the cloud, on behalf of their clients.
And when you’ve got that many users for an application, it’s time to run it as a supercomputing cluster.

Maybe we’ll only need a few big computers, after all. And, of course, billions of portable devices to
connect to them.

Defogging Cloud Computing: A Taxonomy

By Michael Crandell

We’re heading quickly into the next big chapter of the Internet revolution, with tremendous buzz and
excitement around the development of cloud computing. As with all major disruptive changes in tech-
nology, cloud computing has generated a flurry of definitions in the press and blogosphere, with
acronyms and metaphors flying. At the same time, it’s important to remember that companies that
are now deploying in the cloud have common problems they’re trying to solve, albeit in different
ways using different approaches. We’ve found it helpful to create a mapping of these approaches –
a taxonomy of the cloud, if you will – to make it simpler to understand product offerings and what
benefits they provide for customers.

The term “cloud computing” has become a catch-all for any information technology solution that does
not use in-house data center or traditional managed hosting resources. Self-defined cloud offerings
range from Amazon Web Services and Google Apps and App Engine to Salesforce and even Apple’s
new MobileMe service for iPhone 3G.

Among all these offerings is a common thread and key differentiator from the past – the notion of
providing easily accessible compute and storage resources on a pay-as-you-go, on-demand basis,
from a virtually infinite infrastructure managed by someone else. As a customer, you don’t know where
the resources are, and for the most part, you don’t care. What’s really important is the capability to
access your application anywhere, move it freely and easily, and inexpensively add resources for
instant scalability. When customers have the power to turn on and off 10, 100, 1,000 or even 10,000
servers as needed – whether because a hot social Web application takes off, or a batch processing
job starts to really crunch – that is the core reason cloud computing is growing so fast. It represents
a true democratization of Web computing, and it’s changing the way IT infrastructure
is being delivered and consumed.


Our taxonomy of cloud computing – which draws also on others’ work – divides product offerings
into three layers:

• Applications in the cloud (Salesforce and other SaaS vendors exist here today) provide
turnkey end-user software, normally browser-based, with a specific functional focus. They
are the easiest to start ‘consuming,’ but also the least flexible. They grow out of the ASP
world of the late ‘90s and encompass the SaaS offerings of today.
• Platforms in the cloud (Google’s AppEngine, Mosso, Heroku are good examples) offer
turnkey environments into which a developer can plug in code written within certain
guidelines or restrictions (programming language, data-store model, etc.), and scaling
is performed “behind the curtains” by the platform.
• Infrastructure in the cloud (Amazon Web Services, Flexiscale, and others) is the most
flexible offering, providing compute and storage resources through a primitive, close-to-bare-metal
API that can be leveraged in a multitude of ways with few restrictions – but which
also requires more up-front work to design and implement. This is where our company
RightScale focuses – we offer a cloud management platform for low-level ‘infrastructure
in the cloud’ resources that preserves flexibility and power, while offering quick deployment
and easy management.
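The three layers above can be summarized as a simple lookup table. The example offerings are the ones named in the text; the helper function is purely illustrative.

```python
# The three-layer cloud taxonomy from the text as a simple data structure.
# Example offerings are the ones named above; the helper is illustrative.
CLOUD_TAXONOMY = {
    "application":    {"flexibility": "lowest",
                       "examples": ["Salesforce"]},
    "platform":       {"flexibility": "medium",
                       "examples": ["Google App Engine", "Mosso", "Heroku"]},
    "infrastructure": {"flexibility": "highest",
                       "examples": ["Amazon Web Services", "Flexiscale"]},
}

def layer_of(offering: str) -> str:
    """Return the taxonomy layer a named offering falls into."""
    for layer, details in CLOUD_TAXONOMY.items():
        if offering in details["examples"]:
            return layer
    return "unknown"

print(layer_of("Heroku"))               # -> platform
print(layer_of("Amazon Web Services"))  # -> infrastructure
```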

Today, we’re all witnessing the beginnings of a huge migration to cloud computing, simply because,
for many applications, it is a better way to organize and manage IT infrastructure resources. Financial
analysts like those from Merrill Lynch say there is a $160 billion addressable market opportunity for
cloud computing. That’s a big number – but not difficult to imagine. Even though we’re still in the
early days, there are leading indicators that suggest how fast the cloud is forming. Forrester reported
that bandwidth for Amazon’s EC2 and S3 in Q4 of 2007 exceeded all of the global Amazon.com web
properties combined during their busiest time of the year. Take a look at Jeff Barr’s blog for an impressive
graph of this data.

The technology industry has been moving toward open standards for some time, and cloud computing
is the next logical step. Cloud solutions – at any of the three levels described above – are attractive
for just about any company with an application that runs in a data center or with a hosted provider,
that doesn’t want to reinvent the wheel or pay a premium. Multi-tenancy, low cost (metered hourly vs.
monthly), high availability with clustered servers (one goes down, spin one up automatically), virtually
infinite scalability with a click – all this is here, and here to stay. Our job as cloud vendors is to make
it easily accessible and manageable, deliver best practices and continue to refine the architecture.

Michael Crandell is CEO and co-founder of RightScale Inc., a Santa Barbara, Calif.-based company that
offers a cloud computing management platform, tools and services.

Web 2.0, Please Meet Your Host, the Internet

By Allan Leinwand

I have a major problem with many of the Web 2.0 companies that I meet in my job as a venture capitalist:
They lack even the most basic understanding of Internet operations.

I realize that the Web 2.0 community generally views Internet operations and network engineering as
router-hugging relics of the past century desperately clutching to their cryptic, SSH-enabled command
line interfaces, but I have recently been reminded by some of my friends working on Web 2.0 applications
that Internet operations can actually have a major impact on this century’s application performance
and operating costs.

So all you agile programmers working on Ruby-on-Rails, Python and AJAX, pay attention: If you want
more people to think your application loads faster than Google and do not want to pay more to those
ancient phone companies providing your connectivity, learn about your host. It’s called the Internet.

As my first case in point, I was recently contacted by a friend working at a Web 2.0 company that
just launched their application. They were getting pretty good traction and adoption, adding around a
thousand unique users per day, but just as the buzz was starting to build, the distributed denial-of-
service (DDOS) attack arrived. The DDOS attack was deliberate, malicious and completely crushed
their site. This was not an extortion type of DDOS attack (where the attacker contacts the site and
extorts money in exchange for not taking their site offline), it was an extraordinarily harmful site
performance attack that rendered that site virtually unusable, taking a non-Google-esque time of
about three minutes to load.

No one at my friend’s company had a clue as to how to stop the DDOS attack. The basics of securing
the Web 2.0 application against security issues on the host system — the Internet — were completely
lacking. With the help of some other friends, ones that combat DDOS attacks on a daily basis, we
were able to configure the routers and firewalls at the company to turn off inbound ICMP echo requests,
block inbound high port number UDP packets and enable SYN cookies. We also contacted the upstream
ISP and enabled some IP address blocking. These steps, along with a few more tricks, were enough
to thwart the DDOS attack until my friend’s company could find an Internet operations consultant to
come on board and configure their systems with the latest DDOS prevention software and configurations.

Unfortunately, the poor site performance was not missed by the blogosphere. The application has
suffered from a stream of bad publicity; it’s also missed a major window of opportunity for user adoption,
which has sloped significantly downward since the DDOS attack and shows no sign of recovering.


So if the previous paragraph read like alphabet soup to everyone at your Web 2.0 company, it’s high time you
start looking for a router-hugger, or soon your site will be loading as slowly as AOL over a 19.2 Kbps modem.

Another friend of mine was helping to run Internet operations for a Web 2.0 company with a sizable amount
of traffic – about half a gigabit per second. They were running this traffic over a single gigabit Ethernet link
to an upstream ISP run by an ancient phone company providing them connectivity to their host, the Internet.
As their traffic steadily increased, they consulted the ISP and ordered a second gigabit Ethernet connection.

Traffic increased steadily and almost linearly until it reached about 800 megabits per second, at which
point it peaked, refusing to rise above a gigabit. The Web 2.0 company began to worry that either their
application was limited in its performance or that users were suddenly using it differently.

On a hunch, my friend called me up and asked that I take a look at their Internet operations and
configurations. Without going into a wealth of detail, the problem was that while my friend’s company
had two routers, each with a gigabit Ethernet link to their ISP, the BGP routing configuration was
done horribly wrong and resulted in all traffic using a single gigabit Ethernet link, never both at the
same time. (For those interested, both gigabit Ethernet links went to the same upstream
eBGP router at the ISP, which meant that the exact same AS-Path lengths, MEDs, and local preferences
were being sent to my friend’s routers for all prefixes. So BGP picked the eBGP peer with the lowest
IP address for all prefixes and traffic). Fortunately, a temporary solution was relatively easy (I configured
each router to only take half of the prefixes from each upstream eBGP peer) and worked with the ISP
to give my friend some real routing diversity.
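To see why identical routing attributes collapse everything onto one link, here is a toy sketch of that tie-break in Python. It is a simplification of BGP best-path selection (real BGP has more steps), and the addresses are made up.

```python
# Toy sketch of the BGP tie-break described above: when local preference
# and AS-path length are identical for every prefix, the decision falls
# through to the lowest peer address, so every prefix picks the same exit.
# Simplified -- real BGP best-path selection has more steps.
from ipaddress import ip_address

peers = {
    "link_A": {"peer_ip": "203.0.113.1", "as_path_len": 2, "local_pref": 100},
    "link_B": {"peer_ip": "203.0.113.5", "as_path_len": 2, "local_pref": 100},
}

def best_exit(candidates):
    # Prefer higher local-pref, then shorter AS path, then lowest peer IP.
    return min(candidates.items(),
               key=lambda item: (-item[1]["local_pref"],
                                 item[1]["as_path_len"],
                                 ip_address(item[1]["peer_ip"])))[0]

for prefix in ["198.51.100.0/24", "192.0.2.0/24", "203.0.113.128/25"]:
    print(prefix, "->", best_exit(peers))   # every prefix chooses link_A
```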

The traffic to my friend’s Web 2.0 company is back on a linear climb – in fact it jumped to over a
gigabit as soon as I was done configuring the routers. While the company has their redundancy and
connectivity worked out, they did pay their ancient phone company ISP for over four months for a
second link that was essentially worthless. I will leave that negotiation up to them, but I’m fairly sure
the response from the ISP will be something like, “We installed the link and provided connectivity,
sorry if you could not use it properly. Please go pound sand and thank you for your business.” Only by
using some cryptic command line interface was I able to enable their Internet operations to scale with
their application and get the company some value for the money they were spending on connectivity.

Web 2.0 companies need to get a better understanding of the host entity that runs their business, the
Internet. If not, they need to find someone who does, preferably someone they bring in at
inception. Failing to do so will inevitably cost these companies users, performance and money.

Allan Leinwand is a Partner at Panorama Capital where he focuses on technology investments. Allan is a
frequent contributor on GigaOM. He co-authored “Cisco Router Configuration” and “Network Management:
A Practical Perspective” and has been granted a patent in the field of data routing.

When Is the Right Time to Launch
Your Own Cloud?

By Alistair Croll

New York-based cloud computing startup 10gen launched today with backing from CEO Kevin Ryan’s
startup network, Alleycorp. It makes sense, since with several ventures already under his belt, Ryan
probably has enough customers to both justify the buildout and break even right away. And the founders
know scaling, having built out ad network DoubleClick.

But is it always a good idea to build your own cloud when you get big enough to do so?

Yesterday, for example, I had a great chat with Lana Holmes, a Bay Area startup maven, about product
management and how to focus on doing the one thing that matters to your company. “The example I
use is Amazon,” she said. “They just focused on selling books. And look at them now.”

At their root, Amazon’s EC2 and S3 offerings are the result of excess capacity from sales. The offerings
have paved the way for an online world in which compute power is a commodity. The company has
subsequently built, on top of those offerings, a layer of billing, services and support for them.

The motivation behind the creation of 10gen is similar: If you successfully launch a number of web firms,
at a certain point the economies of scale of others’ clouds start to fall away and you may as well run
your own.

It’s easier than ever to launch your own cloud. You’ve got grid deployment tools from folks like 3Tera
and Enomaly. Virtualization management can be had from the likes of Fortisphere, Cirba and ManageIQ,
to name just a few. And license management (built into cluster deployment from companies like
Elastra) is knocking down some of the final barriers to building a cloud that you can offer to third
parties as well.

But imagine a world in which there are hundreds of clouds to choose from. Moving a virtual machine
is supposed to be as easy as dragging and dropping, and cloud operators will hate that. They’ll resist,
putting in proprietary APIs and function calls. Applications
and data won’t be portable. You’ll be locked in to a cloud provider, who will then be free to charge
for every service. Sound familiar?


My guess is that as the cloud computing market grows and matures, one (or more) of three things
will happen:

• Standardization and portability, in which consortia of cloud vendors agree to a standard
set of APIs and coding constraints that guarantee interoperability. This isn’t just about the
virtual machines; they’re fairly standard already. It’s about the data storage systems and
the control APIs that let cloud users manage their applications. This is the mobile phone
model, where number portability is guaranteed and there are well-known services like voice
mail and call forwarding.
• Shared grid computing, in which smaller clouds sell their excess capacity to bigger clouds.
This would let the big cloud dominate while paying the smaller cloud just enough to stop it
from launching an offering of its own. Think of this as the electric company model, selling
computing between clouds the way a solar-powered household can pump excess electricity
into the power grid.
• Specialization, where clouds are good at certain things. You’ll get OS-specific clouds (Heroku
is already providing optimized Rails deployment atop EC2). It’s only a matter of time before
we see clouds tailored for specific industries or the services they offer — anything from media
to microtransactions. Sort of like the cable channel model, with specialized programming
that allows niche channels to survive.

Whatever happens, it’s clear that good old-fashioned branding, plus a healthy dose of experience,
will be key to winning as a cloud provider.

During a panel at Interop last week that I sat on with folks from Amazon, Opsource, Napera, Syntenic
and Kaazing, I asked the audience how many of them would entrust Microsoft to run a cloud with
Microsoft applications, and how many would prefer to see Amazon running a Microsoft kernel on EC2.
Roughly 75 percent said they’d trust Amazon to run Microsoft’s own apps rather than Microsoft.

So when’s the right time to launch a cloud computing offering of your own? Unless you have the
branding and reputation to support that launch — or you can re-sell excess capacity to partners or
specialize — maybe never.

Alistair Croll is a senior analyst at research firm Bitcurrent, covering emerging web technologies, networking,
and online applications and is a frequent contributor on GigaOM. Prior to Bitcurrent, Alistair co-founded
Coradiant, a leader in online user monitoring, as well as research firm Networkshop.

Why Cloud Computing Needs Security

By Alistair Croll

Bribery, extortion and other con games have found new life online. Today, botnets threaten to take
vendors down; scammers seduce the unsuspecting on dating sites; and new viruses encrypt your
hard drive’s contents, then demand money in return for the keys.

Startups, unable to bear the brunt of criminal activity, might look to the clouds for salvation: After all,
big cloud computing providers have the capacity and infrastructure to survive an attack. But the
clouds need to step it up; otherwise, their single points of failure simply provide more appealing targets
for the bad guys, letting them take out hundreds of sites at once.

Last Friday, Amazon’s U.S. site went off the air, and later some of its other properties were unavailable.
Lots of folks who wouldn’t let me quote them, but should know, said that this was a denial-of-service
attack aimed at the company’s load-balancing infrastructure. Amazon is designed to weather huge
amounts of traffic, but it was no match for the onslaught.

When it comes to online crime, the hackers have the advantage. A simple Flash vulnerability nets them
thousands of additional zombies, meaning attacks can come from anywhere. During Amazon’s attack,
legitimate visitors were greeted with a message saying they were abusing Amazon’s terms of service,
which could mean that those visitors were either using PCs that were part of the attack, or were on
the same networks as infected attackers. The botnets are widespread, and you can’t block them
without blocking your customers as well.

Other rackets give the attacker an unfair edge, too: It takes an army of machines to crack the 1024-bit
encryption on a ransom virus, but only one developer to write it.

A brand like Amazon can weather a storm, because people will return once the storm has passed. But
just look at the Twitter exodus to see how downtime from high traffic loads can tarnish a fledgling brand.
Slideshare survived such an attack in April, and while many other sites admit to being threatened,
they won’t go on the record as saying so.

Up-and-coming web sites are often great targets, as they lack the firewalls, load-balancers and
other infrastructure needed to fight back. And it’s not just criminals: In some cases, the attacker is a
competitor; in others, it’s someone who just doesn’t like what you’re doing.


Fighting off hackers is expensive. Auren Hoffman calls this the Black Hat Tax, and points out that many
top-tier Internet companies spend a quarter of their resources on security. No brick-and-mortar company
devotes this much attention to battling fraud.

Wanting to survive an attack is yet another reason for startups to deploy atop cloud computing offerings
from the likes of Amazon, Google, Joyent, XCalibre, Bungee, Enki and Heroku. But consolidation of the
entire Internet onto only a few clouds may be its Achilles’ heel: Take down the cloud, and you take down
all its sites. That’s one reason carriers like AT&T and CDNs like Akamai are betting that a distributed
cloud will win out in the end.

Cloud operators need to find economies of scale in their security models that rival the efficiencies
of hackers. Call it building a moat for the villagers to protect them from the barbarians at the gate.
Otherwise, this will remain a one-sided battle that just gives hackers more appealing targets.

10 Reasons Enterprises Aren’t Ready
to Trust the Cloud

By Stacey Higginbotham

Many entrepreneurs today have their heads in the clouds. They’re either outsourcing most of their
network infrastructure to a provider such as Amazon Web Services or building out such infrastructure
to capitalize on the incredible momentum around cloud computing. I have no doubt that this is The
Next Big Thing in computing, but sometimes I get a little tired of the noise. Cloud computing could
become as ubiquitous as personal computing, networked campuses or other big innovations in the
way we work, but it’s not there yet.

Because as important as cloud computing is for startups and random one-off projects at big companies,
it still has a long way to go before it can prove its chops. So let’s turn down the noise level and add
a dose of reality. Here are 10 reasons enterprises aren’t ready to trust the cloud. Startups and SMBs
should pay attention to this as well.

1. It’s not secure. We live in an age in which 41 percent of companies employ someone to read their
workers’ email. Certain companies and industries have to maintain a strict watch on their data at
all times, either because they’re regulated by laws such as HIPAA or the Gramm-Leach-Bliley Act or
because they’re super paranoid, which means sending that data outside company firewalls isn’t
going to happen.

2. It can’t be logged. Tied closely to fears of security are fears that putting certain data in the cloud
makes it hard to log for compliance purposes. While there are currently some technical ways
around this, and undoubtedly startups are out there waiting to launch products that make it
possible to log “conversations” between virtualized servers sitting in the cloud, it’s still early days.

3. It’s not platform agnostic. Most clouds force participants to rely on a single platform or host only
one type of product. Amazon Web Services is built on the LAMP stack, Google App Engine locks
users into proprietary formats, and Windows lovers out there have GoGrid, the cloud computing
offering from the ServePath guys. If you need to support multiple platforms, as most enterprises do,
then you’re looking at multiple clouds. That can be a nightmare to manage.

4. Reliability is still an issue. Earlier this year Amazon’s S3 service went down, and while entire
systems may not crash, Mosso experiences “rolling brownouts” of some services that can affect
users. Even inside an enterprise, data centers or servers go down, but generally the communication
around such outages is better and in many cases, fail-over options exist. Amazon is taking steps
toward providing (pricey) information and support, but it’s far more comforting to have a
company-paid IT guy on which to rely.


5. Portability isn’t seamless. As all-encompassing as it may seem, the so-called “cloud” is in fact made
up of several clouds, and getting your data from one to another isn’t as easy as IT managers would
like. This ties to platform issues, which can leave data in a format that few or no other clouds accept,
and also reflects the bandwidth costs associated with moving data from one cloud to another.

6. It’s not environmentally sustainable. As a recent article in The Economist pointed out, the emergence
of cloud computing isn’t as ethereal as it might seem. The computers are still sucking down
megawatts of power at an ever-increasing rate, and not all clouds are built to the best energy-efficiency
standards. Moving data center operations to the cloud and off corporate balance sheets is kind of
like chucking your garbage into a landfill rather than your yard: the problem is still there, but you no
longer have to look at it. A company still pays for the poor energy efficiency, and if we assume that
corporations are going to try to be more accountable with regard to their environmental impact,
controlling IT’s energy efficiency is important.

7. Cloud computing still has to exist on physical servers. As nebulous as cloud computing seems, the
data still resides on servers around the world, and the physical location of those servers is important
under many nations’ laws. For example, Canada is concerned about its public sector projects being
hosted on U.S.-based servers because, under the U.S. Patriot Act, that data could be accessed by the
U.S. government.

8. The need for speed still reigns at some firms. Putting data in the cloud means accepting the latency
inherent in transmitting data across the country and the wait as corporate users ping the cloud and
wait for a response. Ways around this problem exist with offline syncing, such as what Microsoft
Live Mesh offers, but it’s still a roadblock to wider adoption.

9. Large companies already have an internal cloud. Many big firms have internal IT shops that act as a
cloud to the multiple divisions under the corporate umbrella. Not only do these internal shops have
the benefit of being within company firewalls, but they generally work hard — from a cost perspective —
to stay competitive with outside cloud resources, making the case for sending computing to the
cloud weak.

10. Bureaucracy will cause the transition to take longer than building replacement housing in New Orleans.
Big companies are conservative, and transitions in computing can take years to implement. A good
example is the challenge HP faced when trying to consolidate its data center operations. Employees
were using over 6,000 applications and many resisted streamlining of any sort. Plus, internal IT
managers may fight the outsourcing of their livelihoods to the cloud, using the reasons listed above.

Cloud computing will be big, both in and outside of the enterprise, but being aware of the challenges
will help technology providers think of ways around the problems, and let cloud providers know what
they’re up against.

Stacey Higginbotham has over ten years of experience reporting on business and technology for publications
such as The Deal, the Austin Business Journal, The Bond Buyer and Business Week. She is currently the lead
writer for GigaOM, where she covers both the infrastructure that allows companies to deliver services via the
web and the services themselves.

The Craft: Automation and Scaling Infrastructure

By Andrew Shafer

“Progress is made by lazy men looking for easier ways to do things.”
— Robert A. Heinlein

Until the late 18th century, craftsmen were a primary source of production. With specialized skills, a
craftsman’s economic contribution was a function of personal quantity and quality, and a skilled artisan
often found it undesirable, if not impossible, to duplicate previous work with accuracy. Plus, there is a
limit to how much a skilled craftsman can do in one day. Scaling up the quantity of crafted goods to
meet increased demand was a question of working more or adding more bodies — both of which
potentially sacrificed quality and consistency.

Today, Internet applications and infrastructure are often the creations of skilled modern craftsmen. The
raw materials are files, users, groups, packages, services, mount points and network interfaces —
details most people never have to think or care about. These systems often stand as a testament to
the skill and vision of a small group or even an individual. But what happens when you need to scale
a hand-crafted application that many people — and potentially the life of a company — depend on?
The drag of minor inefficiencies multiplies as internal and external pressures create the need for more:
more features, more users, more servers and a small army of craftsmen to keep it all together.

These people are often bright and skilled, with their own notions and ideas, but this often leads to
inconsistencies in the solutions applied across an organization. To combat inconsistency, most
organizations resort to complicated bureaucratic change control policies that are often capriciously
enforced, if not totally disregarded — particularly when critical systems are down and the people who
must “sign off” have little understanding of the details. The end result is an organization that
purposely curtails its own ability to innovate and adapt.

Computers are extremely effective at doing the same repetitive task with precision. There must be
some way to take the knowledge of the expert craftsmen and transform it into some kind of a program
that is able to do the same tasks, right? The answer is yes, and in fact, most system administrators
have a tool belt full of scripts to automate some aspects of their systems. For both traditional craftsmen
and system administrators, better tools can increase quantity and quality of the work performed.


Policy-driven automation facilitates both predictability and adaptability, reduces potential human
errors and enables the organization to scale IT infrastructure without a proportional increase in head
count. Commercial options are available, but they’re not entirely transparent, and for small or
medium-sized organizations, they are prohibitively expensive. The open source options are varied,
based on diverse philosophical and functional underpinnings, with different levels of adoption and
community support.

Puppet, which was inspired by years of automation using CFEngine, is a relatively young open source
configuration framework with a thriving community. Using parameterized primitives like files, packages
and services, Puppet’s declarative language can model collections of resources and the relationships
between them using inheritance and composition. Puppet enables consistent management of the
server life cycle, building a system from a clean operating system, restarting services when their
configurations change, and decommissioning references to a retired system.
Furthermore, Puppet uses “resource abstraction,” making the codified configurations portable across
platforms and potentially generic enough to be shared within the community.
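To make “declarative, policy-driven” concrete, here is a toy convergence loop in Python. It only mimics the idea behind Puppet (describe the desired end state, let an engine make the minimum changes needed to reach it); it is not Puppet’s language or API, and the resources are invented for the example.

```python
# Toy illustration of declarative configuration management: describe the
# desired end state once and let an engine converge the machine toward it.
# This mimics the idea behind Puppet, not its actual language or API.

desired = {
    ("package", "ntp"):    "installed",
    ("service", "ntpd"):   "running",
    ("file", "/etc/motd"): "present",
}

# Pretend snapshot of the machine's current state.
actual = {
    ("package", "ntp"):    "absent",
    ("service", "ntpd"):   "stopped",
    ("file", "/etc/motd"): "present",
}

def converge(desired, actual):
    """Apply only the changes needed; re-running it is a no-op (idempotent)."""
    for resource, want in desired.items():
        have = actual.get(resource, "absent")
        if have != want:
            print(f"{resource}: {have} -> {want}")
            actual[resource] = want   # a real engine would install/start/write here
        else:
            print(f"{resource}: already {want}, nothing to do")

converge(desired, actual)   # makes two changes
converge(desired, actual)   # second run changes nothing
```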

Executable, policy-driven automation doesn’t remove the need for knowledge and skill. Automation
allows the knowledge to be invested in infrastructure design, and lets the computers carry out the
results of the decisions. Instead of trying to replicate individual craftsmen and processes, systems
like Puppet herald a bold new future of infrastructure: rather than micromanaging individual
craftsmen, we can build vast factories of instantly scalable, on-demand resources by describing how
to solve the problem as a set of strategic rules for the infrastructure to understand and act on.

Andrew Shafer, partner at Reductive Labs, has developed high performance scientific computing applications,
embedded Linux interfaces and an eCommerce SaaS platform. He currently works full time on Puppet, a free,
open-source server automation framework available for Linux, Solaris, FreeBSD and OS X.

Is Infrastructure the New Marketing Medium?

By Steven Woods

In the first decade of major commercial adoption of the Internet, marketers quickly seized upon the
new media types it provided, such as email, banner ads, search placements, and now the broad
variety of new media options that have appeared in recent years. Marketers, however, adopted these
media types in very much the same way that television, radio, and print media had been used in
prior decades.

Today, progressive marketers are realizing that these new media types can provide a greater depth of
information on prospects’ interest areas and objections — a service that is often more valuable than
the marketing communication itself. A desire to better communicate with prospects based on an
understanding of their true interests is driving a shift toward coordinating all communications into a
common technology platform. Using that common platform to provide deep insight into prospect
interests will drive innovation within marketing for the next decade.

As information required by prospects is increasingly found online, the nuances of what someone
looked at, what caught someone’s eye, and what someone reacted negatively to become as important
to a marketer as the nuances of body language are to the salesperson
communicating face-to-face with a prospect. For marketers to succeed in today’s world, they need to
become proficient at reading this “digital body language.”

The value of tracking a prospect’s behavior is directly related to the number of marketing touchpoints
that can be aggregated: Web, email, direct mail, search, downloads, webinars and whitepapers all
tell a piece of the story. Together, they provide direct, actionable insight into the prospect’s propensity
to buy. To repeatably and reliably provide this type of insight, marketers need an infrastructure that
relieves them from the technical details of both launching campaigns across multiple media and tracking
the results of individual components through to a web site. Without this infrastructure, marketing
won’t be able to innovate at the level of today’s expectations.

With this infrastructure in place, new campaigns that coordinate messages, promotions and commu-
nication across media types, in real time, based on a prospect’s actual interest area are possible.
When a prospective condominium buyer spends time looking at two bedroom units with a lake view,
a direct mail offer might be sent highlighting one such unit. When a qualified prospective buyer of
network equipment spends significant time digging into technical specifications of a new router, they
might be invited to a detailed technical webinar with the lead engineers of that router.

As marketers explore the prospect insights that the new marketing infrastructure provides, while
leveraging the time that is freed up by having a platform that takes
care of the mundane details of campaign execution, such innovations will accelerate. We will see the
media types that the Internet created — and many media types that existed prior to the Internet —
used in novel ways for innovative campaigns that could never have been considered before.

Steven Woods, co-founder and CTO of Eloqua, leads the company’s product strategy and technology vision
while working with hundreds of today’s leading marketers. Mr. Woods has gained a reputation as a leading
thinker on the transition of marketing as a discipline. Most recently, he was named to Inside CRM’s “Top 25
CRM Influencers of 2007.”

Achieving Equality is Critical to the Future
of the Internet

By Dr. Lawrence G. Roberts

Inequality, or “unfairness” in how network capacity is allocated between different homes or computers,
is causing major reductions in the actual realized speed of Internet service for almost every user. The
magnitude of the problem is well beyond what most people understand, with realized access speed
often reduced to as little as a tenth of its potential. For the Internet to truly support all of our imagined
uses — video, voice, gaming, social networking and the like — we must eliminate the basic inequality
inherent in TCP/IP. To put it simply: Each user must receive equal capacity for equal payment.

Let’s consider the residential ISP market. The real goal should be to provide equal capacity to all homes
that have paid the same amount, and on some scale, more to those that paid more.

In the current situation, pricing is flat, and any user, via a “greedy” program like P2P, can capitalize on
TCP’s preference for multi-flow traffic and drag down the average capacity of all other users. So far,
the most common approach to addressing inequality problems is Deep Packet Inspection (DPI),
which literally inspects packet contents to find P2P applications — and then slows them down or
kills them.

However, this inspect-and-destroy approach has led to a new kind of arms race: P2P applications
add encryption and rapidly changing “signatures,” and DPI constantly races to catch up. In a typical
network, DPI finds roughly 70 percent of the P2P traffic, and things will only get more difficult as
encryption becomes the norm and signatures change even faster. Even at 70 percent detection, the
remaining P2P still slows down all the normal users to a third of potential speed.
The problem affects residential users, but it can be even more serious in a school or corporate
environment. It is clear that DPI is doomed as a solution for containing P2P. However, a totally
different solution is possible.

Each cable or DSL concentrator has a maximum capacity which must be shared at any moment. If all
the traffic from each home was rate-controlled to share the total capacity equally, a P2P user with 10
flows would get 10 percent of the capacity per flow when compared to a neighbor downloading a
new application with one flow. Both homes would get the same number of bytes delivered in the same
amount of time. A third neighbor doing something simple, such as browsing the web or checking his
email, would get much faster service than before, since his short-duration flow would not experience
any delay or loss. That is, unless he extended his session long enough that the total use neared that
of the file transfer users. In that case, he would be treated the same as the others who are consuming
the same amount of capacity for the same price.
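The arithmetic of per-flow versus per-home sharing is easy to check with a toy calculation; the link capacity and flow counts below are invented for illustration.

```python
# Toy comparison of per-flow (TCP-style) vs. per-home capacity sharing.
# Capacity and flow counts are invented for illustration.
link_capacity_mbps = 100.0
flows_per_home = {"p2p_user": 10, "single_download": 1, "web_browser": 1}

total_flows = sum(flows_per_home.values())

# TCP-style: capacity is split per flow, so more flows means a bigger share.
per_flow_sharing = {home: flows / total_flows * link_capacity_mbps
                    for home, flows in flows_per_home.items()}

# Rate equalization: capacity is split per paying home, regardless of flows.
per_home_sharing = {home: link_capacity_mbps / len(flows_per_home)
                    for home in flows_per_home}

print(per_flow_sharing)   # p2p_user ~83 Mbps, the others ~8 Mbps each
print(per_home_sharing)   # every home gets ~33 Mbps
```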

Since this “automatic rate equalization” does not require inspection of every packet, it operates at full 10
Gbps trunk rates quite inexpensively compared to using many DPI systems, and the result is complete
network usage equality for all users paying for the same service.

Once inequality is eliminated in the network, application vendors can stop devising techniques that
unfortunately harm other users and start discovering techniques that deliver improved service. Easing
traffic snarls will also bring down the cost of reliable, high-speed Internet service substantially. Without
solving the TCP/IP inequality problem, providing affordable Internet service will become extremely
difficult — if not impossible.

Dr. Lawrence Roberts led the ARPANET project, which birthed the Internet as we know it today. The ARPANET
team included distinguished individuals such as Vint Cerf, who co-created the core TCP/IP protocols that underlie the
infrastructure of our modern IP-based communication systems. We are proud to have Dr. Roberts speak at our
Structure 08 conference and proud to present his thoughts here on the problems P2P and inequality in network
capacity cause for consumers.

Does the Internet Need More Roads or
Better Traffic Signals?

By Stacey Higginbotham

If the Internet is a highway, then the companies responsible for maintaining the roads are increasingly
at odds with the ones producing a lot of the traffic. Comcast throttling BitTorrent traffic as a way to
protect network integrity (or so it says) is one example. Another can be found in the arguments of a
British ISP that’s seeking to get the BBC to pay for network upgrades, claiming the broadcaster’s
iPlayer is hogging too much bandwidth.

I’m not going to get into the insanity happening in the UK right now, but what is worth talking about
is how networks can handle the increasing amount of traffic going through their pipes. The request
for funding to build more robust networks made by Simon Gunter, chief of strategy at ISP Tiscali, is
akin to asking car companies to pay a tax for building more roads. It’s one way to address the issue,
but there are other options, among them better traffic management, which would decrease the distance
cars need to travel.

Now that I’ve thoroughly beaten that metaphor into the ground, let’s talk network management. It’s an
evil phrase, but necessary in a world in which backhaul is limited and fiber to
the home is still a luxury. Recall that the FCC had no problem with Comcast engaging in network
management practices per se; its objection was that Comcast “managed” a specific application without disclosing
that fact to consumers. And the application attacked was competing with Comcast’s own
cable offerings.

Many of these media files are delivered via peer-to-peer networks. They’ve long been the most efficient
way to get large amounts of data across a network, and now they’re working hard to be even more
efficient. Nine months ago, Verizon and Pando Networks stepped up to create the Peer 4 Peer
working group, which is trying to create a standardized protocol through which P2P firms and ISPs
could work together. The idea was that sharing an ISP’s network topology would help P2P companies
route traffic in ways that are advantageous to both the ISP and the end user. Results included a 235
percent increase in delivery speeds in the U.S. and keeping more traffic inside an ISP’s own network.


The other way to reduce traffic involves each P2P company making tweaks to their software. In October
of 2007, BitTorrent launched a function called BitTorrent DNA that recognizes when a network point
is too congested and shunts the traffic flow through different areas. Jay Monahan, general counsel
for Vuze, says his P2P company started paying more attention to congestion within the last few
months as well.

At some point new roads will have to be built. But in the meantime, there are ways to prevent network
congestion that don’t involve kicking certain cars off the road.

“Smart News, Smart Analysis”

Long gone are the days when we caught up with world events by reading a newspaper at breakfast or turned
on the TV to watch our favorite news anchor.

We are now in an information economy which is creating an ocean of data. Your father’s media is simply not
cutting it anymore. We need relevant information, analysis and insight to keep ahead of the demand curve and
ensure success. The new world also requires us to both consume and participate in the news, simultaneously
devouring, sharing and shaping the flow for our own needs.

Giga Omni Media’s publications are recognized by many independent sources as among the largest and most
influential in the technology and business industries. Our publications are among the leading daily online news
reads for the key influencers in the emerging technology marketplace. We deliver technology news, analysis and
opinions on topics of interest to knowledge workers ranging from internet infrastructure and open source software
to online video and cleantech. Founded in 2006, we now serve a monthly global audience of over 1.75 million
consumers and professionals interested in the latest news in the world of high-tech.

Built by experienced journalists, the Giga Omni Media team spots the trends and applies a professional journalistic
perspective to provide the reader with a definite point of view. Giga Omni Media reports the news and makes
the audience smarter through informed analysis. Our unique combination of in-depth reporting, editorial articles,
opinion polls, and market metrics helps us to highlight the most interesting startups, trends, products, and people
in technology. Giga Omni Media fosters a community with its readers and engages with them in a dialogue about
where technology is heading both online and in person at its live events.

Giga Omni Media’s portfolio of publications has received multiple accolades and awards:

• CNET: 100 Most Influential Blogs
• Business Week: Best of the Web for Tech News in 2006 and 2007
• PC Magazine: 100 Favorite Blogs
• Technorati: Top 50 Blogs
• Forbes: The Web Celeb 25 in 2007 and 2008
• Brodeur & Marketwire: Named GigaOM as the most credible technology blog

The premier destination site for technology industry insiders and its movers and shakers.
Written by Om Malik, a well known, highly respected tech industry journalist, GigaOM.com is widely considered the
authoritative tech blog site for discovering what’s new, relevant and interesting in the dynamic world of technology. This
popular website covers broadband, VoIP, IPTV, wireless and mobile, venture capital and other new technologies — with
signature intelligence, candor and irreverence.

AUDIENCE
• 76% age 18-39
• 38% Executive (director and above)
• 68% HHI above $75k
• 53% are IT professionals, engineers, developers/ISVs

Television Reinvented
The opportunity to watch, make, mash up and share online video is changing the nature of entertainment. And as evidenced
by the growing number of online video-related startups going from little more than ideas to the targets of multimillion-dollar
acquisition offers in just a few short years, the transmission of video is changing the landscape of the Internet. NewTeeVee
reports on the business of online video, monitoring the spread of broadband as it tears down traditional hierarchies that
obstruct content creators from reaching their audiences. We cover everything from content delivery networks to next-
generation media players, stupid cat videos to independent filmmakers, venture-backed content startups to media
companies testing the online waters. Initially an industry insider publication, NewTeeVee is rapidly expanding its
audience as online video goes mainstream.

AUDIENCE
• Traditional media companies
• Investors
• Content distribution networks
• Content creators
• Digital media entrepreneurs
• On-screen talent
• Advertising agencies
• Fans of online video

Find More Success with the Web
The fastest-growing category of today’s workforce is the knowledge worker, a trend that’s predicted to continue unabated
for the next 30 years as more economies become information-driven. To fuel that, a generation of professional web-based
workers has emerged; WebWorkerDaily helps them to become more efficient, productive, successful and satisfied. The
site provides a saber to hack through the ever-growing mountain of information and schedule distractions that conspire
to clog up web workers’ time; it also provides hands-on reviews and practical analysis of the tools found on the new and
emerging web. WebWorkerDaily’s team of writers have built successful careers in non-traditional settings; each day they
share their practical, resourceful and inspiring secrets with readers.

AUDIENCE
• Mobile workers
• Distributed project teams
• Independent consultants
• Developers
• Small business owners
• IT managers

Calling All Ecopreneurs


The threat of global warming has inspired a new wave of entrepreneurs and innovators to develop technology that
could ultimately save our planet. Earth2Tech.com is a news-based web site that chronicles these cutting-edge clean
technology startups and their innovations, be they based on solar, biofuels, wind, energy efficiency, green IT, water or
other materials. Earth2Tech keeps all members of the eco-ecosystem — from entrepreneurs and investors to students,
researchers and policymakers — informed.

AUDIENCE
• Investors
• Policymakers
• Entrepreneurs
• Scientists and researchers
• Cleantech startup executives
• Green-leaning consumers
• Tech companies with eco-initiatives
• Cleantech lawyers, media representatives, analysts and journalists

Open Source: Find. Evaluate. Collaborate.
There are hundreds of thousands of great open-source, proprietary and web-based applications to choose from today;
finding the right one is hard. So in March 2008, Giga Omni Media launched OStatic, a site that delivers a comprehensive
repository of open-source applications and a set of tools that allow users to find them, evaluate them and collaborate
on them more effectively. OStatic combines Giga Omni Media’s insightful and in-depth reporting with cutting-edge
community tools to bring better information, case studies and context to users interested in open-source software solutions.

AUDIENCE
• Tech-savvy individuals • IT executives
• Hackers • C-level executives
• Developers • Startups
• System administrators • Aspiring founders
• Business managers

CONTACT GIGA OMNI MEDIA:

Sponsor Inquiries
sponsors@gigaom.com

Events and PR Inquiries


program@gigaom.com

Editorial Inquiries
info@gigaom.com
