
This study explores an under-studied layer of Chinese Internet censorship: how Chinese Internet companies censor user–generated content, usually by deleting it or preventing its publication. Systematic
testing of Chinese blog service providers reveals that domestic
censorship is very decentralized with wide variation from company to
company. Test results also showed that a great deal of politically
sensitive material survives in the Chinese blogosphere, and that chances
for its survival can likely be improved with knowledge and strategy. The
study concludes that choices and actions by private individuals and
companies can have a significant impact on the overall balance of
freedom and control in the Chinese blogosphere.

Contents

1. Introduction
2. Background and research questions
3. Testing methodology
4. Findings
5. Conclusions

1. Introduction

By mid–2008, 253 million Chinese had gone online, and China had surpassed the United States to become the country with the most Internet users
(China Internet Network Information Center [CNNIC], 2008). By the end
of 2007 there were 47 million bloggers in China with 72 million blogs,
according to official statistics (CNNIC, 2007). By the end of 2008
Chinese Internet users were found to be spending more time online than
Internet users in any other country. They were also found to be more
likely to contribute to various kinds of online social networking sites —
blogs, forums, chatrooms, photo or video–sharing Web sites, etc. —
than people in all other countries surveyed except Korea and France
(TNS Global Interactive, 2008). These statistics help to explain why the
Internet has become the front–line battleground in China’s new
“informational politics” (Yang, 2008).

In late 2007 Cai Mingzhao, a vice minister in charge of government information policy and regulation, emphasized that the Chinese online
media of all forms must “have a firm grasp of correct guidance, creating
a favorable online opinion environment for the building of a harmonious
society” (Bandurski, 2007). Thousands of “Internet police,” deployed in
many cities, are only one weapon used by the Chinese government in its battle for the Chinese people’s hearts and minds (Xiao, 2007). Tens of
thousands of overseas Web sites are blocked to users of domestic
Chinese Internet services. Private citizens in every city and province are
enlisted as volunteers or paid commentators to “guide” online
conversations in a pro–government direction or to act as watchdogs,
reporting anti–government conversations to the authorities (Bandurski,
2008b). Managers and employees of Internet companies — both
domestic and foreign — are also expected to do their part in preventing
China’s online discourse from getting out of hand (Pan, 2006a).

Chinese Internet censorship has received a great deal of attention from the international media, Western governments, the international human rights activist community, and increasingly from the academy. Most of that
attention, however, has focused on one part of China’s Internet
censorship system: filtering. Internet filtering is the process by which
users accessing the Internet from a particular network are blocked from
visiting certain Web sites. Filtering can be done at various levels: the
household; local business or residential networks; Internet service
providers (ISPs); or, at the regional network or national gateway level.
The same filtering techniques — and often the same software — are
used by Western corporate offices, schools, and by a range of
governments (Villeneuve, 2006). In China, filtering is achieved by
plugging “blacklisted” Web site addresses and keywords into the routers
and software systems controlling Internet traffic across Chinese
domestic networks and at the gateway points through which information
travels between the domestic Chinese Internet and the global Internet
(Clayton, et al., 2006). China’s filtering system is widely known as the
“Great Firewall of China” — a phrase coined informally by bloggers and
Internet users. The international media often inaccurately conflates the
“Great Firewall” with China’s official Golden Shield Project: a much
broader project focused on surveillance, data mining, and the upgrading of internal public security networks, of which Internet filtering is only a
very small part (August, 2007; Fallows, 2008; Walton, 2001; Klein,
2008). Filtering is the Chinese government’s primary method of blocking
access to sensitive content hosted on overseas Web sites.
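The blacklist mechanism described above can be illustrated with a minimal sketch. This is not code from any actual filtering system; only the decision logic is shown, and the list contents and function name are hypothetical placeholders:

```python
# Minimal illustration of blacklist-based filtering: a request is dropped
# if its host is on a blocked-site list or the returned content contains a
# blacklisted keyword. All list entries below are hypothetical placeholders.

BLOCKED_URLS = {"example-blocked-site.org"}                   # hypothetical
BLOCKED_KEYWORDS = ["sensitive-term-1", "sensitive-term-2"]   # hypothetical

def is_filtered(url: str, page_text: str) -> bool:
    """Return True if the request would be dropped by the filter."""
    # Extract the host portion of the URL (e.g. "example.org" from
    # "http://example.org/page") and check it against the blocklist.
    host = url.split("//")[-1].split("/")[0]
    if host in BLOCKED_URLS:
        return True
    # Otherwise check the page content for any blacklisted keyword.
    return any(kw in page_text for kw in BLOCKED_KEYWORDS)
```

Real gateway filtering of this kind operates in routers and deep-packet inspection systems rather than in application code; the sketch only captures the two triggers the paragraph describes, blacklisted addresses and blacklisted keywords.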

While a 2005 study by the Open Net Initiative found China’s Internet
filtering system to be “the most sophisticated in the world” (OpenNet
Initiative, 2005), this layer of censorship is imperfect because the
“Great Firewall” can be “scaled” by proxy servers, secure tunneling, and
other circumvention methods. As a result, the U.S. government and
many non–governmental organizations hoping to promote free speech in
China have invested substantial resources into the dissemination of
information about circumvention tools to Chinese Internet users (Ha,
2006). Circumvention on its own, however, cannot solve the whole
problem of access to information or open the door to a truly free public
discourse in China. It does nothing to address the separate problem
of domestic Internet censorship. For content on Web sites hosted on
computer servers inside China, circumvention tools are irrelevant
because content has been deleted or prevented from existing.

Lokman Tsui (2008) has suggested that the focus on the “Great
Firewall” and Internet filtering by Western scholars, policymakers,
media, and activists is due to their misguided tendency to view China
through “Iron Curtain” Cold War–era paradigms. Others have pointed
out that filtering is the one aspect of Chinese Internet censorship which
foreigners visiting China are most likely to encounter personally, while
censorship affecting the domestic Chinese–language Internet is much
less apparent to visitors and outsiders. Furthermore, overseas–based
free speech activists themselves — such as Radio Free Asia, Human Rights Watch, and Reporters Without Borders — have all had their own Web sites blocked by Chinese Internet filtering, making filtering the most immediate concern for them and their causes (MacKinnon, 2008c).
Whatever the reason for the focus on filtering and lack of attention to
other kinds of censorship, this paper aims to help address the imbalance
by shedding light on another part of China’s Internet censorship system:
The process of domestic Web site censorship by which domestically
hosted content is deleted completely or prevented from being published
in the first place. The whole process is carried out almost entirely by
employees of Internet companies, not by “Internet police” or other
government officials. This study focuses specifically on one small piece
of this domestic censorship system: How blog service providers (BSPs)
censor blogs written by their Chinese users.

 
2. Background and research questions

Blogging gained critical mass in China in 2004 and 2005. Because all the
major international BSPs (Blogspot, Typepad, Wordpress.com, etc.)
were blocked by the “Great Firewall,” most Chinese bloggers — who like
most bloggers worldwide are not technically skilled enough to arrange
their own Web–hosting and install their own blogging software — were
forced to publish on domestic Chinese BSPs. By 2006, concerned about
the role played by blogs, chatrooms and forums in the 2005 anti–
Japanese street protests, authorities had created a system of
regulations and obligatory “self discipline” pledges in hopes of
compelling Web companies to keep user–generated content from going
beyond certain limits (MacKinnon 2008a).

All companies running Web sites in China — portals, search engines, social networking services, chatrooms, forums, blogs, video– or photo–
sharing Web sites, etc. — are now required to comply with government
censorship demands in order to keep their business licenses (Reporters
Without Borders, 2007). Politically sensitive content is deleted from the
Web by company employees, or by computer programs written by
company employees, either in response to official directives or often
simply in anticipation of trouble (Weiquan Wang [Chinese Human Rights
Defenders], 2008; Reporters Without Borders, 2007). In 2007 one
Chinese blogger even tried to sue his BSP for censoring content, which
he argued was not illegal – to no avail. The suit was thrown out of court
(Olesen, 2007; Dickie, 2007).

Web sites hosted outside of China, containing Chinese–language content targeted at a mainland Chinese audience, are asked to prevent the
publication of certain politically sensitive content, or face the possibility
of being blocked. While some foreign companies have opted not to
comply and thus forego Chinese market opportunities, others have
complied to varying degrees. Google, Yahoo! and Microsoft all offer
censored versions of their search engines to the Chinese market in order
to maintain good government relations and their business operations in
China (Human Rights Watch, 2006; MacKinnon, 2008b).

Recent studies of domestic Chinese search engines and foreign–branded search engines serving the Chinese market (specifically Baidu, Yahoo!
China, MSN, and Google.cn) conducted by Human Rights Watch and the
University of Toronto’s Nart Villeneuve reveal a great deal of variation
from company to company in both the extent and the methods of
censorship (Human Rights Watch, 2006; Villeneuve, 2008). These
studies confirm anecdotal reports that while government regulatory
bodies issue directives to companies about what kinds of content should
be controlled, the finer details of implementation are left to the
companies (Reporters Without Borders, 2007). The regulatory goal does
not appear to be 100 percent deletion of all information or opinions
portraying various parts of the government and Chinese Communist
Party (CCP) in a negative light — a goal that is in any case unachievable
on the Internet (MacKinnon, 2008a). Rather, rewards and punishments
are meted out based on the extent to which Internet companies
successfully prevent groundswells of public conversation around
politically inflammatory topics that might inspire a critical mass of
people to challenge Communist Party authority (Weiquan Wang, 2008;
Reporters Without Borders, 2007).

From time to time, “keyword lists” maintained by companies — lists of sensitive words or phrases that, depending on the circumstances, may trigger removal of just the words themselves, censorship of entire postings containing those words, or an alert to human editors tasked with moderating large amounts of content — are
leaked to the public (Xiao, 2004; Pan, 2006b; Weiquan Wang, 2008).
These lists are not given directly to the companies by government
regulators; rather they are developed by the companies themselves and
sometimes shared within the industry, in reaction to on–going
advisories, warnings, and complaints from regulators (Reporters Without
Borders, 2007; Weiquan Wang, 2008). In spite of this censorship, recent
quantitative research by Chinese media scholar Ashley Esarey,
comparing material published in the Chinese blogosphere to Chinese
newspaper content, reveals that the blogosphere is a much more
freewheeling space than the mainstream media.

“Compared to the content of mainstream, traditional media, blogs are much more likely to contain opposing perspectives and criticism of the state.”

Also notable “was the absence of regime propaganda in the blogosphere.” [1] In order to evade BSPs’ internal censors, Chinese
bloggers frequently deploy satire, euphemisms, literary allusions, vague
or coded phrases, and even graphics to convey critical messages
(Esarey and Xiao, 2008). Chinese bloggers who write about current
events and public affairs are also inclined to be realistic and pragmatic when deciding what to write, judging which kinds of postings would bring too much trouble and are not worth the risk (Pan,
2006a).

As of this writing, no systematic testing of BSP censorship appears to have been conducted in China or anywhere else. This paper is a first
attempt to fill in that blank space. By testing fifteen different BSPs
serving the Chinese market, our research team sought to answer three
questions: How great is the variation in the quantity of material
censored by different Chinese BSPs? To what extent are conversations
about politically sensitive subjects able to survive in the Chinese
blogosphere in spite of censorship? How — and to what extent — do
censorship methods vary from company to company?

Our tests yielded some interesting answers: First, censorship levels across 15 different BSPs varied even more than expected. Second, a
great deal of politically sensitive material survives in the Chinese
blogosphere, and chances for survival can likely be improved with
knowledge and strategy. Third, censorship methods vary greatly from
company to company, implying that companies do have at least some
ability to make strategic choices. These choices are not only about how
to balance relationships with government and users, but also about the
extent to which BSPs value user rights and interests.

3. Testing methodology

Since no similar testing of blog censorship appears to have been done in China or anywhere else, our research team developed a new methodology through trial and error. We hope these methods will be refined and
improved upon as more research is done on how user–generated
content is censored by Internet companies.

The testing of Internet filtering at the Internet service provider (ISP) and network level — such as the filtering tests conducted by the
OpenNet Initiative in China and around the world — is a largely
automated process relying heavily on network interrogation by special
software applications. Results are analyzed with the help of computer
programs. While the initial list of URLs and keywords for testing must be
selected by Chinese speakers with up–to–date knowledge of China’s
political and social situation, the actual testing can be carried out by
computers and the analysis can be done largely by automated processes
and non–Chinese speakers (Open Net Initiative, 2005). Testing of
censorship by blog hosting services, however, is a different matter. This
kind of testing can only be done by people with a high level of Chinese
reading comprehension, in addition to a good understanding of China’s
contemporary socio–political developments. Our team found no way to
automate the process, making it very time–consuming.

After roughly two months of experimentation, refinement, and discussion, our team devised a time–consuming methodology that we
followed over an eight–month testing period, from early February 2008
till late September 2008. Tests were conducted intensively in February–
April and July–September [2]. The first testing period included the mid–
March Tibet demonstrations, crackdown, and aftermath. The second
testing period covered the run–up and aftermath of the Olympics as well
as the beginning of the “tainted milk scandal” in which a vast amount of
Chinese milk products were found tainted with the chemical melamine.

Anonymous author accounts were created on 15 different commercial BSPs catering to mainland Chinese bloggers. In alphabetical order they
are: Baidu, Blogbus, BlogCN, iFeng, Mop, MSN Live, MySpace, Netease,
QZone, Sina, Sohu, Tianya, Tom, Yahoo! China, and YCool. Passages of one to three paragraphs from a variety of sources were posted across all 15
BSPs.

After consultation with Chinese journalists, bloggers, and media experts, we designated 50 subject categories that merited testing (see Table 2 for a breakdown of those categories). Throughout the testing period,
articles or excerpts of articles related to these 50 subject categories, published online in a wide variety of sources, were selected: blogs, forums, overseas dissident Web sites, mainstream Chinese news
sites (such as sina.com, sohu.com, and xinhuanet.com — the Web site
of the official government mouthpiece, the Xinhua News Agency), and
overseas Chinese-language news sites (such as the Financial Times Chinese site (http://www.ftchinese.com/), the BBC Chinese Web site (http://www.bbc.co.uk/chinese/), Reuters Chinese (http://cn.reuters.com/), and the Wall Street Journal Chinese (http://chinese.wsj.com/)). While we tested one article excerpt
about the banned religious group Falun Gong and one article excerpt
related to the 1989 Tiananmen Square crackdown in order to observe
how the different Web services handle the most sensitive subjects, these
subjects were not the focus of this study. Chinese bloggers inside China
generally expect those subjects to be censored and reprisals for
discussing these topics openly online are also widely feared. While this
reality is a serious affront to Chinese bloggers’ freedom of speech, it is
already well known that discussion of these subjects on the mainland Chinese Internet is not possible (Sabbagh, 2006). The aim of this study
was to focus on subjects bloggers inside China are more likely to be
trying to write about, and thus to get a better sense of where the
boundaries lie and how far they can be pushed.

After selecting a piece of content for testing, I posted it on an internal password–protected Web site specifically set up for censorship test
management. The excerpt or “content unit” selected for testing was
given a unique number and posted into the Web site, along with the URL
of the original article, followed by the full text of that article in case it
failed to remain at the original Web site. Members of our testing team
would then follow a set procedure for each content unit, testing that
content on each of the 15 BSPs in the following manner:

1. Log into the blog of one BSP; copy and paste the content unit into the “back end” edit window of the blog; take a screenshot.
2. Hit “publish.” If the content was blocked from publication or “held for moderation,” take a screenshot of whatever error message or other message appeared.
3. If the content was not blocked from publication, take a screenshot of what the blog post looks like when the author is logged in to the system.
4. Log out and check whether the content is still visible to the public, not just to the logged–in author. Take a screenshot.
5. Check back 24–48 hours later to see if the blog post is still visible. Take a screenshot showing either the still–visible post, the error message saying “this page does not exist,” or whatever else can be seen.
6. Access the blog post on a mainland Chinese ISP to see whether it is accessible (i.e., not filtered) on at least one Chinese domestic ISP [3]. Take a screenshot. (This step was primarily a way to check whether any of the services were geo–filtering their material, as will be discussed further in the “Findings” section.)
7. Upload all of these screenshots into a database according to the unique number assigned to the content unit, along with descriptive comments noting any interesting or unusual circumstances surrounding the situations in which censorship occurred.
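The per-unit procedure above lends itself to simple record keeping. The following sketch shows one way the resulting data might be structured; all class and field names are hypothetical illustrations, not the study's actual database schema:

```python
# Sketch of the records the testing procedure produces: one content unit,
# tested on each of the 15 BSPs, with outcomes logged at each step.
# Names here are hypothetical illustrations, not the study's real schema.
from dataclasses import dataclass, field

@dataclass
class TestResult:
    bsp: str                    # blog service provider tested
    blocked_on_publish: bool    # error message shown on clicking "publish"
    held_for_moderation: bool
    visible_to_public: bool     # still visible after logging out
    visible_after_48h: bool     # the 24-48 hour follow-up check
    screenshots: list = field(default_factory=list)

@dataclass
class ContentUnit:
    unit_id: int                # unique number assigned to the excerpt
    source_url: str             # URL of the original article
    categories: list            # one or more of the 50 subject tags
    results: list = field(default_factory=list)   # one TestResult per BSP

    def censored_count(self) -> int:
        """How many BSPs ultimately kept this item from public view."""
        return sum(1 for r in self.results if not r.visible_after_48h)
```

A structure like this makes the study's per-item tallies (how many of the 15 BSPs censored a given unit) a simple aggregation over the logged results.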

This testing process was highly labor–intensive and time–consuming; one initial test “round,” not including the follow–up checks, would usually take an experienced tester approximately three to four hours. Test
rounds that could not be completed in under a week were thrown out as
invalid.
Overall 124 tests were conducted, with 16 thrown out as invalid due to
tester error or inconclusive results, leaving 108 valid. At least one item
was tested per subject category, with greater emphasis placed on
subjects related to current events. Some items were tagged with more
than one category since we were testing excerpts of articles and blog
posts whose subjects often didn’t fall neatly into a single category.
Articles or excerpts of articles were used rather than simple sentences or keywords because censorship conducted by at least some BSPs is not an entirely automated process, but involves manual checking by a staff member to determine the context in which certain sensitive phrases or
keywords are used. Greatest emphasis was placed on “sudden incidents”
— a Chinese euphemism for breaking news stories of a sensitive nature
such as a demonstration, riot, act of violence, or manmade disaster —
because bloggers who are interested in public affairs tend to be most
interested in current events and breaking news, and breaking news
(along with the spikes of conversation that develop around some
breaking news topics) is also of greatest interest to government
regulators. Altogether, 23 of the test items were tagged as “sudden
incidents.” Ten tests related to Tibet in some way and 16 tests related to
the Olympics. Corruption–related subjects were also emphasized, with
15 tests.

4. Findings

Finding 1: The extent of censorship by each BSP varies drastically.

A certain amount of variation in censorship levels from BSP to BSP was expected, based on casual observations and reports of bloggers’
experiences with censorship that inspired this study in the first place.
We did not, however, expect the drastic variation that we found.

As can be seen in Figure 1, out of 108 valid tests conducted across 15 blog hosting services, the most vigorous company censored 60 blog posts (56 percent of the total). The second most vigorous company censored 44 (41 percent), and the third most vigorous censored 34 (32 percent).
At the other end of the scale, the least vigorous blog host censored only
one piece of content, the second most liberal censored only three, and
the third most liberal censored nine.
 

Figure 1: Instances of censorship on 15 different Blog Service Providers (BSPs) (company names omitted due to concern for government repercussions).

The names of the BSPs have not been published alongside these
aggregate results. People who work in the Chinese Internet sector have
expressed strong concern that published results of an independent
academic study showing who censors more than whom could be used as
a tool for reward and retribution by regulating authorities. The purpose
of this study is to increase global understanding of how Chinese BSP
censorship works. Naming company names alongside aggregate
censorship results would certainly enable deeper examination of why
certain companies censor a great deal more than others. However, this
benefit is outweighed by the costs, not only to individuals working in
Chinese Internet companies, but also to companies that are clearly
trying to respect their users’ interests and rights over regulatory
demands to the extent possible while not losing their business licenses.

Even without naming companies along with their censorship rankings, the results are revealing. The wide variation in levels of censorship
confirms that censorship of Chinese user–generated content is highly
decentralized, and that implementation is left to the Web companies
themselves. Thus it has so far been possible for at least some Chinese Web companies (even some large, popular brands) that run user–generated content platforms to remain in business despite inconsistent
and patchy levels of censorship.

Finding 2: Some politically sensitive material can survive in the Chinese blogosphere.

A substantial amount of politically critical content survives in the Chinese blogosphere. Not a single content item was censored by all 15
BSPs. Even the item about Falun Gong was censored by only 13 BSPs —
the highest number of BSPs to censor the same item. An item on Tibet
independence was censored by 12 BSPs. An excerpt of the Dalai Lama’s
open letter to the Chinese people and an item about the “Tiananmen
mothers” Web site were both censored by nine BSPs. An item about a
riot in Weng’an county, Guizhou province was censored by seven BSPs.
A commentary mocking China’s legislature, calling it an entertainment
show, was censored by four BSPs. An item on the growing gap between
rich and poor in the Chinese countryside was censored by two BSPs.

Table 1 breaks down how many content units tested were censored by
how many BSPs. Of 108 valid tests, 21 content items were censored by
none of the 15 BSPs. Twenty–four content items were censored by only
one BSP [4].

Table 1: Tests broken down by number of censored results (15 BSPs tested).

Same item censored by 0 of 15 BSPs: 21 tests
Same item censored by 1 of 15 BSPs: 24 tests
Same item censored by 2 of 15 BSPs: 15 tests
Same item censored by 3 of 15 BSPs: 6 tests
Same item censored by 4 of 15 BSPs: 13 tests
Same item censored by 5 of 15 BSPs: 4 tests
Same item censored by 6 of 15 BSPs: 8 tests
Same item censored by 7 of 15 BSPs: 6 tests
Same item censored by 8 of 15 BSPs: 3 tests
Same item censored by 9 of 15 BSPs: 3 tests
Same item censored by 10 of 15 BSPs: 3 tests
Same item censored by 11 of 15 BSPs: 0 tests
Same item censored by 12 of 15 BSPs: 1 test
Same item censored by 13 of 15 BSPs: 1 test
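The distribution in Table 1 can be tallied as an arithmetic consistency check; the figures below are taken directly from the table:

```python
# Distribution from Table 1: {number of BSPs censoring an item: count of items}.
distribution = {0: 21, 1: 24, 2: 15, 3: 6, 4: 13, 5: 4, 6: 8,
                7: 6, 8: 3, 9: 3, 10: 3, 11: 0, 12: 1, 13: 1}

total_items = sum(distribution.values())            # 108 valid tests in all
censored_somewhere = total_items - distribution[0]  # 87 items censored by at least one BSP
# Total censorship instances summed across all 15 BSPs:
total_instances = sum(n * count for n, count in distribution.items())  # 340
```

The rows account for all 108 valid tests; 87 of the 108 items (roughly 81 percent) were censored by at least one BSP, while no single item was censored by more than 13 of the 15.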

Automated keyword–based censorship systems sometimes over–compensated, causing a Xinhua News Agency article about President Hu
Jintao’s inspection tour of coal–fired power plants to be censored by four
BSPs [5].

As mentioned in my earlier description of our testing methodology, content items — eventually posted on 15 BSPs — were catalogued into 50 subject categories (see Table 2).

Table 2: 50 subject categories.

1. Sudden incidents
2. overseas political events
3. Olympics
4. historical issues
5. leftist critiques
6. military/security
7. foreign policy
8. anti–Japanese
9. anti–U.S.
10. N. Korean refugees
11. foreign trade/investment
12. finance and economy
13. problems inside govt. ministries
14. corruption
15. relocation
16. environment
17. Three Gorges Dam
18. HK politics
19. Taiwan politics
20. Taiwan independence
21. Macau politics
22. AIDS
23. health
24. crime
25. city govt. policies
26. provincial govt. policies
27. natl. govt. policies
28. media/tech policy
29. national leaders
30. provincial or city leaders
31. local leaders
32. religion
33. minorities
34. political reform
35. “rights defense”
36. critiques of govt. human rights policy
37. political arrests
38. independence movements
39. regime change
40. Falun Gong
41. natural disasters
42. economic measures
43. dissidents (not jailed)
44. censorship/surveillance
45. opposition parties
46. National People’s Congress
47. labor issues
48. migrant workers
49. economic disparity
50. Tibet

Items tagged with “crime,” “foreign trade and investment,” and “anti–
Japanese” were not censored by any BSP. Posts related to Falun Gong,
the 1989 Tiananmen crackdown, and Tibet independence were heavily
censored, as expected. Items labeled with the following subject
categories also received heavy censorship (more than 10 instances in
which a post with one of these labels was censored): sudden incidents,
Olympics, Tibet, corruption, dissidents, ethnic minorities, national
leaders, political arrests, labor issues, censorship/surveillance, critiques
of government human rights practices, independence movements,
involuntary relocation, “rights defending” cases, historical issues, and
misc. religious matters. Contrary to expectation, content items tagged
“Taiwan politics” and even “Taiwan independence” were lightly censored
— perhaps due to the fact that Taiwan elected a more moderate
president in the spring of 2008 as well as the fact that pro–Taiwan
independence politicians were plagued by corruption scandals, with both
developments being reported openly in the mainland Chinese media.

In our effort to discover where the censorship fault lines lie, testing
emphasized topics related to controversial current events and breaking
news. Of the 23 tests of content related to “sudden incidents,” subjects
included: developments in the poisoned milk powder case, local protests
or clashes with police (including Tibet but also in other places),
explosions in Xinjiang, the knifing of tourists in Beijing during the
Olympics, natural or man–made disasters. Two BSPs censored 14 of 23
posts tagged with “sudden incidents,” and most censored at least five
posts, although one BSP censored only one and one BSP censored none
at all. Censorship was very inconsistent in terms of which BSPs censored
what content. For example: four items related to the melamine poisoned
milk powder scandal were tested. Two of the items were censored by
two BSPs, one was censored by three BSPs, and one was censored by
four BSPs. One BSP (iFeng) censored all four items; one BSP (Tianya)
censored two of the four items (including a report which originated from
the Web site of China’s national state–run television, about a visit by
Premier Wen Jiabao to a store selling milk powder); Sina, MSN, Blogbus,
Mop and Myspace censored one each.

Below are four more examples of just how inconsistently certain hot
topics were handled across different BSPs:

Tibet: Ten tests contained Tibet–related content. Thirteen of the 15 BSPs censored at least two Tibet–related items. The most vigorous BSP censored eight out of 10 posts. Two BSPs censored none of the Tibet–related items tested (See Figure 2 below).

Olympics: Sixteen tests contained Olympics–related content (including discussion of a Chinese female gymnast’s age, pressures felt by athlete Liu Xiang, lip–synching at the opening ceremonies, arrest of old ladies who applied to protest, etc.). The most vigorous BSP censored eight, while four censored none, and three censored only one of the 16 posts (See Figure 2 below).

Corruption: Fifteen tests contained material about corruption at various levels of government. The most vigorous BSP censored nine entries; one BSP censored none; all others censored between one and four posts (See Figure 2 below).

Dissidents: Five tests contained material about political dissidents. All services censored at least one post; the most vigorous censored four posts (See Figure 2 below).

Figure 2: Censorship of different kinds of content.

Some content directly opposing Communist Party rule was published successfully on all fifteen BSPs. One example was an excerpt of a hard–hitting essay by Bao Tong, former secretary to the late Zhao Ziyang, who was deposed in 1989, in which Bao argued that the Olympics were being used to bolster the one–party state and would not bring China any closer to democracy as many foreigners had hoped. While Bao’s name
was removed from the excerpt for testing purposes (to see whether the
language itself rather than the author’s notoriety would trigger
censorship), it contained references to democracy and the Universal
Declaration of Human Rights, and criticized the single–party regime
throughout. Another example was an excerpt of an essay by the Guangzhou–based blogger Yang Hengjun discussing why a multi–party political
system with regular democratic elections is preferable to China’s current
political system. None of the 15 BSPs tested censored either of those
two posts.

Finding 3: There is wide variation in censorship methods.

Censorship methods vary substantially. What’s more, sometimes companies use different methods depending on the specific nature of
the content. Some companies also choose to be more transparent or
honest than others when communicating with users whose blog posts
have been censored. The basic censorship methods can be categorized
as follows:

1. Tester is prevented from posting at all — Upon clicking “publish,” the tester is presented with an error message of some form, with varying degrees of explanation but usually implying that the content was sensitive in some way (See Figure 3). Details are never given explaining what exactly the offending content was or why it was un–publishable. Industry sources have confirmed that posts
censored in this way are blocked via an automated system triggered by
keywords, phrases, or even whole passages that are plugged into the
system by administrators. This method was used at least once by 11 of
15 BSPs: Baidu, BlogCN, Mop, iFeng, Myspace, Netease, QZone, Sina,
Tianya, Tom, and Yahoo China.

Figure 3: Screenshot of Baidu error message:
“Sorry, publication of your article
has failed. Your article contains
inappropriate content, please
check.”
The article is about a clash
between police and residents of a
small city in Yunnan province,
taken from the official Xinhua
News Agency.

2. Post is “held for moderation” — Upon clicking “publish,” the tester is presented with a message indicating that the content is being held
for approval, apparently the result of an automated process triggered by
the use of keywords. This often happened on the same services that
have also prevented publication of other posts, indicating that some
services categorize different types of content at different sensitivity
levels, to be handled differently. In some cases the content held for
moderation was eventually published, indicating that a human being
reviewed it and determined that the content was acceptable. In other
cases the content was “held for moderation” indefinitely (See Figures
4a and b). This method was used at least once by 10 of 15 BSPs: Baidu,
BlogCN, iFeng, Mop, Netease, QZone, Sina, Tianya, Tom, and Yahoo!
China.

Figure 4a: Screenshot of iFeng notice (on article advocating Maoism):
“Your blog article has been
submitted, it needs approval
before appearing, thank you.”

Figure 4b: Screenshot of iFeng front page private view (when author is logged in).
Headline appears with a note
alongside it: “Sorry, this blog
article is undergoing approval.”
Headline and note don’t appear in
public view (when author is logged
out). Post remains in moderation
forever; never appears in public
view.

3. Post is published in “private view,” but is never visible to the public — The tester is able to publish the post, and it is visible on the
blog’s front page but only to the author when the author is logged in to
the system. The post cannot be seen when the author logs out of the
system and it is not publicly visible. This technique was used less
frequently, at least once by three of the 15 BSPs: Mop, Netease, and
Tom.

4. Post is successfully published at first, but deleted or “unpublished” some time later — usually within approximately 24
hours, although over weekends it could sometimes take as long as two
days before a blog post would be taken down (See Figures 5a and b).
Industry sources have confirmed that in these cases the content is
flagged by the internal software system due to the presence of
keywords; it is then reviewed by someone who then decides whether to
remove, or un–publish, the post in question. This method was utilized at
least once by 10 of the 15 BSPs: Blogbus, BlogCN, Mop, MySpace,
QZone, Sohu, Sina, Tianya, Yahoo! China, and YCool.

Figure 5a: Screenshot of blog post on Sina.com.
Article excerpt about explosion in
Xinjiang province is successfully
published in public view.

Figure 5b: 24 hours later, this error message appears at the
same URL.

5. Sensitive keywords or phrases are replaced with “***” but the post is otherwise published — This method of censorship, an
automated process triggered by keywords, was observed on only two of
the 15 blog hosting services: Blogbus (its main form of censorship in
most though not all cases), and Yahoo China (which utilized this method
in addition to several of the others above). For example: on Blogbus,
the name of President Hu Jintao and mentions of the Tibetan
independence movement were replaced by asterisks, even when the
text we tested was copied from Xinhua News Agency propaganda
articles (See Figure 6). While replacement of sensitive keywords with
asterisks is Blogbus’ primary means of censorship, it does occasionally
remove entire posts after publication.
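The keyword-masking behavior observed on Blogbus and Yahoo! China can be illustrated with a minimal sketch of an automated filter. The keyword list and function name here are hypothetical illustrations, not drawn from any actual BSP system:

```python
# Hypothetical sketch of automated keyword masking. Real BSP keyword
# lists are maintained by administrators and are not public.
SENSITIVE_KEYWORDS = ["keyword1", "keyword2"]

def mask_keywords(post_text: str) -> str:
    """Replace each occurrence of a listed keyword with asterisks."""
    for kw in SENSITIVE_KEYWORDS:
        # One asterisk per masked character, matching the observed
        # behavior of replacing parts of a phrase with "***".
        post_text = post_text.replace(kw, "*" * len(kw))
    return post_text
```

A filter this simple would mask keywords even in officially sanctioned text, which is consistent with the finding that Blogbus masked phrases copied verbatim from Xinhua articles.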

Figure 6: Blogbus replaces some of the characters from the phrase
“Tibet independence” with “***”.
Article is excerpted from BBC
Chinese, about how some foreign
Tibet independence protesters
were kicked out of China.

6. The content is successfully published, but blocked to viewers attempting to read it from inside mainland China — Only Windows
Live, the blog hosting service run by Microsoft, utilized this method
(See Figure 7a and b). This method is only possible when the content
itself is hosted on servers outside China.
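In principle, this kind of geographically targeted blocking requires only a server-side check of the visitor’s IP address before serving flagged content. The following is a speculative sketch of the concept, not Microsoft’s actual implementation; the flagged-post set and the IP-based country lookup are invented for illustration (real systems would use a geolocation database):

```python
# Speculative sketch of geo-targeted blocking. The flagged-post set and
# the country lookup below are hypothetical placeholders.
FLAGGED_POSTS = {"post-123"}

def country_of(ip: str) -> str:
    # Placeholder: a real service would query a geolocation database.
    return "CN" if ip.startswith("202.") else "OTHER"

def serve_post(post_id: str, visitor_ip: str) -> str:
    """Serve the post, or drop the connection for mainland visitors."""
    if post_id in FLAGGED_POSTS and country_of(visitor_ip) == "CN":
        return "CONNECTION_CLOSED"  # mimics the observed error behavior
    return f"content of {post_id}"
```

Because the decision is made per request, the same URL remains fully visible to readers outside mainland China, which matches the behavior shown in Figures 7a and 7b.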

Figure 7a: Excerpt of an article about the poisoned milk powder scandal, publicly visible via a Hong Kong ISP.

Figure 7b: “Connection close” message appears when attempting
to access the same URL from a
Mainland Chinese ISP.

Microsoft’s approach to geographically targeted censorship without actually removing content on Windows Live is the result of controversy
over the company’s deletion of a popular Chinese blog in late 2005.
After facing substantial international criticism by human rights groups,
the press, and the U.S. Congress, Microsoft agreed with critics that
outright deletion of content — rendering it inaccessible to global
audiences, not just Chinese audiences — was not an acceptable practice
for a global blog hosting platform. On the other hand, Microsoft
executives in China had been informed that the service would be
blocked on Chinese ISPs if they took a zero–censorship approach to
politically sensitive content. The result was a compromise: objectionable
content would be blocked to visitors using mainland Chinese Internet
connections, but would remain published and visible everywhere else
(MacKinnon, 2008a).

Some BSPs used primarily one method, but most services utilized a
combination of different methods: blocking at publication stage, holding
for moderation, or removal after publication. This indicates that many
BSPs have set up complex systems in which different sets of keywords
fall into different categories of sensitivity.
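The tiered behavior inferred from the test results can be sketched as a simple dispatch on keyword categories. The keyword sets and return labels below are hypothetical illustrations of the idea, not actual BSP internals:

```python
# Hypothetical sketch of tiered keyword handling: different keyword
# categories trigger different actions, as the test results suggest.
BLOCK_KEYWORDS = {"blockme"}     # highest sensitivity: reject outright
MODERATE_KEYWORDS = {"holdme"}   # medium sensitivity: queue for review

def handle_post(text: str) -> str:
    """Classify a post into one of the observed censorship outcomes."""
    words = set(text.split())
    if words & BLOCK_KEYWORDS:
        return "REJECTED"          # publication prevented at submission
    if words & MODERATE_KEYWORDS:
        return "HELD_FOR_REVIEW"   # queued for human moderation
    return "PUBLISHED"
```

A post that reaches "PUBLISHED" could still be flagged later for human review and removal, which would correspond to the delete-after-publication method described above.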

Different services also chose to explain — or not explain — censorship to their users in different ways. Nearly all BSPs censored an item about the “Tiananmen mothers” (one of the few items censored by most but not
all services), but different methods were used. Many but not all services
blocked the content from publication, using pop–up messages similar to
the one which appeared when a tester attempted to publish an article
excerpt about the “Tiananmen mothers” on a MySpace blog: “Sorry,
your post contains inappropriate content. Please delete the
inappropriate content and repost, thanks.” QZone, on the other hand,
allowed the post to be published in “private view” (visible only to the
author when logged in) but the post is not publicly visible: in its place
appears a message: “This message is being previewed, which will take 3
working days. Once approved it will be possible to view normally.” The
post never appears.

The blogging platforms of Sina.com and Sohu.com, both major Beijing–based Internet companies, appeared to employ a combination of internal
keyword flagging followed by human decision–making most of the time.
Politically sensitive posts that were successfully published at first
vanished from the blog within 24 hours. Interestingly, each company
handled this de–publication differently. Sina simply removed the post
without explanation. Attempts to access the post’s original URL
produced an error message saying: “Sorry the blog address you have
visited does not exist.” Sohu, on the other hand, un–published and
demoted the post to draft status, enabling the original content to be
retrieved by the author through the back–end author interface even
though it could no longer be publicly seen. Posts that were un–published
in this manner were also flagged with an explanatory message saying:
“This blog post has been hidden.” A box immediately below it states:
“Dear Sohu Blog Friend: Hello! We’re very sorry to inform you that this
blog post, because of certain reasons, is not suitable for publication, and
has been locked. You can access the original text and photos from this
page. Thank you for your understanding and support of Sohu. Sohu
Customer Service is available to you twenty–four hours per day at:
[provides phone number and e–mail address].” At the blog post’s
original URL, visitors are met with an error message that says: “This
blog post has been hidden.”

These examples of how MySpace, QZone, Sina and Sohu handle censorship differently demonstrate that Chinese blog hosting companies
can and do make choices about how honestly they communicate with
users about censorship. Sohu’s honesty was noted by blogger–lawyer
Liu Xiaoyuan, who in 2007 had tried unsuccessfully to sue Sohu for
censoring content that Liu believed did not violate the terms of service.
In December 2008 Liu wrote:

“In the past I sued Sohu for deleting my blog posts, but now I want to praise them.
Sohu is the only BSP [blog service
provider] that posts notices on my blog
saying, ‘this post has been
hidden/removed for certain reasons.’ As a
result, when web users visit my Sohu blog,
they can know that a post has been hidden
by Sohu. I think Sohu is brave to do this. I
also run blogs on Sina, ifeng, and others,
but they simply delete blog posts without
notifying my readers.” (MacKinnon, 2008d)

Thus, while Chinese companies are not able to defy government censorship demands altogether, test results show that they have a
meaningful amount of leeway: not only in terms of how they respond to
these demands, but also how they communicate with users about
censorship of their work. It is unclear how much impact these differences have on user decisions about which blog hosting platform to patronize; since they are not advertised, it is not clear how many users are even aware of them. It’s also
unknown what percentage of Chinese bloggers have had at least one
blog post censored. Given that Chinese Internet users consider many
factors when deciding which Web service to use (such as what their
friends are using, attractiveness and user–friendliness of the user
interface, other useful services and entertainment provided, etc.) it is
difficult to speculate on whether Chinese Internet users would gravitate
in large numbers to BSPs that have a reputation for lighter censorship
and greater transparency. Further research needs to be done.

5. Conclusions

Based on follow–up interviews with people in the Chinese Internet industry who spoke on condition of anonymity, there seem to be a
number of reasons for the wide variation in censorship practices. Some
companies appear to be able to “get away” with less censorship than
others. According to interviewees, this is due to a number of factors. It depends on where the company is registered, because (as discussed
in Section 3) a Web company is generally answerable to regulators at
the city or provincial level in the city or province where the company is
registered. It depends on how large and successful a company is, how
much public attention its front–page portal receives, and how much
competitive pressure the company is under. It also depends on who
owns the company, who its investors are, and how much political
pressure they are feeling in relationship not only to the Web business
concerned but also other technology or media businesses they may have
at play in the Chinese market. Finally, interviewees claimed that the
amount of censorship carried out by a given BSP — and the way in
which it is implemented — depended on the character, background,
interests, and courage of individual editors hired to manage a given Web
company’s blog portal. In several cases interviewees cited editors’ and
managers’ journalism background as a factor: when asked why
company X censored its blogs significantly less than company Y, one
theory given was that the blog portal editor for company X was a
journalist while the blog portal editor for company Y had a background
in technology and business. All of these explanations, however, are only
hypotheses, requiring more systematic research in order to be proven
conclusively.

What is interesting about these explanations is that they paint a picture


of domestic Chinese Internet censorship that in many ways mirrors the
way that China’s traditional print media is controlled and regulated. It is
well documented by scholars of the Chinese news media that some
Chinese newspapers and magazines manage to “get away” with much
more hard–hitting investigative stories than others. The reasons depend
on various factors such as: where the head office of the publication or
TV station is located; who owns the media property in question; who is
the editor–in–chief, how well connected he or she is with powerful
people in the government, what are his or her personal values and
agendas, etc. (Bandurski, 2008a; Hassid, 2008). In other words, despite
the authoritarian nature of the Chinese state, people working in both old
and new media are not entirely powerless: choices, values, and actions by individual journalists have been shown to have an impact on
the quality and range of information and ideas available to the Chinese
people. It would appear that in China’s new media, individual values and
choices similarly help to shape the extent to which Chinese netizens are
able to engage in an informed public discourse. On the other side of the
coin, private and non–government actors collaborate with the
government to muzzle each other for various reasons of self–interest,
expediency, and even patriotism.

The findings of this study point to the need for more study — both of
Chinese domestic Internet censorship as well as censorship in other
countries. This study was highly experimental and limited in its scope,
timeframe, and resources. Larger–scale testing would help to shed
greater light on the way in which different kinds of content are
censored. This study was limited to blogs; it did not test other forms of social networking Web sites, chatrooms, bulletin board systems (BBS) — which are extremely popular and influential on the Chinese Internet — instant messaging, or mobile services. Surveys of Internet company
employees would help to shed better light on the reasons behind the
wide variation of censorship practices. Surveys of Chinese bloggers and
Internet users would help us to answer questions such as: How often do
average Chinese bloggers encounter censorship? What do they think
about it? How have they reacted and in what ways have they modified
their online behavior after encountering censorship? How do censorship practices affect their loyalty towards a particular BSP?

There are also some global research questions: Where else in the world
is this kind of political censorship by web service companies of user–
generated content happening? Companies in the West already censor for
child porn, copyright violations and sometimes hate speech, but to what
extent are Web companies in other countries besides China
systematically complying with government demands to delete politically
sensitive material? Will the “Censorship 2.0” model — in which
governments demand censorship by Web companies — spread globally?
Given how difficult it is to carry out such censorship consistently and
effectively, and how much staff time and resources must nonetheless be
taken up in attempting to implement it, would it be in companies’
commercial interests to resist or reject government efforts to delegate
censorship to companies? Further research is needed in order to
understand the global trends and emerging practices.

Improved knowledge of China’s domestic censorship system also helps to inform the work of activists working for greater freedom of speech,
not just in China but also around the world. When a substantial amount
of content is being deleted from the World Wide Web, promoting
circumvention — while important — does not fully address the problem
of Internet censorship. Bloggers and Internet users would benefit from
more systematic information about strategies for successfully
disseminating politically sensitive information via domestic Web sites on
which content is frequently censored. Finally, free expression advocates
should consider how an “Internet user rights” movement might push for
greater transparency and accountability by Internet companies. The
Vietnamese government has already announced that it will roll out Internet industry regulations similar to China’s [6]. Global Internet
companies have received censorship requests from governments around
the world, including democracies such as Thailand, Turkey, and India.
Consumers will need to put greater pressure on their service providers
to resist government attempts to delegate political control and manipulation to the private sector.

About the author

Rebecca MacKinnon is an Assistant Professor at the Journalism and Media Studies Centre, University of Hong Kong, where she teaches online journalism. Her research and writing focus on issues of online
free speech, censorship, and citizen media, with an emphasis on China.
She is co–founder of Global Voices, an international citizen media
network, and a founding member of the Global Network Initiative. She is
also a former CNN bureau chief in Beijing and Tokyo.
E–mail: rmack [at] hku [dot] hk
 

Acknowledgements

This study was made possible by a seed research grant from the
University of Hong Kong. I am also grateful for the support and
encouragement of Prof. Yuen–Ying Chan, Director of the Journalism and
Media Studies Centre.

Research assistants John G. Kennedy and Shun–yi Lai spent countless hours conducting censorship tests. John G. Kennedy also provided useful
feedback and many insightful ideas. Ben Cheng custom–built our test–
management and database system, helped to manage the testing work,
and contributed numerous ideas on the testing methodology. Without
the hard work and dedication of these three people this project could
not have been completed.

Notes

1. Esarey, 2008, p. 10.

2. The break was caused by the primary investigator’s unavailability; ideally testing would have been continuous throughout the February–September period.

3. This was done either via mainland Chinese proxies or Chinese Tor exit
nodes, or by people physically in mainland China when possible and
practicable. See Villeneuve (2008) for more details about use of Tor
nodes as a method of testing Chinese ISP behavior.

4. The distinction of being sole censor of one particular piece of content was shared by eight different BSPs in different tests.

5. Leaders’ names are often included in automated censorship systems, especially in cases where a company does not have adequate staff for
manual vetting of content before deletion.

6. “Vietnam to regulate blogging” (2 December 2008), at http://chao-vietnam.blogspot.com/2008/12/vietnam-to-regulate-blogging.html,
accessed 26 January 2009.

References
Oliver August, 2007. “The Great Firewall: China’s Misguided — and
Futile — Attempt to Control What Happens Online,” Wired (23 October),
at http://www.wired.com/politics/security/magazine/15-
11/ff_chinafirewall, accessed 4 January 2009.

David Bandurski, 2008a. “Garden of Falsehood,” Index on Censorship, volume 37, issue 2 (May), pp. 45–54. http://dx.doi.org/10.1080/03064220802081571

David Bandurski, 2008b. “China’s Guerrilla War for the Web,” Far Eastern Economic Review (July),
at http://www.feer.com/essays/2008/august/chinas-guerrilla-war-for-
the-web, accessed 4 January 2009.

David Bandurski, 2007. “State Council vice–minister reiterates control as top priority of Internet development in China,” China Media Project (4
December), at http://cmp.hku.hk/2007/12/04/763/, accessed 4 January
2009.

China Internet Network Information Center (CNNIC), 2008. 22nd Statistical Survey Report on the Internet Development in China (July),
abridged English version
at http://www.cnnic.cn/en/index/0O/02/index.htm; full Chinese
language version at http://www.cnnic.cn/index/0E/00/11/index.htm,
accessed 4 January 2009.

China Internet Network Information Center (CNNIC), 2007. 2007 Nian Zhongguo Boke Shichang Diaocha Baogao [2007 Report on the Chinese
blog market] (December); in Chinese only
at http://www.cnnic.cn/html/Dir/2007/12/26/4948.htm, accessed 4
January 2009.

R. Clayton, S. Murdoch, and R. Watson, 2006. “Ignoring the Great Firewall of China,” In: G. Danezis and P. Golle (editors). Privacy Enhancing Technologies 2006, LNCS 4258 (December), pp. 20–35.

Mure Dickie, 2007. “China traps online dissent,” Financial Times (12 November), at http://www.ft.com/cms/s/0/ef0e7d64-9138-11dc-9590-
0000779fd2ac.html, accessed 4 January 2009.

Ashley Esarey, 2008. “Political Discourse in the Chinese Blogosphere: A Quantitative Analysis,” paper presented at the 6th Annual Chinese
Internet Research Conference (13–14 June, University of Hong Kong,
Hong Kong).
Ashley Esarey and Qiang Xiao, 2008. “Below the Radar: Political
Expression in the Chinese Blogosphere,” Asian Survey, volume 48,
number 5 (September/October), pp. 752–772. http://dx.doi.org/10.1525/AS.2008.48.5.752

James Fallows, 2008. “The Connection Has Been Reset,” Atlantic (March),
at http://www.theatlantic.com/doc/200803/chinese-firewall, accessed 4
January 2009.

K.O. Ha, 2006. “Piercing China’s Firewall: Hackers, Activists, Challenge Beijing’s Internet Police,” San Jose Mercury News (2 July).

Jonathan Hassid, 2008. “Controlling the Chinese Media: An Uncertain Business,” Asian Survey, volume 48, number 3 (May/June), pp. 414–430. http://dx.doi.org/10.1525/as.2008.48.3.414

Human Rights Watch, 2006. Race To the Bottom: Corporate Complicity in Chinese Internet Censorship (August), at http://www.hrw.org/reports/2006/china0806/, accessed 4 January 2009.

Naomi Klein, 2008. “China’s All–Seeing Eye,” Rolling Stone (28 May), at http://www.rollingstone.com/politics/story/20797485/chinas_allseeing_eye, accessed 4 January 2009.

Rebecca MacKinnon, 2008a. “Flatter World and Thicker Walls? Blogs, Censorship and Civic Discourse in China,” Public Choice, volume 134, numbers 1–2 (January), pp. 31–46.

Rebecca MacKinnon, 2008b. “Asia’s Fight For Web Rights,” Far Eastern Economic Review (April), at http://feer.com/essays/2008/april/asias–fight–for–web–rights, accessed 4 January 2009.

Rebecca MacKinnon, 2008c. “The Chinese Censorship Foreigners Don’t See,” Wall Street Journal Asia (14 August),
at http://online.wsj.com/article/SB121865176983837575.html,
accessed 4 January 2009.

Rebecca MacKinnon, 2008d. “Studying Chinese Blog Censorship,” RConversation (November), at http://rconversation.blogs.com/rconversation/2008/11/studying–chines.html, accessed 4 January 2009; Original Chinese blog post by
Liu Xiaoyuan
at http://blog.sina.com.cn/s/blog_49daf0ea0100bmb0.html, accessed 4
January 2009.
Alexa Olesen, 2007. “China tightens Internet controls,” Associated Press
article appearing on MSNBC.com (14 October),
at http://www.msnbc.msn.com/id/21268635/, accessed 4 January
2009.

OpenNet Initiative, 2005. Internet Filtering in China in 2004–2005: A Country Study,
at http://www.opennetinitiative.net/studies/china/ONI_China_Country_
Study.pdf, accessed 1 January 2009.

Phillip Pan, 2006a. “Bloggers Who Pursue Change Confront Fear And
Mistrust,” Washington Post (21 February),
at http://www.washingtonpost.com/wp–
dyn/content/article/2006/02/20/AR2006022001304.html, accessed 4
January 2009.

Phillip Pan, 2006b. “Keywords Used to Filter Web Content,” Washington Post (18 February), at http://www.washingtonpost.com/wp–
dyn/content/article/2006/02/18/AR2006021800554.html, accessed 4
January 2009.

Reporters Without Borders, 2007. “Journey to the Heart of Internet Censorship” (October), at http://www.rsf.org/article.php3?id_article=23924, accessed 4 January 2009.

Dan Sabbagh, 2006. “No Tibet or Tiananmen on Google’s Chinese site,” Times Online (25 January),
at http://business.timesonline.co.uk/tol/business/markets/china/article7
19192.ece, accessed 4 January 2009.

TNS Global Interactive, 2008. Key Insight Report 5: Digital World, Digital Life, Taylor Nelson Sofres, London (December),
at http://www.tnsglobal.com/news/key-insight-reports/, accessed 4
January 2009.

Lokman Tsui, 2008. “The Great Firewall as Iron Curtain 2.0: The
implications of China’s Internet most dominant metaphor for U.S.
foreign policy,” paper presented at the 6th annual Chinese Internet
Research Conference (13–14 June, University of Hong Kong, Hong
Kong), at http://jmsc.hku.hk/blogs/circ/files/2008/06/tsui_lokman.pdf,
accessed 4 January 2009.

Nart Villeneuve, 2008. “Search Monitor Project: Toward a Measure of Transparency,” Citizen Lab Occasional Paper, number 1, University of Toronto (June), at http://www.citizenlab.org/papers/searchmonitor.pdf,
accessed 1 January 2009.

Nart Villeneuve, 2006. “The filtering matrix: Integrated mechanisms of information control and the demarcation of borders in cyberspace,” First
Monday, volume 11, number 1 (January),
at http://firstmonday.org/htbin/cgiwrap/bin/ojs/index.php/fm/article/vie
w/1307/1227, accessed 4 January 2009.

Greg Walton, 2001. China’s Golden Shield. Montreal: International Centre for Human Rights and Development, at http://www.ichrdd.ca,
accessed 1 January 2009.

Weiquan Wang (Chinese Human Rights Defenders), 2008. Zhongguo wangluo jiankong yu fanjiankong niandu baogao (2007) [Annual report
on Chinese internet surveillance and censorship, and actions against
censorship and surveillance] (10 July), at http://crd–
net.org/Article/Class1/200807/20080710165332_9340.html, accessed 1
January 2009.

Qiang Xiao, 2007. “China censors Internet users with site bans, cartoon
cop spies,” San Francisco Chronicle (23 September),
at http://www.sfgate.com/cgi-bin/article.cgi?f=/c/a/2007/09/23/INCLS80LO.DTL, accessed 4 January 2009.

Qiang Xiao, 2004. “The words you never see in Chinese cyberspace,” China Digital Times (30 August),
at http://chinadigitaltimes.net/2004/08/the_words_you_n.php,
accessed 4 January 2009.

Guobin Yang, 2008. “Activists beyond Virtual Borders: Internet–Mediated Networks and Informational Politics in China,” First Monday,
Special Issue number 7: Command Lines: The Emergence of
Governance in Global Cyberspace,
at http://firstmonday.org/htbin/cgiwrap/bin/ojs/index.php/fm/article/vie
w/1609/1524, accessed 4 January 2009.

Editorial history

Paper received 17 January 2009; accepted 25 January 2009.

https://journals.uic.edu/ojs/index.php/fm/article/view/2378/2089
