How Censorship in China Allows Government Criticism but Silences Collective Expression

We offer the first large scale, multiple source analysis of the outcome of what may be the most extensive effort to selectively censor human expression ever implemented. To do this, we have devised a system to locate, download, and analyze the content of millions of social media posts originating from nearly 1,400 different social media services all over China before the Chinese government is able to find, evaluate, and censor (i.e., remove from the Internet) the subset they deem objectionable. Using modern computer-assisted text analytic methods that we adapt to and validate in the Chinese language, we compare the substantive content of posts censored to those not censored over time in each of 85 topic areas. Contrary to previous understandings, posts with negative, even vitriolic, criticism of the state, its leaders, and its policies are not more likely to be censored. Instead, we show that the censorship program is aimed at curtailing collective action by silencing comments that represent, reinforce, or spur social mobilization, regardless of content. Censorship is oriented toward attempting to forestall collective activities that are occurring now or may occur in the future and, as such, seems clearly to expose government intent.


INTRODUCTION
The size and sophistication of the Chinese government's program to selectively censor the expressed views of the Chinese people is unprecedented in recorded world history. Unlike in the U.S., where social media is centralized through a few providers, in China it is fractured across hundreds of local sites. Much of the responsibility for censorship is devolved to these Internet content providers, who may be fined or shut down if they fail to comply with government censorship guidelines. To comply with the government, each individual site privately employs up to 1,000 censors. Additionally, approximately 20,000-50,000 Internet police (wang jing) and Internet monitors (wang guanban), as well as an estimated 250,000-300,000 "50 cent party members" (wumao dang) at all levels of government (central, provincial, and local), participate in this huge effort (Chen and Ang 2011, and our interviews with informants, granted anonymity). China overall is tied with Burma at 187th of 197 countries on a scale of press freedom (Freedom House 2012), but the Chinese censorship effort is by far the largest.
In this article, we show that this program, designed to limit freedom of speech of the Chinese people, paradoxically also exposes an extraordinarily rich source of information about the Chinese government's interests, intentions, and goals, a subject of long-standing interest to the scholarly and policy communities. The information we unearth is available in continuous time, rather than the usual sporadic media reports of the leaders' sometimes visible actions. We use this new information to develop a theory of the overall purpose of the censorship program, and thus to reveal some of the most basic goals of the Chinese leadership that until now have been the subject of intense speculation but necessarily little empirical analysis. This information is also a treasure trove that can be used for many other scholarly (and practical) purposes.
Our central theoretical finding is that, contrary to much research and commentary, the purpose of the censorship program is not to suppress criticism of the state or the Communist Party. Indeed, despite widespread censorship of social media, we find that when the Chinese people write scathing criticisms of their government and its leaders, the probability that their post will be censored does not increase. Instead, we find that the purpose of the censorship program is to reduce the probability of collective action by clipping social ties whenever any collective movements are in evidence or expected. We demonstrate these points and then discuss their far-reaching implications for many research areas within the study of Chinese politics and comparative politics.
In the sections below, we begin by defining two theories of Chinese censorship. We then describe our unique data source and the unusual challenges involved in gathering it. We then lay out our strategy for analysis, give our results, and conclude. Appendixes include coding details, our automated Chinese text analysis methods, and hints about how censorship behavior presages government action outside the Internet.

GOVERNMENT INTENTIONS AND THE PURPOSE OF CENSORSHIP
Previous Indicators of Government Intent. Deciphering the opaque intentions and goals of the leaders of the Chinese regime was once the central focus of scholarly research on elite politics in China, where Western researchers used Kremlinology (or Pekingology) as a methodological strategy (Chang 1983; Charles 1966; Hinton 1955; MacFarquhar 1974, 1983; Schurmann 1966; Teiwes 1979). With the Cultural Revolution and with China's economic opening, more sources of data became available to researchers, and scholars shifted their focus to areas where information was more accessible. Studies of China today rely on government statistics, public opinion surveys, interviews with local officials, as well as measures of the visible actions of government officials and the government as a whole (Guo 2009; Kung and Chen 2011; Shih 2008; Tsai 2007a, b). These sources are well suited to answer other important political science questions, but in gauging government intent, they are widely known to be indirect, very sparsely sampled, and often of dubious value. For example, government statistics, such as the number of "mass incidents," could offer a view of government interests, but only if we could somehow separate true numbers from government manipulation. Similarly, sample surveys can be informative, but the government obviously keeps information from ordinary citizens, and even when respondents have the information researchers are seeking they may not be willing to express themselves freely. In situations where direct interviews with officials are possible, researchers are in the position of having to read tea leaves to ascertain what their informants really believe.
Measuring intent is all the more difficult with the sparse information coming from existing methods because the Chinese government is not a monolithic entity. In fact, in those instances when different agencies, leaders, or levels of government work at cross purposes, even the concept of a unitary intent or motivation may be difficult to define, much less measure. We cannot solve all these problems, but by providing more information about the state's revealed preferences through its censorship behavior, we may be somewhat better able to produce useful measures of intent.

Theories of Censorship.
We attempt to complement the important work on how censorship is conducted, and how the Internet may increase the space for public discourse (Duan 2007; Edmond 2012; Egorov, Guriev, and Sonin 2009; Esarey and Xiao 2008, 2011; Herold 2011; Lindtner and Szablewicz 2011; MacKinnon 2012; Yang 2009; Xiao 2011), by beginning to build an empirically documented theory of why the government censors and what it is trying to achieve through this extensive program. While current scholarship draws the reasonable but broad conclusion that Chinese government censorship is aimed at maintaining the status quo for the current regime, we focus on what specifically the government believes is critical, and what actions it takes, to accomplish this goal.
To do this, we distinguish two theories of what constitutes the goals of the Chinese regime as implemented in their censorship program, each reflecting a different perspective on what threatens the stability of the regime. First is a state critique theory, which posits that the goal of the Chinese leadership is to suppress dissent, and to prune human expression that finds fault with elements of the Chinese state, its policies, or its leaders. The result is to make the sum total of available public expression more favorable to those in power. Many types of state critique are included in this idea, such as poor government performance.
Second is what we call the theory of collective action potential: the target of censorship is people who join together to express themselves collectively, stimulated by someone other than the government, and who seem to have the potential to generate collective action. In this view, collective expression (many people communicating on social media about the same subject) regarding actual collective action, such as protests, as well as expression about events that seem likely to generate collective action but have not yet done so, is likely to be censored. Whether social media posts with collective action potential find fault with or assign praise to the state, or are about subjects unrelated to the state, is irrelevant under this theory.
An alternative way to describe what we call "collective action potential" is the apparent perspective of the Chinese government, in which collective expression organized outside of governmental control equals factionalism and, ultimately, chaos and disorder. For example, on the eve of the Communist Party's 90th birthday, the state-run Xinhua news agency issued an opinion that western-style parliamentary democracy would lead to a repetition of the turbulent factionalism of China's Cultural Revolution (http://j.mp/McRDXk). Similarly, at the Fourth Session of the 11th National People's Congress in March of 2011, Wu Bangguo, member of the Politburo Standing Committee and Chairman of the Standing Committee of the National People's Congress, said that "On the basis of China's conditions...we'll not employ a system of multiple parties holding office in rotation" in order to avoid "an abyss of internal disorder" (http://j.mp/Ldhp25). China observers have often noted the emphasis placed by the Chinese government on maintaining stability (Shirk 2007; Whyte 2010; Zhang et al. 2002), as well as the government's desire to limit collective action by clipping social ties (Perry 2002, 2008). The Chinese regime encounters a great deal of contention and collective action; according to Sun Liping, a professor of Sociology at Tsinghua University, China experienced 180,000 "mass incidents" in 2010 (http://j.mp/McQeji). Because the government encounters collective action so frequently, it looms large in the actions and perceptions of the regime. The stated perspective of the Chinese government is that limitations on horizontal communication are a legitimate and effective action designed to protect its people (Perry 2010): in other words, a paternalistic strategy to avoid chaos and disorder, given the conditions of Chinese society. Current scholarship has not been able to differentiate empirically between the two theories we offer.
Marolt (2011) writes that online postings are censored when they "either criticize China's party state and its policies directly or advocate collective political action." MacKinnon (2012) argues that during the Wenzhou high speed rail crash, Internet content providers were asked to "track and censor critical postings." Esarey and Xiao (2008) find that Chinese bloggers use satire to convey criticism of the state in order to avoid harsh repression. Esarey and Xiao (2011) write that party leaders are most fearful of "Concerted efforts by influential netizens to pressure the government to change policy," but identify these pressures as criticism of the state. Shirk (2011) argues that the aim of censorship is to constrain the mobilization of political opposition, but her examples suggest that critical viewpoints are those that are suppressed.
Collective action in the form of protests is often thought to be the death knell of authoritarian regimes. Protests in East Germany, Eastern Europe, and most recently the Middle East have all preceded regime change (Ash 2002; Lohmann 1994; Przeworski et al. 2000). A great deal of scholarship on China has focused on what leads people to protest and their tactics (Blecher 2002; Cai 2002; Chen 2000; Lee 2007; O'Brien and Li 2006; Perry 2002, 2008). The Chinese state seems focused on preventing protest at all costs; indeed, the prevalence of collective action is part of the formal evaluation criteria for local officials (Edin 2003). However, several recent works argue that authoritarian regimes may expect and welcome substantively narrow protests as a way of enhancing regime stability by identifying, and then dealing with, discontented communities (Dimitrov 2008; Lorentzen 2010; Chen 2012). Chen (2012) argues that small, isolated protests have a long tradition in China and are an expected part of government.
Outline of Results. The nature of the two theories means that either or both could be correct or incorrect. Here, we offer evidence that, with few exceptions, the answer is simple: state critique theory is incorrect and the theory of collective action potential is correct. Our data show that the Chinese censorship program allows for a wide variety of criticisms of the Chinese government, its officials, and its policies. As it turns out, censorship is primarily aimed at restricting the spread of information that may lead to collective action, regardless of whether or not the expression is in direct opposition to the state and whether or not it is related to government policies. Large increases in online volume are good predictors of censorship when these increases are associated with events related to collective action, e.g., protests on the ground. In addition, we measure sentiment within each of these events and show that during these events, the government censors views that are both supportive and critical of the state. These results reveal that the Chinese regime believes suppressing social media posts with collective action potential, rather than suppression of criticism, is crucial to maintaining power.

DATA
We describe here the challenges involved in collecting large quantities of detailed information that the Chinese government does not want anyone to see and goes to great lengths to prevent anyone from accessing. We discuss the types of censorship we study, our data collection process, the limitations of this study, and ways we organize the data for subsequent analyses.

Types of Censorship
Human expression is censored in Chinese social media in at least three ways, the last of which is the focus of our study. First is "The Great Firewall of China," which blocks certain entire Web sites from operating in the country. The Great Firewall is an obvious problem for foreign Internet firms, and for the Chinese people interacting with others outside of China on these services, but it does little to limit the expressive power of Chinese people, who can find other sites on which to express themselves in similar ways. For example, Facebook is blocked in China, but RenRen is a close substitute; similarly, Twitter is unavailable, but Sina Weibo is a popular Chinese clone.
Second is "keyword blocking," which stops a user from posting text that contains banned words or phrases. This has limited effect on freedom of speech, since netizens do not find it difficult to outwit automated programs. To do so, they use analogies, metaphors, satire, and other evasions. The Chinese language offers novel evasions, such as substituting for banned characters others that have unrelated meanings but sound alike ("homophones") or look similar ("homographs"). An example of a homograph is 目田, which has the nonsensical literal meaning of "eye field" but is used by World of Warcraft players to substitute for the banned but similarly shaped 自由, which means freedom. As an example of a homophone, the sound "hexie" is often written as 河蟹, which means "river crab," but is used to refer to 和谐, which is the official state policy of a "harmonious society." Once past the first two barriers to freedom of speech, the text gets posted on the Web, and the censors read and remove those posts they find objectionable. As nearly as we can tell from the literature, observers, private conversations with those inside several governments, and an examination of the data, content filtering is in large part a manual effort: censors read posts by hand. Automated methods appear to be an auxiliary part of this effort. Unlike The Great Firewall and keyword blocking, hand censoring cannot be evaded by clever phrasing. Thus, it is this last and most extensive form of censoring that we focus on in this article.

Collection
We begin with social media blogs in which it is at least possible for writers to express themselves fully, prior to possible censorship, and leave to other research social media services that constrain authors to very short Twitter-like (weibo) posts (e.g., Bamman, O'Connor, and Smith 2012). (All tables and figures appear in color in the online version, which can be found at http://j.mp/LdVXqN.) In many countries, such as the U.S., almost all blog posts appear on a few large sites (Facebook, Google's blogspot, Tumblr, etc.); China does have some big sites such as sina.com, but a large portion of its social media landscape is finely distributed over numerous individual sites, e.g., local bbs forums. This difference poses a considerable logistical challenge for data collection, with different Web addresses, different software interfaces, different companies and local authorities monitoring those accessing the sites, different network reliabilities, access speeds, terms of use, and censorship modalities, and different ways of potentially hindering or stopping our data collection. Fortunately, the structure of Chinese social media also turns out to pose a special opportunity for studying localized control of collective expression, since the numerous local sites provide considerable information about the geolocation of posts, much more than is available even in the U.S.
The most complicated engineering challenges in our data collection process involve locating, accessing, and downloading posts from many Web sites before Internet content providers or the government reads and censors those deemed objectionable by the authorities; 1 revisiting each post frequently enough to learn if and when it was censored; and proceeding with data collection in so many places in China without affecting the system we were studying or being prevented from studying it. We are able to accomplish this because our data collection methods are highly automated, whereas Chinese censorship entails manual effort. Our extensive engineering effort, which we do not detail here for obvious reasons, is executed at many locations around the world, including inside China.
Ultimately, we were able to locate, obtain access to, and download social media posts from 1,382 Chinese Web sites during the first half of 2011. The most striking feature of the structure of Chinese social media is its extremely long (power-law like) tail. Figure 1 gives a sample of the sites and their logos in Chinese (in panel (a)) and a pie chart of the number of posts that illustrates this long tail (in panel (b)). The largest sources of posts include blog.sina (with 59% of posts), hi.baidu, bbs.voc, bbs.m4, and tianya, but the tail keeps going. 2 Social media posts cover such a huge range of topics that a random sampling strategy attempting to cover everything is rarely informative about any individual topic of interest. Thus, we begin with a stratified random sampling design, organized hierarchically. We first choose eighty-five separate topic areas within three categories of hypothesized political sensitivity, ranging from "high" (such as Ai Weiwei) to "medium" (such as the one child policy) to "low" (such as a popular online video game). We chose the specific topics within these categories by reviewing prior literature, consulting with China specialists, and studying current events. Appendix A gives a complete list. Then, within each topic area, defined by a set of keywords, we collected all social media posts over a six-month period. We examined the posts in each area, removed spam, and explored the content with a tool for computer-assisted reading (Crosas et al. 2012; Grimmer and King 2011). (We repeated this procedure for other time periods, and in some cases in more depth for some issue areas, and overall collected and analyzed 11,382,221 posts.) All posts originated from sites in China and were written in Chinese; we excluded posts from Hong Kong and Taiwan. 3 For each post, we examined its content, placed it on a timeline according to topic area, and revisited the Web site from which it came repeatedly thereafter to determine whether it was censored.
We supplemented this information with other specific data collections as needed.
The censors are not shy, and so we found it straightforward to distinguish (intentional) censorship from sporadic outages or transient time-out errors. Censored Web sites include notes such as "Sorry, the host you were looking for does not exist, has been deleted, or is being investigated" and are sometimes even adorned with pictures of Jingjing and Chacha, Internet police cartoon characters. Although our methods are faster than the Chinese censors, the censors nevertheless appear highly expert at their task. We illustrate this with analyses of random samples of posts surrounding the 9/27/2011 Shanghai Subway crash, posts collected between 4/10/2012 and 4/12/2012 about Bo Xilai, a recently deposed member of the Chinese elite, and a separate collection of posts about his wife, Gu Kailai, who was accused and convicted of murder. We monitored each of the posts in these three areas continuously in near real time for nine days. (Censorship in other areas follows the same basic pattern.) Histograms of the time until censorship appear in Figure 2. For all three, the vast majority of censorship activity occurs within 24 hours of the original posting, although a few deletions occur more than five days later. This is a remarkable organizational accomplishment, requiring large-scale, military-like precision: the many leaders at different levels of government and at different Internet content providers first need to come to a decision (by agreement, direct order, or compromise) about what to censor in each situation; they need to communicate it to tens or hundreds of thousands of individuals; and then they must all complete execution of the plan within roughly 24 hours. As Edmond (2012) points out, the proliferation of information sources on social media makes information more difficult to control; however, the Chinese government has overcome these obstacles on a national scale.
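The revisit protocol just described can be sketched as a simple polling check. This is a minimal illustration, not the authors' actual system: the function names are ours, and the marker strings are English renderings of the Chinese deletion notices quoted above; a real implementation would match the Chinese-language text.

```python
import urllib.request

# Illustrative English renderings of the deletion notices described above;
# an actual system would match the original Chinese-language notices.
CENSORSHIP_MARKERS = [
    "does not exist",
    "has been deleted",
    "being investigated",
]

def classify_page(page_text):
    """Classify one fetched page as 'censored' or 'live' from its text."""
    if any(marker in page_text for marker in CENSORSHIP_MARKERS):
        return "censored"
    return "live"

def check_post(url):
    """Revisit one post; transient failures are outages, not censorship."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return classify_page(resp.read().decode("utf-8", errors="replace"))
    except OSError:
        return "outage"
```

Revisiting each post on a schedule and recording the first time the check returns "censored" would yield time-to-censorship data of the kind summarized in Figure 2.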
Given the normal human difficulties of coming to agreement with many others, and the usual difficulty of achieving high levels of intercoder reliability on interpreting text (e.g., Hopkins and King 2010, Appendix B), the effort the government puts into its censorship program is large, and highly professional. We have found some evidence of disagreements within this large and multifarious bureaucracy, such as at different levels of government, but we have not yet studied these differences in detail.

Limitations
As we show below, our methodology reveals a great deal about the goals of the Chinese leadership, but it misses self-censorship and censorship that may occur before we are able to obtain the post in the first place; it also does not quantify the direct effects of The Great Firewall, keyword blocking, or search filtering in finding what others say. We have also not studied the effect of physical violence, such as the arrest of bloggers, or threats of the same. Although many officials and levels of government have a hand in the decisions about what and when to censor, our data only sometimes enable us to distinguish among these sources.
We are of course unable to determine the consequences of these limitations, although it is reasonable to expect that the most important of these are physical violence, threats, and the resulting self-censorship. Although the social media data we analyze include expressions by millions of Chinese and cover an extremely wide range of topics and speech behavior, the presumably much smaller number of discussions we cannot observe are likely to be those of the most (or most urgent) interest to the Chinese government.
Finally, in the past, studies of Internet behavior were judged by how well their measures approximated "real world" behavior; since then, online behavior has become such a large and important part of human life that the expression observed in social media is now important in its own right, regardless of whether it is a good measure of non-Internet freedoms and behaviors. Either way, we offer little evidence here of connections between what we learn in social media and press freedom or other types of human expression in China.

ANALYSIS STRATEGY
Overall, an average of approximately 13% of all social media posts are censored. This average level is quite stable over time when aggregating over all posts in all areas, but it masks enormous changes in the volume of posts and censorship efforts. Our first hint of what might (not) be driving censorship rates is a surprisingly low correlation between our ex ante measure of political sensitivity and censorship: censorship rates in the low and medium categories were essentially the same (16% and 17%, respectively) and only marginally lower than in the high category (24%). 4 Clearly something else is going on. To convey what this is, we now discuss our coding rules, our central hypothesis, and the exact operational procedures the Chinese government may use to censor.

Coding Rules
We discuss our coding rules in five steps. First, we begin with social media posts organized into the eighty-five topic areas defined by keywords from our stratified random sampling plan. Although we have conducted extensive checks that these assignments are accurate (by reading large numbers of posts and also via modern computer-assisted reading technology), our topic areas will inevitably (with any machine or human classification technology) include some posts that do not belong. We take the conservative approach of first drawing conclusions even when affected by this error. We then do numerous checks (via the same techniques) after the fact to ensure we are not missing anything important. We report below the few patterns that could be construed as systematic error; each one turns out to strengthen our conclusions.
Second, conversation in social media in almost all topic areas (and countries) is well known to be highly "bursty," that is, marked by periods of stability punctuated by occasional sharp spikes in volume around specific subjects (Ratkiewicz et al. 2010). We also found that, with only two exceptions (pornography and criticisms of the censors, described below), censorship effort is often especially intense within volume bursts. Thus, we organize our data around these volume bursts. We think of each of the eighty-five topic areas as a six-month time series of daily volume and detect bursts using the weights calculated from robust regression techniques to identify outlying observations from the rest of the time series (Huber 1964; Rousseeuw and Leroy 1987). In our data, this sophisticated burst detection algorithm is almost identical to using time periods with volume more than three standard deviations greater than the rest of the six-month period. With this procedure, we detected eighty-seven distinct volume bursts within sixty-seven of the eighty-five topic areas. 5

4 That all three figures are higher than the average level of 13% reflects the fact that the topic areas we picked ex ante had generated at least some public discussion and included posts about events with collective action potential.

Third, we examined the posts in each volume burst and identified the real world event associated with the online conversation. This was easy and the results unambiguous.
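The three-standard-deviation heuristic, which the text notes is nearly equivalent to the robust-regression weighting actually used, can be sketched as follows. The function name, threshold handling, and example series are our illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def detect_bursts(volume, threshold=3.0):
    """Flag days whose post volume exceeds the rest of the series by
    more than `threshold` standard deviations.

    A simplified stand-in for the robust-regression weighting described
    in the text (Huber 1964); the authors report the two approaches are
    almost identical on their data.
    """
    volume = np.asarray(volume, dtype=float)
    bursts = np.zeros(len(volume), dtype=bool)
    for i in range(len(volume)):
        rest = np.delete(volume, i)          # "the rest of the period"
        mu, sigma = rest.mean(), rest.std()
        if sigma > 0 and volume[i] > mu + threshold * sigma:
            bursts[i] = True
    return bursts

# Illustrative series: 30 days of ordinary chatter with one sharp spike
series = [18 if day % 2 == 0 else 22 for day in range(30)]
series[10] = 200
print(np.where(detect_bursts(series))[0])  # prints [10]
```

Leaving each candidate day out before computing the mean and standard deviation keeps a large spike from masking itself by inflating the baseline.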
Fourth, we classified each event into one of five content areas: (1) collective action potential, (2) criticism of the censors, (3) pornography, (4) government policies, and (5) other news. As with topic areas, each of these categories may include posts that are critical or not critical of the state, its leaders, and its policies. We define collective action as the pursuit of goals by more than one person controlled or spurred by actors other than government officials or their agents. Our theoretical category of "collective action potential" involves any event that has the potential to cause collective action, but to be conservative, and to ensure clear and replicable coding rules, we limit this category to events which (a) involve protest or organized crowd formation outside the Internet; (b) relate to individuals who have organized or incited collective action on the ground in the past; or (c) relate to nationalism or nationalist sentiment that have incited protest or collective action in the past. (Nationalism is treated separately because of its frequently demonstrated high potential to generate collective action and also to constrain foreign policy, an area which has long been viewed as a special prerogative of the government; Reilly 2012.) Events are categorized as criticism of censors if they pertain to government or nongovernment entities with control over censorship, including individuals and firms. Pornography includes advertisements and news about movies, Web sites, and other media containing pornographic or explicitly sexual content. Policies refer to government statements or reports of government activities pertaining to domestic or foreign policy. And "other news" refers to reporting on events, other than those which fall into one of the other four categories.
Finally, we conducted a study to verify the reliability of our event coding rules. To do this, we gave the rules above to two people familiar with Chinese politics and asked them to code each of the eighty-seven events (each associated with a volume burst) into one of the five categories; the coders worked independently. The two coders agreed on 98.9% (i.e., eighty-six of eighty-seven) of the events. The only event with divergent codes was the pelting of Fang Binxing (the architect of China's Great Firewall) with shoes and eggs. This event included criticism of the censors and, to some extent, collective action, because several people were working together to throw things at Fang. We broke the tie by counting this event as an example of criticism of the censors, but however this event is coded does not affect our results, since we predict posts in both categories will be censored.

Central Hypothesis
Our central hypothesis is that the government censors all posts in topic areas during volume bursts that discuss events with collective action potential. That is, the censors do not judge whether individual posts have collective action potential, perhaps in part because rates of intercoder reliability would likely be very low. In fact, Kuran (1989) and Lohmann (2002) show that it is information about a collective action event that propels collective action and so distinguishing this from explicit calls for collective action would be difficult if not impossible. Instead, we hypothesize that the censors make the much easier judgment, about whether the posts are on topics associated with events that have collective action potential, and they do it regardless of whether or not the posts criticize the state.
The censors also attempt to censor all posts in the categories of pornography and criticism of the censors, but not posts within event categories of government policies and news.

The Government's Operational Procedures
The exact operational procedures by which the Chinese government censors are of course not observed. But based on conversations with individuals within and close to the Chinese censorship apparatus, we believe our coding rules can be viewed as an approximation to them. (In fact, after a draft of our article was written and made public, we received communications confirming our story.) We define topic areas by hand, sort social media posts into topic areas by keywords, and detect volume bursts automatically via statistical methods for time series data on post volume. (These steps might be combined by the government to detect topics automatically based on spikes in posts with high similarity, but this would likely involve considerable error given the inadequacies of known fully automated clustering technologies.) In some cases, identifying the real world event might occur before the burst, such as if the censors are secretly warned about an upcoming event (such as the imminent arrest of a dissident) that could spark collective action. Identifying events from bursts that were observed first would need to be implemented at least mostly by hand, perhaps with some help from algorithms that identify statistically improbable phrases. Finally, the actual decision to censor an individual post, which, according to our hypothesis, involves checking whether it is associated with a particular event, is almost surely accomplished largely by hand, since no known statistical or machine learning technology can achieve a level of accuracy anywhere near that which we observe in the Chinese censorship program. Here, censors may begin with keyword searches on the event identified, but they will need to manually read through the resulting posts to censor those related to the triggering event.
For example, when censors identified protests in Zengcheng as precipitating online discussion, they may have conducted a keyword search among posts for Zengcheng, but they would have had to read through these posts by hand to separate posts about protests from posts talking about Zengcheng in other contexts, say Zengcheng's lychee harvest.
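The two automated steps in this hypothesized pipeline, burst detection over post-volume time series and keyword pre-filtering before hand review, can be sketched as follows. This is an illustrative reconstruction, not the government's (or our) actual procedure; the rolling window and threshold values are hypothetical.

```python
import statistics

def find_bursts(daily_counts, window=7, threshold=3.0):
    """Flag days whose post volume spikes above a rolling baseline.

    A day counts as a burst if its volume exceeds the mean of the
    preceding `window` days by more than `threshold` trailing standard
    deviations.  Both parameter values are illustrative assumptions.
    """
    bursts = []
    for t in range(window, len(daily_counts)):
        baseline = daily_counts[t - window:t]
        mu = statistics.mean(baseline)
        sd = statistics.pstdev(baseline) or 1.0  # guard against zero spread
        if (daily_counts[t] - mu) / sd > threshold:
            bursts.append(t)
    return bursts

def posts_matching(posts, keywords):
    """Keyword pre-filter; matched posts still require hand reading to
    separate event-related posts from incidental keyword matches."""
    return [p for p in posts if any(k in p for k in keywords)]
```

On a flat series with a single spike, only the spike day is flagged; as the Zengcheng example illustrates, a keyword match then only narrows the set of posts that censors would still have to read by hand.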

RESULTS
We now offer three increasingly specific tests of our hypotheses. These tests are based on (1) post volume, (2) the nature of the event generating each volume burst, and (3) the specific content of the censored posts. Additionally, Appendix C gives some evidence that government censorship behavior paradoxically reveals the Chinese government's intent to act outside the Internet.

Post Volume
If the goal of censorship is to stop discussions with collective action potential, then we would expect more censorship during volume bursts than at other times. We also expect some bursts (those with collective action potential) to have much higher levels of censorship.
To begin to study this pattern, we define censorship magnitude for a topic area as the percent censored within a volume burst minus the percent censored outside all bursts. (The base rates, which vary very little across issue areas and which we present in detail in graphs below, do not impose empirically relevant ceiling or floor effects on this measure.) This is a stringent measure of the interests of the Chinese government because censoring during a volume burst is obviously more difficult owing to there being more posts to evaluate, less time to do it in, and little or no warning of when the event will take place.
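The measure just defined can be written down directly. The sketch below assumes a hypothetical encoding in which each post is a (day, was_censored) pair; it is our illustration, not the article's replication code.

```python
def censorship_magnitude(posts, burst_days):
    """Percent censored inside volume bursts minus percent censored
    outside all bursts, for one topic area.

    `posts` is a list of (day, was_censored) pairs; `burst_days` is the
    set of days flagged as volume bursts for this topic.
    """
    inside = [c for d, c in posts if d in burst_days]
    outside = [c for d, c in posts if d not in burst_days]
    pct = lambda xs: 100.0 * sum(xs) / len(xs) if xs else 0.0
    return pct(inside) - pct(outside)
```

Because the measure subtracts each topic's own outside-burst base rate, a topic with uniformly high censorship at all times would score near zero on this measure.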
Panel (a) in Figure 3 gives a histogram with results that appear to support our hypotheses. The results show that the bulk of volume bursts have a censorship magnitude centered around zero, but with an exceptionally long right tail (and no corresponding long left tail). Clearly volume bursts are often associated with dramatically higher levels of censorship even compared to the baseline during the rest of the six months for which we observe a topic area.

The Nature of Events Generating Volume Bursts
We now show that volume bursts generated by events pertaining to collective action, criticism of censors, and pornography are censored, albeit, as we show, in different ways, while post volume generated by discussion of government policy and other news is not.

(b) Censorship Magnitude by Event Type
We discuss the state critique hypothesis in the next subsection. Here, we offer three separate, and increasingly detailed, views of our present results. First, consider panel (b) of Figure 3, which takes the same distribution of censorship magnitude as in panel (a) and displays it by event type. The result is dramatic: events related to collective action, criticism of the censors, and pornography (in red, orange, and yellow) fall largely to the right, indicating high levels of censorship magnitude, while events related to policies and news fall to the left (in blue and purple). On average, censorship magnitude is 27% for collective action, but −1% and −4% for policy and news. Second, we list the specific events with the highest and lowest levels of censorship magnitude. These appear, using the same color scheme, in Figure 4. The events with the highest collective action potential include protests in Inner Mongolia precipitated by the killing of an ethnic Mongol herder by a coal truck driver, riots in Zengcheng by migrant workers over an altercation between a pregnant woman and security personnel, the arrest of artist/political dissident Ai Weiwei, and the bombings over land claims in Fuzhou. Notably, one of the highest "collective action potential" events was not political at all: following the Japanese earthquake and subsequent meltdown of the nuclear plant in Fukushima, a rumor spread through Zhejiang province that the iodine in salt would protect people from radiation exposure, and a mad rush to buy salt ensued. The rumor was biologically false, and had nothing to do with the state one way or the other, but it was highly censored; the reason appears to be the localized control of collective expression by actors other than the government. Indeed, we find that salt rumors on local Web sites are much more likely to be censored than salt rumors on national Web sites.
Consistent with our theory of collective action potential, some of the most highly censored events are not criticisms or even discussions of national policies, but rather highly localized collective expressions that represent or threaten group formation. One such example is posts on a local Wenzhou Web site expressing support for Chen Fei, an environmental activist who supported an environmental lottery to help local environmental protection. Even though Chen Fei is supported by the central government, all posts supporting him on the local Web site are censored, likely because of his record of organizing collective action. In the mid-2000s, Chen founded an environmental NGO with more than 400 registered members who created China's first "no-plastic-bag village," which eventually led to legislation on use of plastic bags. Another example is a heavily censored group of posts expressing collective anger about lead poisoning in Jiangsu Province's Suyang County from battery factories. These posts talk about children sickened by pollution from lead acid battery factories in Zhejiang province belonging to the Tianneng Group, and report that hospitals refused to release results of lead tests to patients. In January 2011, villagers from Suyang gathered at the factory to demand answers. Such collective organization is not tolerated by the censors, regardless of whether it supports the government or criticizes it.
In all events categorized as having collective action potential, censorship within the event is more frequent than censorship outside the event. In addition, these events are, on average, considerably more censored than other types of events. These facts are consistent with our theory that the censors are intentionally searching for and taking down posts related to events with collective action potential. However, we add to these tests one based on an examination of what might lead to different levels of censorship among events within this category: although our measure treats collective action potential as a dichotomy, some of the events in this category clearly have more of it than others. By studying the specific events, it is easy to see that events with the lowest levels of censorship magnitude generally have less collective action potential than the very highly censored cases, consistent with our theory. To see this, consider the few events classified as having collective action potential with the lowest levels of censorship magnitude. These include a volume burst associated with protests about ethnic stereotypes in the animated children's movie Kungfu Panda 2, which was properly classified as a collective action event, but whose potential for future protests is obviously highly limited. Another example is Qian Yunhui, a village leader in Zhejiang, who led villagers to petition local governments for compensation for land seized and was then (supposedly accidentally) crushed to death by a truck. These two events involving Qian had high collective action potential, but both occurred before our observation period. In our period, there was an event that led to a volume burst around the much narrower and far less incendiary issue of how much money his family was given as a reparation payment for his death.
Finally, we give more detailed information about a few examples of the three types of events, each based on a random sample of posts in one topic area. First, Figure 5 gives four time series plots that initially involve low levels of censorship, followed by a volume spike during which we witness very high levels of censorship. Censorship in these examples is high both in the absolute number of censored posts and in the percent of posts that are censored. The pattern in all four graphs (and others we do not show) is evident: the Chinese authorities disproportionately focus considerable censorship efforts during volume bursts.
We also went further and analyzed (by hand and via computer-assisted methods described in Grimmer and King 2011) the smaller number of uncensored posts during volume bursts associated with events that have collective action potential, such as in panel (a) of Figure 5 where the red area does not entirely cover the gray during the volume burst. In this event, and the vast majority of cases like this one, uncensored posts are not about the event, but just happen to have the keywords we used to identify the topic area. Again we find that the censors are highly accurate and aimed at increasing censorship magnitude. Automated methods of individual classification are not capable of this high a level of accuracy.
Second, we offer four time-series plots of random samples of posts in Figure 6 which illustrate topic areas with one or more volume bursts but without censorship. These cover important, controversial, and potentially incendiary topics, including the one child policy, education policy, and corruption, as well as news about power prices, but none of the volume bursts were associated with any localized collective action.

Finally, we found that almost all of the topic areas exhibit censorship patterns portrayed by Figures 5 and 6. The two with divergent patterns can be seen in Figure 7. These topics involve analyses of random samples of posts in the areas of pornography (panel (a)) and criticism of the censors (panel (b)). What is distinctive about these topics compared to the remaining topics we studied is that censorship levels remain consistently high over the entire six-month period and, consequently, do not increase further during volume bursts. Similar to American politicians who talk about pornography as undercutting the "moral fiber" of the country, Chinese leaders describe it as violating public morality and damaging the health of young people, as well as promoting disorder and chaos; regardless, censorship in one form or another is often the consequence.
More striking is an oddly "inappropriate" behavior of the censors: They offer freedom to the Chinese people to criticize every political leader except for themselves, every policy except the one they implement, and every program except the one they run. Even within the strained logic the Chinese state uses to justify censorship, Figure 7 (panel (b))-which reveals consistently high levels of censored posts that involve criticisms of the censors-is remarkable.

Content of Censored and Uncensored Posts
Our final test involves comparing the content of censored and uncensored posts. State critique theory predicts that posts critical of the state are those censored, regardless of their collective action potential. In contrast, the theory of collective action potential predicts that posts related to collective action events will be censored regardless of whether they criticize or praise the state. To conduct this test on a very large number of posts, we need a method of automated text analysis that can accurately estimate the percentage of posts in each category of any given categorization scheme. We thus adapt to the Chinese language the methodology introduced in the English language by Hopkins and King (2010). This method does not require (inevitably error prone) machine translation, individual classification algorithms, or identification of a list of keywords associated with each category; instead, it requires a small number of posts read and categorized in the original Chinese. We conducted a series of rigorous validation tests and obtained highly accurate results, as accurate as if it were possible to read and code all the posts by hand, which of course is not feasible. We describe these methods, and give a sample of our validation tests, in Appendix B.
For our analyses, we use categories of posts that are (1) critical of the state, (2) supportive of the state, or (3) irrelevant or factual reports about the events. However, we are not interested in the percent of posts in each of these categories, which would be the usual output of the Hopkins and King procedure. We are also not interested in the percent of posts in each category among those posts which were censored and among those which were not censored, which would result from running the Hopkins-King procedure once on each set of data. Instead, we need to estimate and compare the percent of posts censored in each of the three categories. We thus develop a Bayesian procedure (described in Appendix B) to extend the Hopkins-King methodology to estimate our quantities of interest.
We first analyze specific events and then turn to a broader analysis of a random sample of posts from all of our events. For collective action events we choose those which unambiguously fit our definition-the arrest of the dissident Ai Weiwei, protests in Inner Mongolia, and bombings in reaction to the state's demolition of housing in Fuzhou city. Panel (a) of Figure 8 reports the percent of posts that are censored for each event, among those that criticize the state (right/red) and those which support the state (left/green); vertical bars are 95% confidence intervals. As is clear, regardless of whether the posts support or criticize the state, they are all censored at a high level, about 80% on average. Despite the conventional wisdom that the censorship program is designed to prune the Internet of posts critical of the state, a hypothesis test indicates that the percent censorship for posts that criticize the state is not larger than the percent censorship of posts that support the state, for each event. This clearly shows support for the collective action potential theory and against the state critique theory of censorship.
We also conduct a parallel analysis for three topics, taken from the analysis in Figure 6, that cover highly visible and apparently sensitive policies associated with events that had no collective action potential-one child policy, corruption policy, and news of increasing food prices. In this situation, we again get the empirical result that is consistent with our theory, in both analyses: Categories critical and supportive of the state both fall at about the same, low level of censorship, about 10% on average.
To validate that these results hold across all events, we randomly draw posts from all volume bursts with and without collective action potential. Figure 9 presents the results in parallel to those in Figure 8. Here, we see that categories critical and supportive of the state again fall at the same, high level of censorship for collective action potential events, while categories critical and supportive of the state fall at the same, low level of censorship for news and policy events. Again, there is no significant difference between the percent censored among those which criticize and support the state, but a large and significant difference between the percent censored among collective action potential and noncollective action potential events.
The results are unambiguous: posts are censored if they are in a topic area with collective action potential and not otherwise. Whether or not the posts are in favor of the government, its leaders, and its policies has no measurable effect on the probability of censorship.
Finally, we conclude this section with some examples of posts to give some of the flavor of exactly what is going on in Chinese social media. First we offer two examples, not associated with collective action potential events, of posts not censored even though they are unambiguously against the state and its leaders. For example, consider this highly personal attack, naming the relevant locality: "This is a city government [Yulin City, Shaanxi] that treats life with contempt, this is government officials run amuck, a city government without justice, a city government that delights in that which is vulgar, a place where officials all have mistresses, a city government that is shameless with greed, a government that trades dignity for power, a government without humanity, a government that has no limits on immorality, a government that goes back on its word, a government that treats kindness with ingratitude, a government that cares nothing for posterity...." These posts are neither exceptions nor unusual: We have thousands more. Negative posts, including those about "sensitive" topics such as Tiananmen Square or reform of China's one-party system, do not accidentally slip through a leaky or imperfect system. The evidence indicates that the censors have no intention of stopping them. Instead, they are focused on removing posts that have collective action potential, regardless of whether or not they cast the Chinese leadership and their policies in a favorable light.
To emphasize this point, we now highlight the obverse condition by giving examples of two posts related to events with collective action potential that support the state but which nevertheless were quickly censored. During the bombings in Fuzhou, the government censored this post, which unambiguously condemns the actions of Qian Mingqi, the bomber, and explicitly praises the government's work on the issues of housing demolition, which precipitated the bombings: "The bombing led not only to the tragedy of his death but the death of many government workers. Even if we can verify what Qian Mingqi said on Weibo that the building demolition caused a great deal of personal damage, we should still condemn his extreme act of retribution.... The government has continually put forth measures and laws to protect the interests of citizens in building demolition. And the media has called attention to the plight of those experiencing housing demolition. The rate at which compensation for housing demolition has increased exceeds inflation. In many places, this compensation can change the fate of an entire family." Another example is the following censored post supporting the state. It accuses the local leader Ran Jianxin, whose death in police custody triggered protests in Lichuan, of corruption: "According to news from the Badong county propaganda department web site, when Ran Jianxin was party secretary in Lichuan, he exploited his position for personal gain in land requisition, building demolition, capital construction projects, etc. He accepted bribes, and is suspected of other criminal acts."

CONCLUDING REMARKS
The new data and methods we offer seem to reveal highly detailed information about the interests of the Chinese people, the Chinese censorship program, and the Chinese government over time and within different issue areas. These results also shed light on an enormously secretive government program designed to suppress information, as well as on the interests, intentions, and goals of the Chinese leadership.
The evidence suggests that when the leadership allowed social media to flourish in the country, they also allowed the full range of expression of negative and positive comments about the state, its policies, and its leaders. As a result, government policies sometimes look as bad, and leaders can be as embarrassed, as is often the case with elected politicians in democratic countries, but, as they seem to recognize, looking bad does not threaten their hold on power so long as they manage to eliminate discussions associated with events that have collective action potential-where a locus of power and control, other than the government, influences the behaviors of masses of Chinese people. With respect to this type of speech, the Chinese people are individually free but collectively in chains.
Much research could be conducted on the implications of this governmental strategy; as a spur to this research, we offer some initial speculations here. For one, so long as collective action is prevented, social media can be an excellent way to obtain effective measures of the views of the populace about specific public policies and experiences with the many parts of Chinese government and the performance of public officials. As such, this "loosening up" of the constraints on public expression may, at the same time, be an effective governmental tool for learning how to satisfy, and ultimately mollify, the masses. From this perspective, the surprising empirical patterns we discover may well be a theoretically optimal strategy for a regime to use social media to maintain a hold on power. For example, Dimitrov (2008) argues that regimes collapse when their people stop bringing grievances to the state, since this is an indicator that the state is no longer regarded as legitimate. Similarly, Egorov, Guriev, and Sonin (2009) argue that dictators with low natural resource endowments allow freer media in order to improve bureaucratic performance. By extension, this suggests that allowing criticism, as we found the Chinese leadership does, may legitimize the state and help the regime maintain power. Indeed, Lorentzen (2012) develops a formal model in which authoritarian regimes balance media openness with censorship in order to minimize local corruption while maintaining regime stability. Perhaps the formal theory community will find ways of improving their theories after conditioning on our empirical results.
More generally, beyond the findings of this article, the data collected represent a new way to study China and different dimensions of Chinese politics, as well as facets of comparative politics more broadly. For the study of China, our approach sheds light on authoritarian resilience, center-local relations, subnational politics, international relations, and Chinese foreign policy. By examining what events are censored at the national level versus a subnational level, our approach indicates some areas where local governments can act autonomously. Additionally, by clearly revealing government intent, our approach allows an examination of the differences between the priorities of various subnational units of government. Because we can analyze social media and censorship in the context of real-world events, this approach is able to reveal insights into China's international relations and foreign policy. For example, do displays of nationalism constrain the government's foreign policy options and activities? Finally, China's censorship apparatus can be thought of as one of the input institutions Nathan (2003) identifies as an important source of authoritarian resilience, and the effectiveness and capabilities of the censorship apparatus may shed light on the CCP's regime institutionalization and longevity.
In the context of comparative politics, our work could directly reveal information about state capacity as well as shed light on the durability of authoritarian regimes and regime change. Recent work on the role of the Internet and social media in the Arab Spring (Ada et al. 2012; Bellin 2012) debates the exact role played by these technologies in organizing collective action and motivating regional diffusion, but consistently highlights the relevance of these technological innovations to the longevity of authoritarian regimes worldwide. Edmond (2012) models how an increase in information sources (e.g., Internet, social media) will be bad for a regime unless the regime has economies of scale in controlling information sources. While the Internet and social media in general have smaller economies of scale, because China devolves the bulk of censorship responsibility to Internet content providers, the regime maintains large economies of scale in the face of new technologies. China, as a relatively rich and resilient authoritarian regime with a sophisticated and effective censorship apparatus, is probably being watched closely by autocrats from around the world.
Beyond learning the broad aims of the Chinese censorship program, we seem to have unearthed a valuable source of continuous time information on the interests of the Chinese people and the intentions and goals of the Chinese government. Although we illustrated this with time series in 85 different topic areas, the effort could be expanded to many other areas chosen ex ante or even discovered as online communities form around new subjects over time. The censorship behavior we observe may be predictive of future actions outside the Internet (see Appendix C), is informative even when the traditional media is silent, and would likely serve a variety of other scholarly and practical uses in government policy and business relations.
Along the way, we also developed methods of computer-assisted text analysis, adapted them to this application, and demonstrated that they work well in the Chinese language. These methods would seem to be of use far beyond our specific application. We also conjecture that our data collection procedures, text analysis methods, engineering infrastructure, theories, and overall analytic and empirical strategies might be applicable in other parts of the world that suppress freedom of the press.

APPENDIX B: AUTOMATED CHINESE TEXT ANALYSIS
We begin with methods of automated text analysis developed in Hopkins and King (2010) and now widely used in academia and private industry. This approach enables one to define a set of mutually exclusive and exhaustive categories, to then code a small number of example posts within each category (known as the labeled "training set"), and to infer the proportion of posts within each category in a potentially much larger "test set" without hand coding their category labels. The methodology is colloquially known as "ReadMe," which is the name of the open source software program that implements it.
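To convey the core idea, here is a deliberately simplified sketch of a ReadMe-style estimator: category proportions are recovered by solving a linear system linking observed feature frequencies in the unlabeled test set to per-category feature frequencies from the training set. The actual ReadMe software averages over many random subsets of word-stem profiles and uses a different solver; the single least-squares solve and the clip-and-renormalize step below are our illustrative simplifications.

```python
import numpy as np

def estimate_proportions(train_X, train_y, test_X, n_categories):
    """Estimate test-set category proportions without labeling test posts.

    Solves  mean(test_X) ~= F @ pi  for the proportions pi, where
    F[j, k] = P(feature j | category k) is estimated from the labeled
    training set.  train_X and test_X are (n_posts, n_features) binary
    matrices; train_y holds the hand-coded category of each training post.
    """
    F = np.column_stack([
        train_X[train_y == k].mean(axis=0) for k in range(n_categories)
    ])
    target = test_X.mean(axis=0)
    pi, *_ = np.linalg.lstsq(F, target, rcond=None)
    pi = np.clip(pi, 0.0, None)   # crude nonnegativity fix
    return pi / pi.sum()          # renormalize onto the simplex
```

The key property, shared with ReadMe itself, is that no individual post is ever classified; only the aggregate category proportions are estimated.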
We adapt and extend this method for our purposes in four steps. First, we translate different binary representations of Chinese text to the same unicode representation. Second, we eliminate punctuation and drop characters that appear in fewer than 1% or more than 99% of our posts. Third, since words in Chinese are composed of one to five characters, but without any spacing or punctuation to demarcate them, we experimented with methods of automatically "chunking" the characters into estimates of words; however, we found that ReadMe was highly accurate without this complication.
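The preprocessing steps above might look as follows. The 1%/99% document-frequency cutoffs come from the text; the regular expression and the character-presence (rather than word-based) representation are our assumptions.

```python
import re
from collections import Counter

def character_features(posts, low=0.01, high=0.99):
    """Strip punctuation, then keep only characters whose document
    frequency lies strictly between `low` and `high`, returning one
    set of retained characters per post plus the retained vocabulary."""
    cleaned = [re.sub(r"[\W\d_]+", "", p) for p in posts]
    doc_freq = Counter()
    for p in cleaned:
        doc_freq.update(set(p))            # count documents, not occurrences
    n = len(posts)
    vocab = {c for c, k in doc_freq.items() if low < k / n < high}
    return [set(p) & vocab for p in cleaned], sorted(vocab)
```

Characters present in essentially every post (or almost none) carry no information about category membership, which is why they can be dropped without hurting the estimator.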
And finally, whereas ReadMe returns the proportion of posts in each category, our quantity of interest here is the proportion of posts which are censored in each category. We therefore run ReadMe twice, once for the set of censored posts (which we denote C) and once for the set of uncensored posts (which we denote U). For any one of the mutually exclusive categories, which we denote A, we calculate the proportion censored, P(C|A), via an application of Bayes theorem:

P(C|A) = P(A|C)P(C) / P(A) = P(A|C)P(C) / [P(A|C)P(C) + P(A|U)P(U)].
Quantities P(A|C) and P(A|U) are estimated by ReadMe, whereas P(C) and P(U) are the observed proportions of censored and uncensored posts in the data. Therefore, we can back out P(C|A). We produce confidence intervals for P(C|A) by simulation: we merely plug in simulations for each of the right-hand-side components from their respective posterior distributions. This procedure requires no translation, machine or otherwise. It does not require methods of individual classification, which are not sufficiently accurate for estimating category proportions. The methodology is considered a "computer-assisted" approach because it amplifies the human intelligence used to create the training set rather than relying on the highly error-prone process of requiring humans to assist the computer in deciding which words lead to which meaning.
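The back-out calculation and its simulation-based interval can be sketched like so. The point estimate follows the Bayes-theorem identity exactly; the normal posteriors and standard-error values are hypothetical stand-ins for ReadMe's actual uncertainty estimates.

```python
import numpy as np

def pct_censored_in_category(pA_given_C, pA_given_U, pC,
                             se_C=0.02, se_U=0.02, sims=10_000, seed=0):
    """Return P(C|A) and a 95% simulation interval.

    P(C|A) = P(A|C)P(C) / [P(A|C)P(C) + P(A|U)P(U)], with P(U) = 1 - P(C).
    Uncertainty: draw the two estimated proportions from normal
    approximations to their posteriors and recompute the ratio.
    """
    pU = 1.0 - pC
    point = pA_given_C * pC / (pA_given_C * pC + pA_given_U * pU)
    rng = np.random.default_rng(seed)
    aC = np.clip(rng.normal(pA_given_C, se_C, sims), 1e-6, 1.0)
    aU = np.clip(rng.normal(pA_given_U, se_U, sims), 1e-6, 1.0)
    draws = aC * pC / (aC * pC + aU * pU)
    lo, hi = np.percentile(draws, [2.5, 97.5])
    return point, (lo, hi)
```

For instance, if a category is equally prevalent among censored and uncensored posts, P(C|A) collapses to the overall censorship rate P(C), as it should.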
Finally, we validate this procedure with many analyses like the following, each in a different subset of our data. First, we train native Chinese speakers to code Chinese language blog posts into a given set of categories. For this illustration, we use 1,000 posts about the labor strikes in 2010, and set aside 100 as the training set. The remaining 900 constituted the test set. The categories were (a) facts supporting employers, (b) facts supporting workers, (c) opinions supporting workers, and (d) opinions supporting employers (or irrelevant). The true proportion of posts censored (given vertically) in each of four categories (given horizontally) in the test set is indicated by four black dots in Figure 10. Using the text and categories from the training set and only the text from the test set, we estimate these proportions using our procedure above. The confidence intervals, represented as simulations from our procedure, are also displayed in Figure 10; in each case they cover the true proportions.

APPENDIX C: THE PREDICTIVE CONTENT OF CENSORSHIP BEHAVIOR
If censorship is a measure of government intentions and desires, then it may offer some hints about future state action unavailable through other means. We test this hypothesis here. However, most actions of the Chinese state are easily predictable comments on or responses to exogenous events. The difficult cases are those which are not otherwise predictable; among those hard cases, we focus on the ones associated with events with collective action potential.
We did not design this study or our data collection for predictive purposes, but we can still use it as an indirect test of our hypothesis. We do this via well-known and widely used case-control methodology (King and Zeng 2001). First, we take all real world events with collective action potential and remove those easy to predict as responses to exogenous events. This left two events, neither of which could have been predicted with information in the traditional news media: the 4/3/11 arrest of Ai Weiwei and the 6/25/11 peace agreement with Vietnam regarding disputes in the South China Sea. We analyze these two cases here and provide evidence that we may have been able to predict them from censorship rates. In addition, as we were finalizing this article in early 2012, the Bo Xilai incident shook China-an event widely viewed as "the biggest scandal to rock China's political class for decades" (Branigan 2012) and one which "will continue to haunt the next generation of Chinese leaders" (Economy 2012)-and we happened to still have our monitors running. This meant that we could use this third surprise event as another test of our hypothesis.
Next, we choose how long in advance censorship behavior could be used to predict these (otherwise surprise) events. The time interval must be long enough so that the censors can do their job and so we can detect systematic changes in the percent censored, but not so long as to make the prediction impossible. We choose five days as fitting these constraints, the exact value of which is of course arbitrary but in our data not crucial. Thus we hypothesize that the Chinese leadership took an (otherwise unobserved) decision to act approximately five days in advance and prepared for it by making censorship patterns different from what they would have been otherwise.
In panel (a) of Figure 11, we apply the procedure to the surprise arrest of Ai Weiwei. The vertical axis in this time series plot is the percent of posts censored. The gray area is our five-day prediction interval between the unobserved hypothesized decision to arrest Ai Weiwei and the actual arrest. Nothing in the news media we have been able to find suggested that an arrest was imminent. The solid (blue) line is actual censorship levels and the dashed (red) line is a simple linear prediction based only on data from more than five days before the arrest; extrapolating it linearly five days forward gives an estimate of what would have happened without this hypothesized decision. The vertical difference between the dashed (red) and solid (blue) lines on April 3rd is then our causal estimate; in this case, the predicted level, if no decision had been made, is at about the baseline of approximately 10%, whereas the actual level of censorship is more than twice as high. To confirm that this result was not due to chance, we conducted a permutation test, using all other five-day intervals in the data as placebo tests, and found that the effect in the graph is larger than all the placebo tests.
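The design just described, fitting a linear trend to the pre-decision period, extrapolating it forward, and ranking the resulting gap against placebo windows, can be sketched as follows. This is our illustrative reconstruction: the five-day lead and placebo scheme follow the text, but the code is not the original analysis.

```python
import numpy as np

def decision_effect(pct_censored, event_day, lead=5):
    """Observed censorship on `event_day` minus a linear extrapolation
    fit only to days before the hypothesized decision (event_day - lead)."""
    pre_x = np.arange(event_day - lead)
    pre_y = np.asarray(pct_censored[:event_day - lead])
    slope, intercept = np.polyfit(pre_x, pre_y, 1)
    return pct_censored[event_day] - (slope * event_day + intercept)

def placebo_rank(pct_censored, event_day, lead=5):
    """Share of placebo days whose effect is at least as large as the
    real event's; 0.0 means the real effect exceeds every placebo."""
    real = decision_effect(pct_censored, event_day, lead)
    placebos = [decision_effect(pct_censored, d, lead)
                for d in range(lead + 2, len(pct_censored)) if d != event_day]
    return sum(e >= real for e in placebos) / len(placebos)
```

On a synthetic flat series with one jump, the jump day shows a large effect while every placebo day shows an effect near zero, mirroring the permutation-test logic used for the three cases in Figure 11.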
We repeat the procedure for the South China Sea peace agreement in panel (b) of Figure 11. The discovery of oil in the South China Sea had led to an ongoing conflict between Beijing and Hanoi, during which rates of censorship soared. According to the media, the conflict continued right up until the surprise peace agreement was announced on June 25. Nothing in the media before that date hinted at a resolution. However, rates of censorship unexpectedly plummeted well before that date, clearly presaging the agreement. We again conducted a permutation test and found that the effect in the graph is larger than in all the placebo tests.
Finally, we turn to the Bo Xilai incident. Bo, the son of one of the Eight Elders of the CCP, was thought to be a front-runner for promotion to the Politburo Standing Committee at the CCP's 18th National Congress in the fall of 2012. However, his political rise came to an abrupt end after his top lieutenant, Wang Lijun, sought asylum at the American consulate in Chengdu on February 6, 2012, four days after Wang was demoted by Bo. After Wang revealed Bo's alleged involvement in the homicide of a British national, Bo was removed as Chongqing party chief and suspended from the Politburo. Because of the extraordinary nature of this event in revealing the behaviors and disagreements among the CCP's top leadership, we conducted a special analysis of the otherwise unpredictable event that precipitated the scandal: the demotion of Wang Lijun by Bo Xilai on February 2, 2012. It is thought that Bo demoted Wang when Wang confronted Bo with evidence of his involvement in the death of Neil Heywood.
We thus apply the same methodology to the demotion of Wang Lijun in panel (c) of Figure 11 and again see a large difference between actual and predicted percent censorship before Wang's demotion. Prior to Wang's dismissal, nothing in the media hinted at the demotion that would lead to the spectacular downfall of one of China's rising leaders. And for the third of three cases, a permutation test reveals that the effect in the five days prior to Wang's demotion is larger than in all the placebo tests.
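The permutation test used in all three cases can also be sketched: treat every other eligible day in the series as a placebo "event day," compute the same trend-extrapolation effect there, and report the fraction of placebo effects at least as large as the real one. Again this is an illustrative sketch on synthetic data, assuming the same linear-extrapolation estimator as above; `placebo_test` and the simulated series are our own constructs, not the paper's code.

```python
import numpy as np

def placebo_test(rates, event_day, window=5):
    """Placebo (permutation-style) test for the censorship-jump effect.

    Computes the trend-extrapolation effect at the real event day and
    at every other day with enough pre-period data, then returns the
    observed effect and the fraction of placebo effects >= observed.
    """
    def effect_at(day):
        n_pre = day - window  # fit trend only on data > `window` days before `day`
        slope, intercept = np.polyfit(np.arange(n_pre), rates[:n_pre], 1)
        return rates[day] - (slope * day + intercept)

    observed = effect_at(event_day)
    # Placebo days need at least two pre-period points for the trend fit.
    placebo_days = [d for d in range(window + 2, len(rates)) if d != event_day]
    placebos = [effect_at(d) for d in placebo_days]
    p_value = np.mean([e >= observed for e in placebos])
    return observed, p_value

# Synthetic series with a pronounced ramp-up before an event on day 30.
rng = np.random.default_rng(1)
rates = 0.10 + rng.normal(0, 0.01, 31)
rates[25:] += np.linspace(0.05, 0.30, 6)
observed, p_value = placebo_test(rates, event_day=30)
```

When the real event day carries the largest jump in the series, no placebo effect matches it and the reported `p_value` is small, which is the sense in which the effects in Figure 11 are "larger than all the placebo tests."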
The results in all three cases are consistent with our theory, but we conducted this analysis retrospectively and with only three events, so further research validating the ability of censorship rates to predict events prospectively, in real time, would certainly be valuable.