Access to dangerous knowledge: reflections on 9/11 ten years later
SPARC Open Access Newsletter, issue #161
September 2, 2011
by Peter Suber


Less than a month after the 9/11 attacks ten years ago, the non-partisan, non-profit Project on Government Oversight (POGO) urged the US Department of Energy (DOE) to remove certain information from its website.  "[D]etailed maps and descriptions of all ten nuclear facilities with weapons-grade plutonium and highly-enriched uranium....contain virtual target information for terrorists."  The DOE agreed and took down the information.  Less than two months later, POGO protested that the DOE had removed too much information.  "Communities have a legitimate need and right to have information about what goes on in their neighborhoods."

Three years later, in 2004, the Nuclear Regulatory Commission removed some documents from its web site on the theory that they might be useful to terrorists.  One week later it put them back online.

Sometimes we put security first and decide later that we went too far.  Sometimes we go the other way.  In 1975, Martin Hellman developed a secure encryption algorithm that the National Security Agency feared would help our enemies.  The NSA tried to block its publication, but Hellman fought back and prevailed.  After the 9/11 attacks, however, he regretted his position.

It's hard to find the right balance.  It's hard to know whether we should even aim for balance.  Should we put principle ahead of balance and, if so, which principle?  If it's the principle to protect national security, that would give us one outcome, but if it's to protect the public's right to information, that would give us the opposite outcome.  If we retreat to balance, then who should strike it, under what criteria, and with what oversight? 

The hard question at the center of this thicket is whether we should restrict access to dangerous knowledge.  For now, let's define "dangerous knowledge" as any knowledge which can be put to harmful uses.  For example:  How to weaponize anthrax.  How to mix cement.  Where to buy armor-piercing bullets.  Where to buy fertilizer.  How to find the nearest nuclear processing plant.  How to find the nearest beach. 

The even-numbered examples are not facetious.  Some rare and exotic knowledge could cause harm (how to enrich uranium) and some common and indispensable knowledge could cause harm (how to drive a car).  We can't wish away this complexity in order to make safety an easy problem rather than a hard one.

The case for restricting access to dangerous knowledge is that restrictions could reduce harm and the risk of harm.  The case on the other side is captured by these four propositions:  (1) Essentially all useful knowledge has harmful uses, and hence could be deliberate or collateral damage in a danger-suppression regime; (2) restricting access for potential terrorists also restricts access for citizens, journalists, researchers, inventors, manufacturers, and policy-makers; (3) determined malefactors can generally find ways around access barriers which block ordinary citizens; and (4) protecting ourselves against danger generally requires the use of dangerous knowledge.

I don't want to reargue these two cases here.  But I do want to point out that the strengths of the second case are more evident in a moment of calm reflection than in the blood-boil of panic.  Moreover, we seldom think calmly about this question.  We generally turn our attention to it only when forced by moments of panic.  At those moments, the case for restricting access to dangerous knowledge has two significant advantages:  it's easier to boil down to a slogan, and its connection to safety is direct rather than indirect. 

The US hasn't suffered a major terrorist attack in ten years.  So why bring this up now?  The main reason is that we're one attack away from facing these questions all over again.  The next time we face these questions, we'll improvise a new set of answers.  We won't be calm, and we won't be in a mood to extract lessons from history.  If our experience ten years ago is any guide, our leaders will feel immense pressure to find a set of adequate-looking answers and to appear to be united about them.  The national mood will quickly limit our freedom to debate the answers.  (Dick Cheney:  To question our leaders is to help terrorism.)  Those who want to restrict the conversation will do so in the name of the freedom they want to restrict. 

I bring this up now because we're not distracted by panic.  We will make ourselves both safer and freer if we can think through the arguments pro and con, in a moment of calm, before we'll need answers again, before we'll be distracted by panic again, and before we'll be vulnerable again to a toxic mix of real fear and opportunistic fear-mongering.

To aid this reflection, I'd like to add two observations, one on each side. 

The first is that there really is such a thing as dangerous knowledge.  Some of it has little redeeming social value (how to weaponize anthrax) and some has plenty (how to mix cement); some of it creates risks of severe harm and some of it only creates risks of milder forms of harm.  These distinctions underlie our standing solution to the problem of dangerous knowledge:  making some knowledge classified, and punishing its public release.  The theory is that when the risk of harm outweighs the benefits of public access, classification is justified.  As a general solution, it may itself be justifiable, but whether it is justified in practice depends on details and judgments which are themselves classified. 

The second is that the US has erred needlessly far on the side of safety.  Our response ten years ago has been studied, and we should take note of the conclusions.

"Thomas H. Kean, the chairman of the 9/11 commission, said that three-quarters of the classified material he reviewed for the commission should not have been classified in the first place...."

A 2004 study by the Rand Corporation concluded that the U.S. federal government deleted too much previously-OA information from government web sites in the aftermath of the 9/11 attacks.

A 2004 report by the National Research Council concluded (as I paraphrased it at the time) not merely "that OA to genome data on pathogens is better than its suppression and better than toll access to the same data...[but that] the benefits of OA are worth the risk of smallpox, anthrax, and Ebola hemorrhagic fever -- three pathogens whose genome sequences were already OA at the time the report was published.  Compare that to the claim, regarded as radical in some quarters, that the benefits of OA are worth the risk of decreased journal subscriptions."

A 2007 study by the National Research Council concluded that legitimate security concerns "do not justify the use of extreme measures that could serve to significantly disrupt the openness that has characterized the U.S. scientific and technology enterprises...."


When serious novels like Jurgen, Lady Chatterley's Lover, and Ulysses were suppressed for obscenity in the early 20th century, liberals defended the freedom of speech and opposed the censorship of literature by arguing that a novel never seduced anyone.  It wasn't a bad slogan, for a slogan.  But it had the insidious effect of denying the power of literature.  Literature can change people, and that power is part of the reason that literature is worth reading.  Can we oppose the censorship of literature without denying the power of literature? 

Likewise, knowledge is unexpectedly useful, for beneficial ends, harmful ends, or both, and that power is part of the reason that knowledge is worth acquiring.  Can we oppose access restrictions without denying the power of knowledge to do harm in the hands of a determined malefactor?

If it weren't for considerations like these, we could dodge the hard question by arguing that there's no such thing as dangerous knowledge.  We could try to maintain a firm distinction between knowledge and its uses.  Knowing the genome of smallpox or the mechanism of a fission reaction isn't the same thing as making a biological or nuclear weapon.  Fair enough, but this distinction doesn't stop censors or censorship.  People who want to make it harder to build biological or nuclear weapons could serenely admit that no knowledge is intrinsically dangerous, and limit their attention to knowledge with harmful uses.

The similar reflex argument that "guns don't kill people, people kill people" doesn't take seriously the power of guns.  If it didn't oversimplify, people interested in the free circulation of ideas could use a similar argument for knowledge:  "Knowledge doesn't build weapons of mass destruction, people build weapons of mass destruction."  But it oversimplifies.  Those who want to regulate guns are thinking of their harmful uses in the hands of malicious or careless people, and those who want to restrict access to dangerous knowledge are thinking of its harmful uses in the hands of malicious or careless people.  Defending the innocence of guns, novels, or knowledge, when artificially abstracted from people and their interests, doesn't get us very far.  It doesn't even address the hard question.


If something is sometimes harmful and sometimes not, then we might want to limit its circulation in order to limit the harm it causes.  For the moment put aside the question *whether* we ought to do that with knowledge.  Instead think about *how* we might do it.

When we want to limit the circulation of cigarettes, we ban them for people below a certain age, and we tax them to make them more expensive for everyone else.

Raising taxes on cigarettes does reduce their circulation.  But it does the job selectively and limits circulation much more for the poor than for the rich.  If we wanted to reduce access for everyone equally, or for a different subset of the population, then price tags would do the job badly.

John Adams had a similar objection to the Stamp Act of 1765.  It made nearly every kind of paper more expensive, including the paper used by newspapers, magazines, broadsides, journals, books, and university diplomas.  The effect was to reduce the circulation of ideas to the poor.  No doubt, Adams would also have objected to the equal or uniform reduction in the circulation of ideas.  But he especially objected to this economically stratified way of doing it because he was convinced that the young republic needed a well-educated and well-informed population without regard to social rank.  (I owe this point about John Adams to Lewis Hyde, _Common as Air_, Farrar, Straus and Giroux, 2010, pp. 93-95.)

Apart from copyright, the chief restriction we put on the circulation of knowledge today is price.  Like the Stamp Act and tobacco taxes, journal prices create a much larger access barrier for the poor than for the rich.  If we wanted to reduce access only for dangerous people, and only for knowledge that could be used for dangerous purposes, then price tags would do the job badly.  The majority of people denied access have harmless or beneficial uses in mind, and for the rare terrorists who have harmful uses in mind, price is the least of their concerns.

You might think that no one would seriously argue that using prices to restrict access to knowledge would contribute to a country's national and economic security.  But a vice president of the Association of American Publishers made that argument in 2006.  He "rejected the idea that the government should mandate that taxpayer financed research should be open to the public, saying he could not see how it was in the national interest. 'Remember -- you're talking about free online access to the world,' he said. 'You are talking about making our competitive research available to foreign governments and corporations.' "


If we wanted to keep dangerous knowledge out of the hands of dangerous people while keeping it accessible to everyone else, and if we agree that prices don't do this job, then how could we do it?

That's the right question to ask in a calm moment.  If we could thread this needle, we could reduce harm without harmfully reducing the circulation of information and ideas.  If it would necessarily fail, because keeping terrorists uninformed necessarily keeps citizens uninformed, then we'd have to reconsider.

If we're willing to restrict knowledge for good people in order to restrict knowledge for bad people, at least when the risks of harm are sufficiently high, then we already have a classification system to do this, and the question is whether we run it properly.  On that question, we should reflect calmly on the relevant studies done after 9/11.

Meantime, let's take care not to mistake the OA connection.  First, even the Federal Research Public Access Act (FRPAA), the strongest and widest OA legislation ever proposed in any country, would exempt classified research from its OA mandate. 

Second, if we decide to make a serious effort to restrict access to dangerous knowledge, then we should not allow it to be published even in the conventional (TA) sense.  The 9/11 attackers were well-funded and could easily afford to read pay-per-view research.  Allowing TA publication but stopping short of OA would not solve the security or dangerous-knowledge problem.  In this sense, the dangerous-knowledge problem is not a narrow OA problem.  It's a wider publishing or free-expression problem.  Open challenge:  name any kind of knowledge that would be safe to publish in a TA journal but unsafe to make OA.

We can be sensitive to danger and still want OA for all publishable knowledge, just as we can be sensitive to danger and think that the United States overreacted badly after 9/11 in restricting access to knowledge.  But will we remember this when we need to?

* Postscript.  Here are the earlier installments in this series. 

Reflections on 9/11 [two weeks later] (2001)

Reflections on 9/11, one year later (2002)

Reflections on 9/11, three years later (2004)

Reflections on 9/11, four years later (2005)

(I didn't write one in 2003 or in 2006-2010).


Read this issue online

SOAN is published and sponsored by the Scholarly Publishing and Academic Resources Coalition (SPARC).

Additional support is provided by Data Conversion Laboratory (DCL), experts in converting research documents to XML.


This is the SPARC Open Access Newsletter (ISSN 1546-7821), written by Peter Suber and published by SPARC.  The views I express in this newsletter are my own and do not necessarily reflect those of SPARC or other sponsors.

To unsubscribe, send any message from the subscribed address to <>.

Please feel free to forward any issue of the newsletter to interested colleagues.  If you're reading a forwarded copy, you can subscribe by sending any message to <>.

SPARC home page for the Open Access Newsletter and Open Access Forum

SPARC Open Access Newsletter, archived back issues

Open Access Overview

Open Access Tracking Project

Open Access News blog

Peter Suber

SOAN is licensed under a Creative Commons Attribution 3.0 United States License.

Return to the Newsletter archive