Category Archives: Secrecy

Did the government get it right with Petraeus?

Commentators from within the intelligence community (IC) and critics of the surveillant state alike have been unusually aligned in expressing shock that General David Petraeus was given only a slap-on-the-wrist plea deal, considering the classified secrets he leaked. Writing in the Daily Beast, Justin Miller and Nancy Youssef provide previously unknown details on what Petraeus gave to his mistress and biographer, Paula Broadwell:

While he was commander of coalition forces in Afghanistan, Petraeus “maintained bound, five-by-eight inch notebooks that contained his daily schedule and classified and unclassified notes he took during official meetings, conferences and briefings,” the U.S. Attorney’s Office for the Western District of North Carolina writes in a statement of fact regarding the case.

All eight books “collectively contained classified information regarding the identities of covert officers, war strategy, intelligence capabilities and mechanisms, diplomatic discussions, quotes and deliberative discussions from high-level National Security Council meetings… and discussions with the president of the United States.”

That’s about as definitive a list of precious “sources and methods” as you could wish to see enumerated. It’s not clear how much of this was classified (not all discussions with the President are classified), but by Petraeus’ own admission it included codeword-level material, or TS//SCI (e.g., TK, SI, etc.), usually described as “above top secret.” The codeword TK, for example, refers to Talent/Keyhole, or spy satellite imagery, which is so secret we’re not even allowed to know the capabilities of the cameras that take the pictures.

Marcy Wheeler, a well-known expert in surveillance issues, didn’t hold back:

As a supine Congress sitting inside a scaffolded dome applauded Benjamin Netanyahu calling to reject a peace deal with Iran, DOJ quietly announced it had reached a plea deal with former CIA Director David Petraeus for leaking Top Secret/Secure Compartmented Information materials to his mistress, Paula Broadwell.

Not only did he affirmatively give these materials to non-cleared personnel, but he kept them in an unlocked drawer and a rucksack in his home. Petraeus also lied to the FBI about possessing classified material. Pretty bad opsec. The Chinese and the North Koreans could have saved a whole bunch of money just by hiring a couple of guys to break into his house. (Of course, this is the guy who shared his Gmail login with Broadwell and left her messages in a drafts folder so they supposedly wouldn’t be sent over a network, a dodgy practice familiar to both teenagers and terrorists.)

As numerous people have pointed out, this could be pretty bad for morale in the IC because of the premium placed on IC members to protect secrets. Naval War College professor John Schindler, who has labeled Edward Snowden a traitor, was left to tweet out examples of men who had died to protect classified intel:

(“Norks” is a pejorative slang term for North Korean.)

Just as critically, others have pointed to the unequal treatment meted out to those who either leaked far less sensitive and classified information, or who were also charged with lying to the FBI. These include Barrett Brown, John Kiriakou, and Jeffrey Sterling, who faces up to 20 years in prison for allegedly leaking details of a busted CIA operation to the author and journalist James Risen. (See Risen’s book State of War for details of Operation Merlin.)

Given the inequality in these sentences, that is, the lack of justice, we might well join in the widespread condemnation not just of the sentence Petraeus received but of the man himself. Usually reliable IC defenders have been conspicuously silent. (Overt Action, a blog run by former IC personnel, has not even mentioned the case except to denounce a February leak that the FBI was recommending his indictment, and this despite running a frequent overview of IC matters called the Week in Intelligence, the latest of which appeared yesterday.) The Republican-controlled Senate and House intelligence committees have not defended Petraeus, despite the fact that he was once touted as a Republican presidential candidate (the American Spectator compared him to Eisenhower). The silence, as they say, is deafening.

Yet there is a case to be made that it is not Petraeus who received the injustice, but rather Sterling, Brown, Kiriakou, and even Risen himself (who was subpoenaed to reveal his source, presumed to be Sterling, before the effort against him was dropped). The case rests on two assertions: first, that there is a massive culture of over-classification; and second, that it is very hard to prove that harm in general is directly caused by leaks. (Specific leaks, such as the fact that Kim Philby was a Soviet double agent with access to the Venona project and told Moscow about it, are perhaps easier, though even then you are often working with hypotheticals: the Soviets might have stopped re-using one-time pads anyway.)

Over-classification was an issue even prior to 9/11, when then-Senator Patrick Moynihan wrote his classic book Secrecy. (Moynihan was, ironically, one of the prime movers behind the declassification of the Venona project.) Steven Aftergood’s Secrecy News blog at the Federation of American Scientists is dedicated to documenting the myriad ways in which over-classification is rampant.

There are of course secrets worth protecting (how to build a nuclear trigger is one that comes to mind). But Petraeus did not give classified material to a hostile or foreign agent; no harm to national security has been cited, even in the official statement of facts (according to the plea deal itself, Broadwell published none of the information); and the documents were not formally classified (they were his own notes). Perhaps there is a case to be made for a sensible reaction to leaks, especially when they take the form of whistleblowing, rather than automatically reaching for the Espionage Act. Perhaps the government got it right? This case is a kind of test of what harm we think occurs when there is disclosure (unauthorized or not) of classified material. It is of course not an easy case of all protection being good or bad, or all disclosure being good or bad, but of what the reaction to disclosure should be, and on what grounds. The reaction and punishment here may offer better choices.

I do not write this to defend Petraeus; I think he still got off too lightly. He held the TS//SCI material in a non-secure location outside government premises, rather than in his residence’s SCIF as required, and even delivered it into the possession of his biographer at one point, reintroducing the burglar scenario. (Compare the recent revelation, or really the more widespread realization, since this was already known, that Hillary Clinton used a private email address while Secretary of State, and ponder what vulnerabilities that possibly introduces, not least immunity from FOIA and transparency. That seems a bigger case to me.)

Ironically, defendants whose cases follow similar facts can now cite this one to argue for reduced sentences (or try to; since it was a plea deal that never went to court, it may provide something less than a true legal precedent). This is something the government may come to regret, although perhaps we may see it as a better approach than blanket secrecy.

Late addition: after I had drafted this, but just before I published it, I read this piece by Eli Lake, which makes some of the same points.

Steven Aftergood reviews Black Box Society in Nature

Review of Frank Pasquale’s The Black Box Society by Steven Aftergood (who runs the Secrecy News blog) in Nature.

Everyone who uses the Internet for entertainment, education, news or commerce is implicated in a web of data collection whose breadth surpasses ordinary awareness.

Last May, a US Senate investigation reported that a single visit to a popular tabloid-news website triggered activity on more than 350 other web servers. Most of those contacts, including delivery of advertisements, are likely to be benign. But they typically deposit a software ‘cookie’ on the visitor’s computer; these enable the identification and tracking of visitors, generating digital profiles of their interests and patterns of online behaviour.

Continues here.

Crypto-geographies and the Internet of Things

Secret codes have long fascinated people. According to Secret History, a new history of cryptology by Craig Bauer, who was Scholar-in-Residence at the NSA Center for Cryptologic History in 2011-12, cryptography predates the Greeks. Many early ciphers were relatively simple by today’s standards, involving either transposition or substitution (respectively, systems where the letters are moved but not replaced, and systems where the letters are replaced, e.g., A is replaced by Z).
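The difference between the two families is easy to see in a few lines of Python. These are toy illustrations (the key and message are invented), not the historical systems themselves, and both are trivially breakable by frequency analysis:

```python
def substitution_encrypt(plaintext: str, key: str) -> str:
    """Substitution: replace each letter with its counterpart in a shuffled alphabet."""
    alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    table = str.maketrans(alphabet, key)
    return plaintext.upper().translate(table)

def transposition_encrypt(plaintext: str, num_cols: int) -> str:
    """Transposition: move letters without replacing them (write in rows, read in columns)."""
    text = plaintext.replace(" ", "").upper()
    return "".join(text[i::num_cols] for i in range(num_cols))

# A reversed-alphabet key (A->Z, B->Y, ...), matching the A-is-replaced-by-Z example:
reversed_key = "ZYXWVUTSRQPONMLKJIHGFEDCBA"
print(substitution_encrypt("ATTACK AT DAWN", reversed_key))  # ZGGZXP ZG WZDM
print(transposition_encrypt("ATTACK AT DAWN", 4))            # ACDTKATAWATN
```

Note that the substitution version preserves word lengths and letter frequencies, while the transposition version preserves the letters themselves; both properties are exactly what classical cryptanalysis exploits.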

The now fairly well-known Enigma machine, deciphered by British scientists at Bletchley Park (and the subject of many books and a couple of movies), was a German ciphering system used by the Nazi regime during WWII. Less well-known (but undeservedly so) are the decryptions by the NSA and its predecessor (the US Army Signals Intelligence Service, located at Arlington Hall, a former girls’ school in Virginia) of the so-called Venona traffic. Venona refers to the project to decrypt Soviet diplomatic communications with its agents in the USA and elsewhere. These encrypted messages often referred to codenames of American spies working for the Soviets during the war. With the help of FBI investigations, the US government was able to identify many of these people based on the partial decryptions. According to the NSA and most (but not all) historians, they included Julius and Ethel Rosenberg, Klaus Fuchs, and several serving OSS personnel.

The Soviets were tipped off to the fact that the US was decrypting their messages (probably by Kim Philby, the British spy who was posted to the US for a time), and stopped re-using their one-time encryption pads. Nevertheless the project to decrypt the messages continued until the early 1980s, eventually yielding about 2,900 partially decrypted messages. They remained a closely guarded secret long after their operational worth had dwindled, and it was only with the publication in 1987 of Spycatcher, by Peter Wright, a former British intelligence officer, that the project was referred to by its codename in public. (Publication of Spycatcher was embargoed by Margaret Thatcher’s government in the UK, but Wright succeeded in publishing it in Australia anyway.)
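The weakness that made Venona possible, pad re-use, is easy to demonstrate. A one-time pad is unbreakable only if each key is used once: XOR two ciphertexts encrypted under the same pad and the pad cancels out, leaving the XOR of the two plaintexts, from which known text in one message reveals the other. A sketch in Python (the messages are invented, and the actual Soviet system used additive numeric code groups rather than XOR, but the principle is the same):

```python
import secrets

def otp_xor(message: bytes, pad: bytes) -> bytes:
    """One-time pad: XOR each message byte with the corresponding pad byte."""
    assert len(pad) >= len(message), "pad must be at least as long as the message"
    return bytes(m ^ p for m, p in zip(message, pad))

pad = secrets.token_bytes(32)                    # a truly random pad
c1 = otp_xor(b"MEET AGENT AT NOON", pad)         # first use: information-theoretically secure
c2 = otp_xor(b"FUNDS SENT BY WIRE", pad)         # re-use of the same pad: fatal

# The pad drops out: c1 XOR c2 == plaintext1 XOR plaintext2, no key material left.
leaked = bytes(a ^ b for a, b in zip(c1, c2))
assert leaked == otp_xor(b"MEET AGENT AT NOON", b"FUNDS SENT BY WIRE")

# Guessing a "crib" (probable word) in one message directly reveals the other:
crib = b"FUNDS"
recovered = bytes(l ^ k for l, k in zip(leaked, crib))
print(recovered)  # b'MEET ' -- the first five bytes of the other message
```

The Venona cryptanalysts did essentially this, at scale and with far messier material, against duplicated pad pages.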

Some terms: “cryptography” is the science (and art) of creating ciphers; “cryptanalysis” is the effort of deciphering them without the key; “cryptology” encompasses both, including the assessment of a cipher’s security, the comparison of ciphers, and so on. The words derive from the Greek kryptos (κρυπτός), meaning hidden or secret.

Is there such a thing as cryptologic geographies? If not, could there be, and of what would it consist? In other words, are there (non-trivial) geographies of encryption? Here are some ideas.

One of my earliest ideas along these lines was a geography of https, the secure version of web browsing (now coming into vogue but still highly variable in adoption). The New York Times recently laid down a challenge to make https the default by the end of 2015 if other media companies would do the same. This is non-trivial: where traffic remains unencrypted, it exposes weaknesses in the internet that could be exploited, and emails and other communications sent in unencrypted form are easier for governments to intercept and monitor.

And this is not just to do with messages you write, but also with other parts of the personal datastream, for example, your location. What if you could record but encrypt your geolocation, taking advantage of services offered by apps (e.g., Google Maps) in such a way that the data could not be intercepted, decrypted, and exploited by third parties (including the government)? Would this mean that the web and internet would “go dark,” as officials warn? And would criminals and terrorists be afforded protection in those dark spaces? That was certainly the message of the Attorney General and the FBI Director a few days ago in response to plans by Apple and Google to implement better encryption. AG Holder:

said quick access to phone data can help law enforcement officers find and protect victims, such as those targeted by kidnappers and sexual predators.

Justice Department officials said Holder is merely asking for cooperation from the companies at this time.

And how universal would this advantage to users, potential criminals and law enforcement be? And would those places where one of these had an advantage necessarily overlap with the others? That is, what would be the differential access to encryption from place to place or group to group–a digital divide of encryption?
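The hypothetical above, encrypting location data on the device before it is ever transmitted, can be sketched in a few lines. This is a toy construction for illustration only (a SHA-256 counter-mode keystream with an HMAC integrity tag, built from the Python standard library); a real application would use a vetted authenticated cipher such as AES-GCM, and the key handling and coordinates here are hypothetical:

```python
import hashlib, hmac, json, secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudorandom keystream by hashing key+nonce+counter (toy sketch)."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_location(key: bytes, lat: float, lon: float) -> dict:
    """Encrypt a coordinate pair on the device, before it leaves for any server."""
    plaintext = json.dumps({"lat": lat, "lon": lon}).encode()
    nonce = secrets.token_bytes(16)  # fresh per message, so keystreams never repeat
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).hexdigest()  # detects tampering
    return {"nonce": nonce.hex(), "ct": ct.hex(), "tag": tag}

def decrypt_location(key: bytes, blob: dict) -> dict:
    nonce, ct = bytes.fromhex(blob["nonce"]), bytes.fromhex(blob["ct"])
    expected = hmac.new(key, nonce + ct, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(blob["tag"], expected):
        raise ValueError("ciphertext has been tampered with")
    pt = bytes(c ^ k for c, k in zip(ct, keystream(key, nonce, len(ct))))
    return json.loads(pt)

key = secrets.token_bytes(32)  # shared only with parties the user trusts
blob = encrypt_location(key, 38.0297, -84.5037)  # hypothetical coordinates
assert decrypt_location(key, blob) == {"lat": 38.0297, "lon": -84.5037}
```

The point of the sketch is that an intercepting third party sees only the nonce, ciphertext, and tag; without the key, the coordinates are unrecoverable, which is precisely the "going dark" scenario officials object to.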

Is there a political economy of encryption? Who are the companies and individuals working on encryption in the commercial sector? To what extent is there movement between the private and public sectors of both cryptologic expertise and personnel? Further, to what extent is there better cryptography in the government and intelligence community than in the commercial sector? What are the implications of allowing backdoors to encryption algorithms that can “only” be broken by the government but not by third parties? (I’m thinking here of the well-known proposal in the 1990s for the “Clipper Chip,” which allowed just such a backdoor for the NSA but was met with such opposition that it was not implemented.) Is such a backdoor safe from third-party hacking, and if so, for how long? (And what is an acceptable definition of “safe” here?) A geographical analysis of these questions would imply some access to where and who has installed the systems in question, which might be provided by basic research efforts such as those carried out at the Oxford Internet Institute by Mark Graham and his colleagues.

Do other computer systems have vulnerabilities, that is, ones without designed-in backdoors? If so, where are they? When it comes to exploits and vulnerabilities, what are the implications of announcing them vs. hoarding them (e.g., so-called zero-day exploits)? Is there differential access to knowledge about exploits and vulnerabilities? Where? Again, who makes money off this? What is the crypto value chain?

Speaking of hacking: there is a huge array of secret attempts (and thus crypto-, if not cryptologic) to break into, disrupt, or exploit systems (and an equally expansive range of countermeasures). The Department of Defense has estimated there may be up to 10 million hacking attacks per day. Most of these are probably automated scans, according to Adam Segal, a cybersecurity expert at the Council on Foreign Relations.

What systems are vulnerable to these exploits, and what exploits are being carried out? Here we could examine everything from mundane events such as DDoS attacks, where antagonists attempt to bring down a web server to deny its proper function, to more exotic events such as the US/Israeli Stuxnet virus meant to disrupt Iranian nuclear programs (but which had effects well beyond Iran once the virus was in the wild). (For more on this virus/worm, see the Stuxnet Dossier [pdf] compiled by Symantec.)

We often hear in the news that certain countries (Russia, China) are more responsible for intrusions and exploits than others, but I’m not aware of any detailed work on this sort of cryptogeography. The recent JPMorgan breach affected more than 83 million US households (who? why?), according to the NYT, and actually included another nine banks not previously reported. The NYT also said the attack was carried out by hackers having “at least loose connections with officials of the Russian government.” But that is a very imprecise and sketchy account. Just recently, a new poll showed bipartisan low levels of confidence among Americans in the “government’s ability to protect their personal safety and economic security.” Here government is arguably failing at its job of providing security. Ferguson and domestic homicides were mentioned specifically in the AP story. Do people feel threatened by the JPMorgan hack, the Target breach, and the others?

There is surely a whole economy of knock-on effects that results from this; so again, we can speculate about a political economy of cryptogeographies.

What would a better map of hacking attempts look like? Security companies and telcos track these data, as for example in this map created by Norse which describes itself as “a global leader in live attack intelligence.” Who is this company? How do they earn their money? More importantly, what is the nature of this market sector more generally?

[Image: Norse live attack map. Click for live version.]

The above map, however, is to a large extent a misrepresentation, because it shows only attacks on Norse’s honeypots, not the entirety of the internet, or even the entirety of a particular region or network.

A similar visualization, again covering the globe by country, is offered by Kaspersky Labs.

[Image: Kaspersky cyberthreat map. Click for live version.]

These are not per se all that analytically valuable, although they are visually striking (if somewhat derivative).

What do these attacks do, and to whom do they do it? It would be interesting to do a geopolitical analysis of the Stuxnet worm here, which has received a fair amount of coverage. Stuxnet would make an interesting case study, although it remains to be seen how representative it is (being created by state actors against the nuclear capabilities of another state). As stated above, most attacks are undirected and opportunistic. A Congressional Research Service (CRS) report on Stuxnet examined the national security implications of the attack, and of course there is a long history of the study of cyberattacks and cyberwarfare going back several decades. But I’m not aware that geographers have contributed to this literature in a geopolitical sense.

For some, these concerns are especially paramount in the context of smart cities, big data, and automated (“smart”) controls, including the so-called smart grid and the Internet of Things (IoT). Take utilities and smart meters, for instance. There are at least two concerns: first, that hackers could access smart controls and take command of critical infrastructure; and second, that data held in smart meters may be legally accessible to the government under surveillance laws. Another CRS report, in 2012, warned that current legislation “would appear to permit law enforcement to access smart meter data for investigative purposes under procedures provided in the SCA, ECPA, and the Foreign Intelligence Surveillance Act (FISA)”. Although we hear a lot about surveillance of phone and internet communications, there is as yet much less on surveillance of other big data sources. Luckily I have a paper coming out on that topic, but needless to say much more needs to be done.

Cryptologic geographies would appear to be a fertile field for investigation. Broadly conceived to include geopolitical implications, big data, regulation and policy, governance, security, the Internet of Things, cybergeographies, and justice, there is a need for work here that both clarifies our understanding and intervenes in policy and political debate. Certainly other scholars are already doing so (e.g., the Internet Governance Project paper on whether cyberwarfare is a new Cold War, pdf).

The mass of connected computer systems and devices known as the Internet of Things will surely only intensify issues of security, encryption and governance. The crypto-geographies of these are highly important to sort through. This post is an attempt to highlight what issues are at stake and to provide some initial ideas.

cfp: AAG Tampa 2014: “What Space for the Post-Security State?”

AAG 2014 CFP

 “What Space for the Post-Security State?”

 Tampa, Florida, 8-12 April 2014

 Session organizers: Jeremy Crampton (University of Kentucky), Klaus Dodds and Peter Adey (Royal Holloway, University of London)

 Session sponsored by the Political Geography Specialty Group

This session takes up recent challenges to the logics of security (Neocleous, Vine, the CASE Collective), and seeks papers that open up new ways of thinking about security through critiques, oppositions, limits, resistances, or different kinds of security altogether (e.g. alter-security).

The goal is to collectively sketch the contours of a possible “post-security” state in which security’s costs as well as its benefits are more critically understood. Where today’s security is usually positioned as “more is better” and “safer rather than sorry”, our goal is not necessarily to reject security, but rather to identify a range of different interventions, critiques (perhaps “affirmative”; McCormack, 2012), and alternatives that might think with security in productive ways or, indeed, new ways.

Our agenda is to seek positions that are not always outside or external to the security apparatus, or so unaware of their location that the where of security is lost. We seek perspectives that unsettle the relationship between security and the state, such as its (potentially ever greater) privately administered projects and outsourcing. What manners of security are possible that might be creative hybrids of the state-private-communal spectrum? Can we identify alternative propositions to the pernicious investment of what Paul Amar has called the “human-security state” (Amar 2013), legitimized by appropriating a more progressive religious, gender, class and sexual politics?

Examples of possible paper topics include:

–ways in which the national security state is itself inherently insecure as evidenced through “moles,” spies, whistleblowing and “insider threats” such as Manning and Snowden;

–the environmental costs of security installations;

–the economic costs of security;

–military resource extraction;

–properties of violence (Correia, 2013);

–military landscapes;

–geographies of “baseworld”;

–borderland securitization struggles;

–the admixtures of race, gender and rural-urban relations in modern incarceration regimes;

–health impacts of security including an estimated half million Americans with PTSD;

–“big data” and surveillance;

–histories of the security and surveillant state;

–private security and security outsourcing (security beyond the state);

–the sustainability of current practices of security or vulnerability and resilience to security.

–new languages or grammars of security and post-security.

We seek papers that will address any of these or other related topics we have not listed. If in doubt, please contact us!

Our session deliberately seeks to continue and deepen interdisciplinary exchanges, and we welcome contributions from geography, political science, economics; sociology, environmental science, international relations, political sociology, psychology, computer science, the creative arts, and history.

If you are interested in participating, please submit an abstract of no more than 250 words to Jeremy Crampton (jcrampton@uky.edu). The conference discounted registration ends on October 23, 2013. For more information please see http://www.aag.org/cs/annualmeeting.

Still “in denial” ten years later?

This post discusses the Haynes/Klehr book In Denial, before moving on to consider Anne Godlewska’s charge that Neil Smith downplayed Stalin’s murderous regime.


Ten years ago John Earl Haynes (a historian at the Library of Congress) and Harvey Klehr (a political scientist at Emory University) wrote a provocative book called In Denial: Historians, Communism and Espionage. Their main thesis was deliberately meant to be troubling to a strand of historians they dub “the revisionists” in contrast to their own position as “traditionalists.”

Haynes and Klehr accuse the revisionists of denying historical facts, because of a reluctance, driven by ideology, to properly assess the negative and damaging consequences of communism, especially communism in the United States. In other words American leftists can’t accept the failures of communism, and that some figures on the left were active spies for the Soviet Union during WWII and the Cold War.

The evidence is now in, they say, to show conclusively that figures such as Alger Hiss, Julius and Ethel Rosenberg, Harry Dexter White, Maurice Halperin, Lauchlin Currie, and I.F. Stone were Soviet agents. This is in addition to confessed agents such as Whittaker Chambers and Elizabeth Bentley, the latter of whom named some 80 people in the US as Soviet agents, some of them in government. Others were also accused but were either not convicted or there has never been any evidence of their spying; Owen Lattimore, one of the so-called “China hands,” is a case in point. (Bentley does not mention him in her list of accused agents, and when questioned by HUAC she had little or no information about him.)

The Institute of Pacific Relations and Amerasia occupy a middle ground; some of their employees or members were arrested. In the latter case, since the OSS had broken into their offices Watergate-style, without a warrant, the cases did not come to trial.

Lattimore is particularly fascinating given his importance in the McCarthy hearings (he was famously labeled the top Soviet spy in America by McCarthy at one point). But his case also shows the “small world” of scholar-intelligencers at the time. Lattimore was hired at Johns Hopkins in 1938 by Isaiah Bowman; when the HUAC hearings were taking place in the early 1950s, the subsequent department head at Hopkins, George F. Carter, privately opposed Lattimore and even “ran to McCarthy with [a] story about Lattimore declassifying secret documents in 1950” (Robert Newman, 1992, Owen Lattimore and the ‘Loss’ of China, p. 411). Carter was a former OSS officer who worked on the Far Eastern desk in R&A (with Chauncy Harris; the section was headed by John Appleton). Newman notes that “Carter became a pariah at the Johns Hopkins campus and in the 1950s moved to Texas” (Newman 1992, p. 135), this latter comment attributed to Abel Wolman, the father of geographer “Reds” Wolman, who was also at JHU.

If the accusations against Lattimore fizzled out, and certainly never lived up to McCarthy’s expectations (he’d said he was prepared to stand or fall on this one case alone; there are also no records of Lattimore in Venona or the Vassiliev files), it is still the case that Lattimore and many of the others named here had their lives and careers profoundly affected. (I’m currently focusing on Maurice Halperin, but wish to return to Lattimore and Carter before too long.)

For quite some time the evidence they allude to was sealed in various archives or held secretly by the NSA, but beginning in the 1990s, and in many cases due to the work of Haynes and Klehr themselves, fresh evidence became available to the public and to scholars. This includes the Venona cables, nearly 3,000 partially and fully deciphered Soviet messages sent during the war to agents in the US, now declassified by the NSA. Also during the 1990s, for a brief time, the Soviet Union’s KGB archives were opened, as were the archives of the Comintern (they have since been closed again). Finally, in the late 1990s, the personnel and Security Office documents of the OSS were released to the National Archives. Since some of the people identified above were at the OSS (e.g., Halperin; see my previous post on Soviet agents in the OSS), these records may contain internal OSS reports on its own staff.

So how does Haynes and Klehr’s argument stand ten years later? Reading the book recently for the first time was a rather dislocating experience. Although it carries a 2003 copyright, the book could well have been written in 1993. Many of the cases it dwells on hail from a previous generation of scholarship. John Lowenthal, for example, a Hiss defender (and brother of the geographer-historian David Lowenthal), died in 2003 at the age of 78. Another of their favorite targets, Ellen Schrecker, is, according to her Wikipedia page, now 75 (her last book, on the corporatization of academia, was published in 2010).

It would be interesting to see how historians of communism, the Soviet Union, and the Cold War regard these cases today. In some cases, it’s a matter of emphasis and of understanding the full historical context in which these espionage activities took place. The facts themselves are not necessarily sufficient; they must be interpreted and given meaning (even if both sides are operating from the same facts, which is not always the case). For example, does one understand what was going on, and therefore emphasize it, as anti-Fascism or as Soviet infiltration and control? If Hiss and White are more settled, there are always others who might still be contentious: I.F. Stone is perhaps the best example of this. Spy or not a spy?

There is also a distinction between seeing oneself as working for the general aims of communism and as working for the Soviets (even if, in some way, at the expense of one’s country). Is it possible or desirable to separate the two like this? This is a central issue in these matters, and was felt even by those directly involved. See this cable on Elizabeth Bentley, for example, which describes her conflicting attitudes about her activities and about whom she was working for (cipher cable to the KGB from the US):

Mer [Iskhak Akhmerov, illegal KGB officer in USA] re Clever Girl [Bentley cover name] 15.06.44

In her work and conversations she usually behaves like our operative, in her comments she says “we,” implying our organization and including herself in this concept. I’ve written you that since my first meeting with her she has known perfectly well that she’s working for us. As a rule, she willingly carries out my instructions and reports everything to me about our people. Her behavior changes, however, when I ask her to arrange a meeting for me with “Pal” [Nathan Gregory Silvermaster, leader of a major spy ring] or to get any of the probationers [Soviet agents in USA] in contact with our operative. She becomes a completely different person and, apparently restraining herself, declares that she isn’t our operative, that she works for “Helmsman” [Earl Browder, head of CPUSA].

She tends to distinguish between us [USSR] and the fellowcountrymen [CPUSA] and bitterly notes that we only have a professional interest in certain issues. She says that we all care little about Americans, that the USSR is the only country we love and for which we work. I tried to explain to her that she is wrong, that both I and our other operatives think the same way as “Pal,” “Raid” [Victor Perlo, leader of another major spy ring] and the others, that by helping the USSR, we are working out of deeply held ideological motives and we don’t stop being Americans. I told her that “Pal,” “Raid” and the others who are consciously helping us love America just as before, and that she must understand that we are doing important work for our cause.
(Source: Vassiliev White Notebook #2, pp. 5-6, emphasis added)

Now, one would be justified in seeing Bentley’s attitude as naiveté. To separate out Soviet interests, regardless of your intent, is to not see the full picture. This is also the basis of accusations against Edward Snowden (variously called a traitor and defector).

In geography there is one similar case I can think of, and it’s worth recalling here. At the “Author meets the critics” session at the AAG on Neil Smith’s book American Empire, Anne Godlewska made some fairly critical remarks on Smith’s failure to account for Stalinism’s murderous history; she accused Smith of making a “historical misrepresentation” (see Godlewska, Pol. Geog. 2005, p. 260). She went on to note that “most historians have argued for over 25 million Russians killed” by Stalin and remarked that Neil suffered from a “peculiar blindness vis-a-vis the Soviet empire” (p. 261).

At the session itself (which was well-attended so perhaps others can check my memory) Smith appeared completely taken aback by these comments, and said to his former co-author “we need to talk about this over a drink!” In his published response Smith said:

As someone whose vision of the Soviet Union is influenced by the political critique of one of Stalin’s victims, Leon Trotsky, it is disappointing–actually preposterous–to be so misread as to be dubbed an apologist for Stalinism.

But there is a further aspect to this. I have to say that this was not a book about Stalin, actually, but a book about Isaiah Bowman and the American empire.

This exchange encapsulates the difficulties of studying these issues and of making charges of “denialism.” Was Smith “in denial” or was his book about other matters? Were Bentley and her fellow Soviet agents in denial? Is Snowden in denial? Were leftists during the second half of the 20th century in denial? To pose the question is to see its limitations.

Bentley and others in her circle knew full well they were working for the Soviets. Snowden’s case is more complicated but I think he cannot be seen as a traitor or Soviet agent. Smith’s case is also ambiguous but he does not deserve to be seen as a denialist or apologist (and Godlewska does not use these terms). As he noted:

On the question of how many people died in the USSR between 1929 and 1945, then, I nonetheless stand corrected. Recent scholarship does place the figures in the millions although of course there is massive ideologically driven disagreement about the actual figures.

It is this ideology which is driving both Smith, in his downplaying of the Soviet role, and Haynes and Klehr, in their over-emphasis on the same thing. In a way, both are guilty of still fighting the Cold War (both books were published in the same year), whereas with ten years’ hindsight we can more clearly see the need for a complexified and contextualized understanding of motives, political contexts (including political persecution and abuse of power), and historical tensions.

The use by Haynes and Klehr of the terms “revisionist” and “traditionalist” in fact betrays their own partisan position, even as they would have us believe that they are reporting objectively. A revisionist is someone who wants to revise how history occurred (as with revisionists and denialists of the Holocaust, a parallel they themselves use). One chapter is in fact called “revising history,” implying that history happens and then the revisionists come along and want to revise what actually happened. A “traditionalist,” on the other hand, is someone who wants to hew to the way the historical chips actually fell. All three terms (denial, revisionist, and traditionalist) are misleading here, introduced, whether deliberately or inadvertently, in a way that diverts us from a complexified and properly contextualized understanding.

Thanks for reading this far! Obviously I’m just starting work on these events and issues, but there will hopefully be something on scholar-intelligencers or whatever you would like to call them, that will emerge from this.

A history of the secret

What is the secret, and what is its relation to privacy? Some musings, on reading Clare Birchall’s 2011 paper “Transparency, Interrupted” in Theory, Culture & Society.

Etymology:

secret (n.) late 14c., from Latin secretus “set apart, withdrawn, hidden,” past participle of secernere “to set apart,” from se- “without, apart,” properly “on one’s own” (from PIE *sed-, from root *s(w)e-; see idiom) + cernere “separate” (see crisis). As an adjective from c.1400. Secret agent first recorded 1715; secret service is from 1737; secret weapon is from 1936.

Of course linked to this is the word private:

private (adj.) late 14c., “pertaining or belonging to oneself, not shared, individual; not open to the public;” of a religious rule, “not shared by Christians generally, distinctive,” from Latin privatus “set apart, belonging to oneself (not to the state), peculiar, personal,” used in contrast to publicus, communis; past participle of privare “to separate, deprive,” from privus “one’s own, individual,” from PIE *prei-wo-, from PIE *prai-, *prei-, from root *per- (1) “forward, through” (see per). Old English in this sense had syndrig. Private grew popular 17c. as an alternative to common (adj.), which had overtones of condescension. Of persons, “not holding public office,” recorded from early 15c. In private “privily” is from 1580s.

To “set apart” and to make private (that is, not available to the public). There are two components: to make private, and an act of separation. Private means unto oneself, “one’s own” (cf. idiom, a dialect spoken locally or in a small area); the separation is an act of setting apart or segregating.

(A sense of this remains in officialdom-speak; the word “segregable” is sometimes used in relation to records which can be separated out and declassified from a larger collection of classified records.)

Yet notice also the sense of loss that this entails: to deprive (privation), or to take away. Presumably what is taken away is the sense and benefit of belonging, of not being alone, an “idiot” (Gk. idiotes, private person, especially one without skill, or professional knowledge, a layman).

It is noticeable then that privacy retains more than a bit of the idea of keeping secret. While the classic definition, from Warren and Brandeis’s famous article, is “the right to be let alone,” here we see also the sense of being apart, of not generally being available to everybody (especially the state, no doubt).

This allows us then to examine a tentative opposite of the secret, i.e., transparency, more critically. As Clare Birchall puts it:

Transparency assumes a secret that can be excavated and brought to light, just as it might suppose a text that can be fully readable (if not, what would be the point of transparency?). Derrida refers to a secret which is unknowable rather than just unknown. It is unknowable not because it is particularly enigmatic, but because knowledge, an event, a person, a poem, text or thing is not ‘there’, not present, in the way that we commonly understand it to be. And so, in any communication, any expression of knowledge, something is always ‘held back’. What is ‘held back’ is in no way held in a reserve, waiting to be discovered. Rather, there is a singular excess that cannot fully show itself: a non-signifying, non-present remainder. For Derrida, the absolute secret resides in the structural limits upon the knowability of the present (of events, meaning, texts and so on). In this sense, there will always be something secret.

In some ways this is straightforward: there is a limit to “knowability.” Full transparency (as access to the full truth, or full knowledge) cannot be achieved. Derrida went on to say that if everything must be made public, if a “right to the secret is not maintained,” we are in a totalitarian space (Derrida, A Taste for the Secret, p. 59). At the same time, however, he asks how democracy could occur if one were totally closed off (private), paying no attention to one’s friends or to others in general; there are obligations and a responsibility to community. (Democracy ought to guarantee both the right to reply and the right to remain silent, but does neither; pp. 26-7.)

Donald Rumsfeld said something similar in 2002 concerning limits to knowledge:

There are known knowns; there are things we know that we know.
There are known unknowns; that is to say, there are things that we now know we don’t know.
But there are also unknown unknowns – there are things we do not know we don’t know.

In this typology of knowledge, there are several questions to consider, including why say “known knowns” and not just “knowns”? Or, having started that reflexive move, why end there (there are known known-knowns, etc.).

But there is also one combination missing: “unknown knowns.” What could this mean? How can you both know and not know something at the same time? The philosopher Slavoj Žižek offers an interesting solution: we must first know something, but then repress or suppress it.

Žižek was writing in 2004 about the Abu Ghraib scandal, which he interprets through the Freudian unconscious, drawing on a quote from Lacan.

However, and perhaps you can see where I’m going with this, an equally plausible interpretation of unknown knowns is that they are knowns that have been hidden or segregated: they exist but are not generally accessible.

In other words, what Rumsfeld missed was precisely the secret.

In her piece Birchall begins to “recuperate” secrecy, as she calls it. Long associated with right-wing governments, she sees instead a valid role for it on the left. To do this she urges us to think about the commons–a secrecy commons. For example, WikiLeaks is interesting and radical not because it brings transparency, but because it uses secrecy to place knowledge into the commons. More specifically, it is non-statist, virtual, distributed, and “largely anonymous” (although this last quality has by now effectively been stripped away, especially with the imprisonment and trial of Bradley Manning as its source).

This is not a bad proposal, although it is perhaps not a complete solution. Indeed it is similar to one I made in my recent (2012) piece in Geopolitics “Outsourcing the State” (Downloads tab). There I argued that there is a process of informational “spin-offs” happening, whereby the government is seeing, indeed happily participating in, an “epistemic shift in sovereignty” (p. 688):

This is by no means to be understood as a central government trying to suppress challenges, or of the state in crisis. Rather, it is the state itself that is outsourcing and spinning off its capabilities in an unprecedented manner, especially in the defence and intelligence sectors. Paradoxically, WikiLeaks is part of this outsourcing, and the insecurities of it playing in this larger game reveal much about how it is supposed to be played – and who can play it and profit from it.

Despite writing the piece 13 months before the Snowden revelations about the extent of contracting in the intelligence community, I cannot claim originality for that insight, which came as a result of thinking through Matt Hannah’s book, Dark Territory, and writing our forthcoming paper on intelligence outsourcing. The phrase “epistemic sovereignty” is Hannah’s, and refers to who has control over knowledge (usually, of course, the state in an informational asymmetry, but here posed as a question or “shift”).

One of the problems here is that we know even less about corporate operations in the DoD and intelligence community than we do about the government. This is not an argument to return everything to the government, or of course that transparency will fix everything. Nevertheless it is an observation opposed to Birchall’s, who sees secrecy as a way to bypass neoliberalism, whereas I think it is in line with it.

A secrecy commons may be a good idea, but how long before it is colonized? Or can it keep outpacing capital? Ironically, what drives capital (technological innovation) may also be required to continually escape it. While Birchall is already a bit out of date on WikiLeaks, her point that secrecy is worth recuperating remains, I think, to be explored.