Reflections on teaching the history of geography

I have a short piece in a forum on teaching the history of geography, organized by Innes Keighren for Progress in Human Geography. There are also contributions by Franklin Ginn, Scott Kirsch, Audrey Kobayashi, Simon Naylor and Jörn Seemann.

Here’s the abstract:

Drawing upon the personal reflections of geographical educators in Brazil, Canada, the UK, and the US, this Forum provides a state-of-the-discipline review of teaching in the history of geography; identifies the practical and pedagogical challenges associated with that teaching; and offers suggestions and provocations as to future innovation. The Forum shows how teaching in the history of geography is valued – as a tool of identity making, as a device for cohort building and professionalization, and as a means of interrogating the disciplinary present – but also how it is challenged by neoliberal educational policies, competing priorities in curriculum design, and sub-disciplinary divisions.

Interview with John Pickles

Matt Wilson and I recently interviewed John Pickles about the history of critical cartography, the Friday Harbor meetings in 1993, and Brian Harley. This has now appeared in Cartographica as part of a special issue marking the 25th anniversary of Harley’s “Deconstructing the Map,” which was published there in 1989.

The whole issue is worth reading with contributions by John Krygier, Denis Wood, Martin Dodge, Matthew Edney, Sarah Elwood and others. It was edited by Reuben Rose-Redwood.

Project Cybersyn and the Origins of Algorithmic Life

One of the left’s commonly accepted stories about neoliberalism is that it got some of its first real-world tests in Pinochet’s Chile in the early 1970s. Following a coup, probably carried out with the active assistance of the CIA, and the violent end of socialist Salvador Allende’s government (Allende took his own life in the presidential palace), General Pinochet “invited” in the so-called “Chicago boys,” a group of neoliberal economists from the University of Chicago led by Milton Friedman.

The story is told most canonically in Naomi Klein’s 2007 book The Shock Doctrine, in which she argues that Friedman and his followers exploited disasters and crises to usher in their practices of neoliberalism, privatization, and the free-market economy. David Harvey’s influential text A Brief History of Neoliberalism makes many of the same arguments.

What these arguments miss, however, is an earlier development during Allende’s government itself, in which he too invited in a foreign expert to help run the economy. Except the expert who was brought in was not an economist but a cybernetician: a British man named Stafford Beer. Beer established a partnership with Allende and his Minister of the Economy, Fernando Flores (who would later co-write, with Terry Winograd, the well-known computing text Understanding Computers and Cognition, a book I tried and failed to read in grad school in the 1980s; I was into AI, Douglas Hofstadter, etc. at the time). Their goal was nothing less than the integration of cybernetics–the science of control and governance–with the running of the Chilean economy: anticipatory rather than reactionary planning, and the collection, transmission, and correlation of information in real time. It was called Project Cybersyn, or Proyecto Synco in Spanish.
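The “anticipatory rather than reactionary” ambition can be caricatured in a few lines of code. This is my own toy sketch, not the project’s actual software (which, as Medina documents, used more sophisticated Harrison-Stevens Bayesian forecasting in its Cyberstride suite): the idea is exception reporting, where a production indicator is only flagged when it drifts outside a forecast band, rather than everything being reported upward all the time.

```python
# Toy exception-based reporting: smooth each day's production figure
# and raise an alert only when a new reading falls outside a tolerance
# band around the forecast. Parameter values are invented.

def make_filter(alpha=0.3, tolerance=0.2):
    """Return a stateful filter: feed it readings, get (forecast, alert)."""
    state = {"level": None}

    def step(reading):
        if state["level"] is None:      # first observation seeds the model
            state["level"] = reading
            return reading, False
        forecast = state["level"]
        alert = abs(reading - forecast) > tolerance * forecast
        # simple exponential smoothing: blend forecast and observation
        state["level"] = alpha * reading + (1 - alpha) * forecast
        return forecast, alert

    return step

if __name__ == "__main__":
    step = make_filter()
    for day, output in enumerate([100, 102, 98, 101, 60, 99]):
        forecast, alert = step(output)
        print(day, output, round(forecast, 1), "ALERT" if alert else "ok")
```

Run on the demo series above, only the sudden drop to 60 trips the alert; ordinary day-to-day variation stays below the threshold, which is the whole point of filtering rather than flooding the ops room with numbers.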

Almost everything about this project is fantastic. What they achieved, and perhaps even more importantly, what they envisioned, was so ambitious as to defy imagination while at the same time making so many odd alliances, parallels and connections (as with Flores and Winograd) as to be almost unbelievable.

For instance, they built a communications network spanning the entire country at a time when the Internet was barely getting going in the USA. Another component was “Cyberfolk,” in which users would be issued a device known as an algedonic meter (from the Greek for pain/pleasure) to let central command know how they were doing. It was literally a people meter:

(Algedonic is an unusual word, first attested in the OED in 1894, but one that readers of Gene Wolfe’s Book of the New Sun will recognize from the city of Nessus, where both pain and pleasure can be had. I read those books around 1981–2, remember the word very well, and am delighted to see it pop up here. I’d love to know whether Wolfe derived this usage from Project Cybersyn; remember that he is an engineer who might have read the trade magazines where the project was described.)

And, as shown at the top of the page, an ops room consisting of seven inward-facing chairs (their design influenced by the famous Saarinen Tulip chair, a version of which made it onto the bridge of Star Trek’s Enterprise, with which the ops room also shares design similarities).


This is the amalgamation of politics and the algorithmic governance of life at the scale of an entire country.

The prime source of information today on Project Cybersyn is Dr. Eden Medina, a professor of Informatics and Computing and a historian of technology at Indiana University. Dr. Medina has written a book about it called Cybernetic Revolutionaries (2011), drawing on ten years of archival research at Liverpool John Moores University and interviews she conducted in the early to mid 2000s with surviving project members in Santiago, Chile. She has also written a very interesting article called “Designing Freedom, Regulating a Nation: Socialist Cybernetics in Allende’s Chile” (2006), available here (pdf).

Here’s a recent talk she gave on “Big Data Lessons from our Cybernetic Past” about Project Cybersyn.

What fascinates me about Project Cybersyn is that it was an early form of algorithmic governance, as Dr. Medina has pointed out (see the talk above). Remember that programmable digital computers had been engineered only some 30 years earlier (I’m thinking of the Bletchley Park machines such as the Colossus and, to a lesser extent, the electromechanical Bombe that attacked Enigma), that Alan Turing’s concept of the universal computer dated only to 1936, and that Chile had very few computers at the time (Medina estimates fewer than 50 in the whole country). It was a highly significant achievement.

At the same time it should be understood as completely in line with modernist philosophy. Perhaps I depart a little from Dr. Medina’s approach here, in that I would say it was not revolutionary in terms of its motivating rationality. I don’t mean this to take away from their vision or what they did with scarce resources and imagination. For instance, their communication network was based on telex terminals originally installed to track satellites, rather than on computers (like the then-nascent ARPANET). In fact, Dr. Medina reveals that the project had only a single computer!

What I mean is that it comports with a modernist notion of knowledge as saving the day. This is, if you like, the Enlightenment perspective. Why, for instance, was Foucault so enamored with Kant, and specifically with Kant’s piece on the Enlightenment, which Foucault described as his “fetish text,” so obsessed with it was he? This is “Was ist Aufklärung?”, collected along with Foucault’s responses in The Politics of Truth. The answer Foucault gave, and which I think is essentially correct, is that Kant marks a new turn by analyzing the Enlightenment as asking the question: who are we, today? This is an epistemological question because it asks what sorts of knowledges we will need in order to see who we are, which is in turn an ontological question. It asks of knowledge what its limits are. Foucault calls this “critique.”

Project Cybersyn is a waystation in the history of this epistemological question. Despite its fascinating technological achievements, and the way it applied them to politics and governance, it is asking that same question of knowledge. Today we talk of Big Data and algorithmic governance to refer to approximately the same thing. I think we can understand Project Cybersyn (and other projects, such as the Harvard Graphics Lab, that exemplified some of the same enquiries and epistemologies, about which more in a later post) in that light.

I’d like to thank Dr. Medina for her work on bringing this important and fascinating project to a wider audience.

Esri introduces smart mapping

My colleague Mark Harrower, who is now at Esri, recently posted a blog story announcing Esri’s entry into what they are calling “smart mapping.”

The term itself is perhaps more interesting than the particulars of the technology Mark is talking about, although these are of course still important to understand. It draws from and wishes to leverage the whole assemblage of “smart” devices such as watches, TVs, cars, Nest thermostats and so on, as well as the rhetoric around smart cities, algorithmic governance, and Big Data.

Just to be clear, smart mapping as a piece of terminology is not new. There’s a company in the UK with that name which says it has been around for 15 years, and another, called SmartMAP and part of a GIS company in Delaware, which says it has been around since 1995.

In Esri’s case, Mark says that the idea is to provide your mapping tools with some capability to assess your data and recommend better ways to represent it:

Unlike ‘dumb’ software defaults that are the same every time, with smart defaults we now offer the right choices at the right time. When we see your data in the map viewer, we analyze it very quickly in a variety of ways so that the choices you see in front of you are driven by the nature of your data, the kind of map you want to create, and the kind of story you want to tell (e.g., I want to show places that are above and below the national average)
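The shape of the idea, as Mark describes it, is analyze the data first, then drive the defaults from what you find. Here is a minimal sketch of that logic; this is my own toy stand-in, not Esri’s actual implementation, and the thresholds, scheme names, and the `national_average` parameter are all invented for illustration:

```python
import statistics

def recommend_scheme(values, national_average=None):
    """Suggest a map classification scheme from simple properties of the
    data. A toy stand-in for 'smart defaults'; thresholds are arbitrary."""
    mean = statistics.mean(values)
    median = statistics.median(values)
    stdev = statistics.pstdev(values)
    if national_average is not None:
        # The user wants an above/below story, so recommend a
        # diverging color scheme centered on the reference value.
        return {"scheme": "diverging", "midpoint": national_average}
    if stdev and abs(mean - median) > 0.5 * stdev:
        # Strongly skewed data: equal-interval classes would bunch most
        # features into one class, so recommend quantiles instead.
        return {"scheme": "quantiles", "classes": 5}
    return {"scheme": "equal-interval", "classes": 5}
```

So `recommend_scheme([1, 2, 3, 100])` suggests quantiles because the outlier skews the mean well away from the median, while passing a `national_average` triggers the diverging above/below recommendation that mirrors Mark’s example.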

I find it interesting, if perhaps inevitable, that companies are appealing to the concept of “smart” mapping. “Making things better with algorithms” could easily be the slogan applied to many companies seeking an edge these days with their “disruptive” (but not too disruptive) innovation.

Perhaps the question is not whether these tools really are smart, but why we think they are, why we like that, and what effects they will have on mapping practices.

Did the government get it right with Petraeus?

Commentators from both within the intelligence community (IC) and critics of the surveillant state have been unusually aligned in expressing shock that General David Petraeus has only been given a hand slap of a plea deal considering what classified secrets he leaked. Writing in the Daily Beast, Justin Miller and Nancy Youssef provide previously unknown details on what Petraeus gave to his mistress and biographer, Paula Broadwell:

While he was commander of coalition forces in Afghanistan, Petraeus “maintained bound, five-by-eight inch notebooks that contained his daily schedule and classified and unclassified notes he took during official meetings, conferences and briefings,” the U.S. Attorney’s Office for the Western District of North Carolina writes in a statement of fact regarding the case.

All eight books “collectively contained classified information regarding the identities of covert officers, war strategy, intelligence capabilities and mechanisms, diplomatic discussions, quotes and deliberative discussions from high-level National Security Council meetings… and discussions with the president of the United States.”

That’s about as definitive a list of precious “sources and methods” as you could wish to see enumerated. It’s not clear how much of this was classified (not all discussions with the President are), but by Petraeus’ own admission it included codeword-level material, or TS//SCI (e.g., TK, SI, etc.), usually described as “above top secret.” The codeword TK, for example, refers to Talent-Keyhole, or spy satellite imagery, which is so secret we’re not even allowed to know the capabilities of the cameras that take the pictures.

Marcy Wheeler, a well known expert in surveillance issues, didn’t hold back:

As a supine Congress sitting inside a scaffolded dome applauded Benjamin Netanyahu calling to reject a peace deal with Iran, DOJ quietly announced it had reached a plea deal with former CIA Director David Petraeus for leaking Top Secret/Secure Compartmented Information materials to his mistress, Paula Broadwell.

Not only did he affirmatively give these materials to non-cleared personnel, but he kept them in an unlocked drawer and a rucksack in his home. Petraeus also lied to the FBI about possessing classified material. Pretty bad opsec. China and the North Koreans could have saved a whole bunch of money just by hiring a couple of guys to break into his house. (Of course this is the guy who shared his Gmail login with Broadwell and left her messages in a draft folder so they supposedly wouldn’t be sent over a network, a dodgy practice familiar to both teenagers and terrorists.)

As numerous people have pointed out, this could be pretty bad for morale in the IC because of the premium placed on IC members to protect secrets. Naval War College professor John Schindler, who has labeled Edward Snowden a traitor, was left to tweet out examples of men who had died to protect classified intel:

(“Norks” is a pejorative slang term for North Korean.)

Just as critically, others have pointed to the unequal treatment meted out to others who either leaked far less sensitive and classified information, or who were also charged with lying to the FBI. These include Barrett Brown, John Kiriakou, and Jeffrey Sterling, who faces up to 20 years in prison for allegedly leaking details of a busted CIA operation to the author and journalist James Risen. (See his book State of War for details of Operation Merlin.)

Given the inequality in these sentences, that is, the lack of justice, we might well join in the widespread condemnation not just of the sentence Petraeus received but of the man himself. Usually reliable IC defenders have been conspicuously silent. (Overt Action, a blog run by former IC personnel, has not mentioned the case except to denounce a February leak that the FBI was recommending his indictment, and this despite running a frequent overview of IC matters called the Week in Intelligence, the latest of which appeared yesterday.) The Republican-controlled Senate and House intelligence committees have not defended Petraeus, despite the fact that he was once touted as a Republican presidential candidate (the American Spectator compared him to Eisenhower). The silence, as they say, is deafening.

Yet there is a case to be made that it is not Petraeus who received the injustice, but rather Sterling, Brown, Kiriakou, and even Risen himself (who was subpoenaed to reveal his source, presumed to be Sterling, before the case against him was dropped). The case depends on two assertions: first, that there is a massive culture of over-classification, and second, that it is very hard to prove that harm in general is directly caused by leaks. (Specific leaks, such as the fact that Kim Philby was a Soviet double agent with access to the Venona project and told Moscow about it, are perhaps easier to assess, though even then you’re often working with hypotheticals; the Soviets might have stopped re-using one-time pads anyway.)

Over-classification was an issue even prior to 9/11, when then-Senator Daniel Patrick Moynihan wrote his classic book Secrecy. (Moynihan was, ironically, one of the prime movers behind the declassification of the Venona project.) Steven Aftergood’s Secrecy News blog at the Federation of American Scientists documents the myriad ways in which over-classification is rampant.

There are of course secrets worth protecting (how to build a nuclear trigger is one that comes to mind). But Petraeus did not give classified material to a hostile or foreign agent; no harm to national security has been cited, even in the official statement of facts (according to the plea deal itself, Broadwell published none of the information); and the documents were not formally classified (they were his own notes). Perhaps there is a case to be made for a proportionate reaction to leaks–especially when they take the form of whistleblowing–rather than automatically reaching for the Espionage Act. Perhaps the government got it right? This case is a kind of test of what harm we think occurs when there is disclosure (unauthorized or not) of classified material. It is not a simple case of all protection being good or bad, or all disclosure being good or bad, but of what the reaction to disclosure should be, and on what grounds. The reaction and punishment here may offer better choices.

I do not write this to defend Petraeus; I think he still got off too lightly. He held the TS//SCI material in a non-secure location outside government premises rather than in his residence’s SCIF as required, even delivering it into the possession of his biographer at one point, reintroducing the burglar scenario. (Compare the recent revelation, or really the more widespread realization since this was already known, that Hillary Clinton used a private email address while Secretary of State, and ponder what vulnerabilities that possibly introduces, not least immunity from FOIA and transparency. That seems a bigger case to me.)

Ironically, defendants can now cite this case (or try to; since it was a plea deal that never went to court, it may provide something less than a true legal precedent), insofar as the facts of their cases parallel it, to argue for reduced sentences. This is something the government may come to regret, although perhaps we may see it as a better approach than blanket secrecy.

Late addition: after I had drafted but just before I published this I read this piece by Eli Lake which makes some of the same points.

Tracking cellphones through battery usage

In a development that Harvard cybersecurity expert Bruce Schneier calls “interesting” but of unknown practicality, researchers have demonstrated that cellphones can be tracked via their battery usage. The basic principle is that you can record, along known routes, how much battery is used as a function of the phone’s distance from cell towers, and later match a new power trace against those recordings.

Here’s a Wired story on it. Key quote:

PowerSpy takes advantage of the fact that a phone’s cellular transmissions use more power to reach a given cell tower the farther it travels from that tower, or when obstacles like buildings or mountains block its signal. That correlation between battery use and variables like environmental conditions and cell tower distance is strong enough that momentary power drains like a phone conversation or the use of another power-hungry app can be filtered out, Michalevsky says.
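The route-matching idea can be sketched in a few lines. This is my own caricature, not the PowerSpy code: record a reference power trace for each known route, then assign a freshly observed trace to the route it correlates with best.

```python
def correlate(a, b):
    """Pearson correlation between two equal-length power traces."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb) if va and vb else 0.0

def guess_route(trace, reference_traces):
    """Return the known route whose reference trace best matches `trace`.
    `reference_traces` maps route name -> recorded power profile."""
    return max(reference_traces,
               key=lambda route: correlate(trace, reference_traces[route]))

if __name__ == "__main__":
    # Invented example profiles: power draw sampled along two routes.
    routes = {"route A": [1.0, 2.0, 3.0, 2.0],
              "route B": [3.0, 1.0, 1.0, 3.0]}
    print(guess_route([1.1, 2.2, 2.9, 2.1], routes))
```

The real attack is far more involved (the traces are noisy, much longer, and not time-aligned, so the paper has to cope with variable speeds and background power drains), but correlation against pre-recorded reference profiles is the core intuition.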

I guess Tobler still rides!

I discuss some of these more unusual forms of geolocational tracking in my new paper “Collect it All” available here (free) or at the journal here.

h/t several people who alerted me to this story: John Krygier (@poponthebeezer) and @agaleszczynski

Cyber security conference at New America today

A big conference on cyber security is occurring all day today at the New America Foundation.

It is entitled “Cybersecurity for a New America: Big Ideas and New Voices,” and you can follow some of the commentary about it (including an interview with DIRNSA Rogers) on Twitter under the hashtag #NewAmCyber.