One of the left’s commonly accepted stories about neoliberalism is that it got some of its first real-world tests in Pinochet’s Chile in the mid-1970s. Following a coup, probably carried out with the active assistance of the CIA, and the violent end of socialist Salvador Allende’s government (Allende took his own life in the Presidential Palace), General Pinochet “invited” in the so-called “Chicago Boys,” a group of Chilean economists trained at the University of Chicago under Milton Friedman.
The story is most canonically told in Naomi Klein’s 2007 book The Shock Doctrine, in which she argues that disasters and crises were exploited by Friedman and his followers to usher in neoliberalism, privatization, and the free-market economy. David Harvey’s influential text, A Brief History of Neoliberalism, makes many of the same arguments.
What these arguments miss, however, is an earlier development during Allende’s government itself, in which he too invited in a foreign expert to help run his economy. Except that the expert who was brought in was not an economist but a cybernetician, a British man named Stafford Beer. Beer established a partnership with Allende and his Minister of the Economy, Fernando Flores (who would later go on to write a well-known computing text with Terry Winograd, Understanding Computers and Cognition, a book I tried and failed to read in grad school in the 1980s, when I was into AI, Douglas Hofstadter, etc.). Their goal was nothing less than the integration of cybernetics (the science of control and governance) with the running of the Chilean economy: anticipatory rather than reactive planning, and the collection, transmission and correlation of information in real time. It was called Project Cybersyn, or Proyecto Synco in Spanish.
Almost everything about this project is fantastic. What they achieved, and perhaps even more importantly what they envisioned, was so ambitious as to defy imagination, while involving so many odd alliances, parallels and connections (as with Flores and Winograd) as to be almost unbelievable.
For instance, they built a communications network spanning the entire country at a time when the ARPANET, the Internet’s precursor, was barely getting going in the USA. Another component was “Cyberfolk,” in which users would be issued a device known as an algedonic meter (from the Greek for pain/pleasure) to let the central command know how they were doing. It was literally a people meter:
(Algedonic is an unusual word, first attested by the OED in 1894, which readers of Gene Wolfe’s Book of the New Sun will recognize from the quarter of the city of Nessus where both pain and pleasure can be had. I read these books around 1981–2, remember the word very well, and am delighted to see it pop up here. I’d love to know whether Wolfe derived this usage from Project Cybersyn, remembering that he is an engineer who might have read the trade magazines where the project was described.)
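The Cyberfolk idea is simple enough to sketch in a few lines. The following is my own illustrative toy, not Beer’s actual design (the real meters were analog dials feeding a broadcast display): each viewer sets a dial somewhere between pain and pleasure, and the central display shows the aggregate mood. The function name and the 0-to-1 scale are my assumptions for the sake of the example.

```python
# Toy sketch of a Cyberfolk-style "people meter" (my invention, not
# Beer's hardware): each citizen's algedonic dial reads between
# 0.0 (pure pain) and 1.0 (pure pleasure); the center sees the mean.

from statistics import mean

def aggregate_algedonic(readings):
    """Clip each dial reading into [0.0, 1.0] and return the average."""
    if not readings:
        raise ValueError("no readings to aggregate")
    clipped = [min(1.0, max(0.0, r)) for r in readings]
    return mean(clipped)

# A small "studio audience" of five dials:
dials = [0.2, 0.9, 0.5, 0.7, 0.4]
print(f"aggregate mood: {aggregate_algedonic(dials):.2f}")  # prints 0.54
```

The interesting design point, which the toy preserves, is that the signal is deliberately lossy: the center learns only an aggregate mood, not who turned which dial.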
And, as shown at the top of the page, an ops room consisting of seven inward-facing chairs (their design influenced by the famous Saarinen Tulip Chair, a version of which made it onto the bridge of Star Trek’s Enterprise, with which the ops room also shares design similarities).
This is the amalgamation of politics and the algorithmic governance of life on the scale of an entire country.
The prime source of information today on Project Cybersyn is Dr. Eden Medina, a professor of Informatics and Computing and a historian of technology at Indiana University. Dr. Medina has written a book about it, Cybernetic Revolutionaries (2011), drawing on ten years of archival research at Liverpool John Moores University and interviews she conducted in the early-to-mid 2000s with surviving project members in Santiago, Chile. She has also written a very interesting article, “Designing Freedom, Regulating a Nation: Socialist Cybernetics in Allende’s Chile” (2006), available here (pdf).
Here’s a recent talk she gave on “Big Data Lessons from our Cybernetic Past” about Project Cybersyn.
What fascinates me about Project Cybersyn is how it was an early form of algorithmic governance, as Dr. Medina has pointed out (see the talk above). Remembering that programmable digital computers had been engineered only some 30 years prior (I’m thinking of the Bletchley Park machines such as the Colossus, which attacked the Lorenz cipher, and to a lesser extent the electromechanical Bombe, which attacked Enigma), that Alan Turing’s concept of the universal computer dated only to 1936, and that computers were scarce in Chile at the time (Medina provides an estimate of fewer than 50 in the whole country), it was a highly significant achievement.
At the same time it should be understood as completely in line with modernist philosophy. Perhaps I depart a little from Dr. Medina’s approach here, in that I would say it was not revolutionary in terms of its motivating rationality. I don’t mean this to take away from their vision or what they did with scarce resources and imagination. For instance, their communication network was based on a network of telex machines originally installed to track satellites, rather than a network of computers (like the then-nascent ARPANET). In fact, Dr. Medina reveals that the project had only a single computer!
What I mean is that it comports with a modernist notion of knowledge as saving the day. This is, if you like, the Enlightenment perspective. Why, for instance, was Foucault so enamored with Kant, and specifically with Kant’s piece on the Enlightenment, which Foucault described as his “fetish text”? This is “Was ist Aufklärung?”, collected along with Foucault’s responses in The Politics of Truth. The answer Foucault gave, and which I think is essentially correct, is that Kant marks a new turn by analyzing the Enlightenment as asking the question: who are we, today? This is an epistemological question because it asks what sorts of knowledge we will need in order to see who we are, which is in turn an ontological question. It asks of knowledge what its limits are. Foucault calls this “critique.”
Project Cybersyn is a waystation along the path of this epistemological question. For all its fascinating technological achievements, and for all the important ways it applied this question of knowledge to governance, it is asking that same question. Today we talk of Big Data and algorithmic governance to refer to approximately the same thing. I think we can understand Project Cybersyn in that light, along with other projects, such as the Harvard Graphics Lab, that I think exemplified some of the same enquiries and epistemologies (about which more in a later post).
I’d like to thank Dr. Medina for her work on bringing this important and fascinating project to a wider audience.