Tag Archives: algorithm

Where can tell me who I am?

In September I published a few musings on the topic “Where can tell me who I am.” These were preliminary to a talk at this year’s SEDAAG meeting. Here’s a link to the talk as delivered, and the slides I used are here.

Where can tell me who I am (pdf)

Our @TheAAG panel on Algorithmic Governance, San Fran

A panel session at the Association of American Geographers Annual Conference, San Francisco, March/April 2016. Organized by Andrea Miller (UC Davis) and Jeremy Crampton (Kentucky).

With Louise Amoore (Durham), Emily Kaufman (Kentucky), Kate Crawford (Microsoft/MIT/NYU), Agnieszka Leszczynski (Auckland), Andrea Miller (UC Davis), Ian Shaw (Glasgow).

“It’s time for government to enter the age of big data. Algorithmic regulation is an idea whose time has come.” Tim O’Reilly.

This panel will address the increasing concern and interest in what we here label “algorithmic governance.” Drawing on Foucault’s governmentality and Deleuzian assemblage theory, as well as the nascent field of critical Big Data studies, we are interested in investigating the manifold ways that algorithms and code/space enable practices of governance that ascribe risk, suspicion, and positive value in geographic contexts.

This value often takes the form of money. For instance, Facebook’s average revenue per user (ARPU) in Q2 2015 was $2.76 globally and as much as $9.30 in North America, while, according to Apple, there are over 680,000 apps using location on iOS. However, pecuniary value derived from spatial Big Data must also be understood as inseparable from capacities of risk and suspicion simultaneously generated and distributed through data-driven relationships. More generally, the purpose of these data is two-fold. On the one hand, they allow risks and threats to be managed; on the other hand, by drawing on these new subjectivities, they increasingly generate new modes of prediction and control. Thus, algorithmic life can be understood as “data + control,” or to use a Foucauldian term, “data + conduct of conduct,” or what we can call “algorithmic governance.”

Following Rob Kitchin’s suggestion that algorithms can be investigated across a range of valences, including examining code, doing ethnographies of coding teams or geolocational app-makers, and exploring algorithms’ socio-technological material assemblages (Kitchin, 2014), we convene this panel to explore some of the following questions in a spatial or geolocational register:

  • How can we best pay attention to the spaces of governance where algorithms operate, and are contested?
  • What are the spatial dimensions of the data-driven subject? How do modes of algorithmic modulation and control impact understandings of categories such as race and gender and delimit the spatial possibilities of what Jasbir Puar has called the body’s “capacity” for emergence, affectivity, and movement (Puar, 2009)?
  • Are algorithms deterministic, or are there spaces of contestation or counter-algorithms?
  • How does algorithmic governance inflect and augment practices of policing and militarization?
  • What are the most productive theoretical tools available for studying algorithmic data, and can we speak across the disciplines?
  • How are visualizations such as maps implicated by or for algorithms?
  • Is there a genealogy of algorithms that can be traced prior to current forms of technology (to a more “proto-GIS” era for example)? How does this tie with other histories of computation?

References

Kitchin, R. 2014. “Thinking Critically About and Researching Algorithms.” Programmable City Working Paper 5. http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2515786

O’Reilly, T. 2013. “Open Data and Algorithmic Regulation.” In B. Goldstein and L. Dyson (Eds.), Beyond Transparency. San Francisco: Code for America Press. http://beyondtransparency.org/chapters/part-5/open-data-and-algorithmic-regulation/

Puar, J. 2009. “Prognosis Time: Towards a Geopolitics of Affect, Debility and Capacity.” Women and Performance 19(2): 161-172.

Where can tell me who I am

If in the past, when lost or in a strange location, we could find ourselves by asking those nearby, directly or indirectly (e.g. through social media apps), “who can tell me where I am?”, the condition now is “where can tell me who I am.”

The algorithms that parse and analyze our data shadows to control our horizon of possibilities now depend increasingly on spatial Big Data. As Dan Bouk points out in his recent history of data, the point here is not just to know, but to track and to surveil, where people go. Additionally, algorithms are calculating machines, machinic assemblages, and what they calculate is value, derived not from you but from your identity as given in the data.

First, we are separated from our data in multiple ways: fractionated, scattered (Williams, 2005). “Separated” because we are in an asymmetrical relationship to our data; we do not have the same access to it that algorithms have. Nor the same rights: we are not “customers” of Facebook but “users.” We do not engage in a point of sale (POS); rather, we are the commodities ourselves.

Second, as with identity theft, what matters is not what you have done but what your data say you have done. (I’ve received calls from credit collection agencies about “my” spending, and had credit cards charged for visits to hotels I’ve never been to.) These data are then reassembled as dividuals, as Deleuze pointed out some time ago.

Problems with the above: (A) it appears to usher back in an originary individual from whom all else (or at least all data) springs. (B) Talk of rights is suspect:

Do not demand of politics that it restore the “rights” of the individual, as philosophy has defined them. The individual is the product of power. What is needed is to “de-individualize” by means of multiplication and displacement, diverse combinations. The group must not be the organic bond uniting hierarchized individuals, but a constant generator of de-individualization.
Foucault, Preface to Anti-Oedipus

But that “originary” individual can be multiple too, and in relations of power. The insight of the dividual is that it is a derivative that gets traded down the line (Amoore). Its value is purely notional and can undergo crisis, whence the possibility of counter-memory. The assemblage/desiring-machine is not mechanical (it doesn’t “work” in that sense). But there are “regimes of veridiction” that constitute it, and, as Foucault points out, the market is one such site (Foucault, 17 Jan 1979). So our task might consist in part of elucidating the market that is the site for the intersection of value, spatial Big Data, and the spatial algorithm.

[First thoughts toward an intro for Spatial Big Data and Everyday Life for Big Data & Society]

Project Cybersyn and the Origins of Algorithmic Life

One of the left’s commonly accepted stories about neoliberalism is that it got some of its first real-world tests in Pinochet’s Chile in the 1970s. Following the 1973 coup, probably carried out with the active assistance of the CIA, and the violent end of socialist Salvador Allende’s government (Allende took his own life in the Presidential Palace), General Pinochet “invited” in the so-called “Chicago Boys,” a group of economists trained at the University of Chicago under Milton Friedman.

The story is most canonically told in Naomi Klein’s 2007 book The Shock Doctrine, in which she argues that disasters and crises were exploited by Friedman and his followers to usher in neoliberalism, privatization, and the free market economy. David Harvey’s influential text, A Brief History of Neoliberalism, makes many of the same arguments.

What these arguments miss, however, is an earlier development during Allende’s government itself, in which he too invited in a foreign expert to help run his economy. Except that the expert who was brought in was not an economist but a cybernetician, a British man named Stafford Beer. Beer established a partnership with Allende and his Minister of the Economy, Fernando Flores (who would later go on to write a well-known computing text with Terry Winograd, Understanding Computers and Cognition, a book I tried and failed to read in grad school in the 1980s, when I was into AI, Douglas Hofstadter, and the like). Their goal was nothing less than the integration of cybernetics, the science of control and governance, with the running of the Chilean economy: anticipatory rather than reactionary planning, and the collection, transmission, and correlation of information in real time. It was called Project Cybersyn, or Proyecto Synco in Spanish.

Almost everything about this project is fantastic. What they achieved, and perhaps even more importantly what they envisioned, was so ambitious as to defy imagination, while at the same time generating so many odd alliances, parallels, and connections (as with Flores and Winograd) as to be almost unbelievable.

For instance, they built a communications network over the entire country at a time when the Internet was barely getting going in the USA. Another component was “Cyberfolk,” in which users would be issued with a device known as an algedonic meter (from the Greek for pain/pleasure) to let the central command know how they were doing. It was literally a people meter.

(Algedonic is an unusual word which is first attested by the OED in 1894, but which readers of Gene Wolfe’s Book of the New Sun will recognize as part of the city of Nessus where both can be had. I read these books around 1981-2, remember the word very well, and am delighted to see it pop up here. I’d love to know if Wolfe derived this usage from Project Cybersyn, remembering that he is an engineer who might have read the trade mags where the project was described.)

And, as shown at the top of the page, an ops room consisting of seven inward-facing chairs (their design influenced by the famous Saarinen Tulip Chair, a version of which made it onto the bridge of Star Trek’s Enterprise, with which the ops room also shares design similarities).

 

This is the amalgamation of politics and the algorithmic governance of life on the scale of the entire country.

The prime source of information today on Project Cybersyn is Dr. Eden Medina, a professor of Informatics and Computing and a historian of technology at Indiana University. Dr. Medina has written a book about it called Cybernetic Revolutionaries (2011), drawing on ten years of archival research at Liverpool John Moores University and interviews she conducted in the early to mid-2000s with surviving project members in Santiago, Chile. She’s also written a very interesting article called “Designing Freedom, Regulating a Nation: Socialist Cybernetics in Allende’s Chile” (2006), available here (pdf).

Here’s a recent talk she gave on “Big Data Lessons from our Cybernetic Past” about Project Cybersyn.

What fascinates me about Project Cybersyn is how it was an early form of algorithmic governance, as Dr. Medina has pointed out (see the talk above). Remembering that programmable digital computers had been engineered only some 30 years earlier (I’m thinking of the Bletchley Park machines such as the Colossus and, to a lesser extent, the Bombe, which decrypted Enigma), that Alan Turing’s concept of the universal computer dated only to 1936, and that computers were scarce in Chile at the time (Medina estimates fewer than 50 in the whole country), it was a highly significant achievement.

At the same time it should be understood as completely in line with modernist philosophy. Perhaps I depart a little from Dr. Medina’s approach here, in that I would say it was not revolutionary in terms of its motivating rationality. I don’t mean this to take away from their vision, or from what they did with scarce resources and imagination. For instance, their communication network was based on telex terminals originally installed to track satellites, rather than on a network of computers (like the then-nascent ARPANET). In fact Dr. Medina reveals that the project had only a single computer!

What I mean is that it comports with a modernist notion of knowledge as saving the day. This is, if you like, the Enlightenment perspective. Why was Foucault, for instance, so enamored of Kant, and specifically of Kant’s piece on the Enlightenment, which Foucault was so taken with that he described it as his (Foucault’s) “fetish text”? This is “Was ist Aufklärung?”, collected along with Foucault’s responses in The Politics of Truth. The answer Foucault gave, and which I think is essentially correct, is that Kant marks a new turn by analyzing the Enlightenment as asking the question: who are we, today? This is an epistemological question because it asks what sorts of knowledges we will need in order to see who we are, which is in turn an ontological question. It asks of knowledge what its limits are. Foucault calls this “critique.”

Project Cybersyn is a waystation along the route of this epistemological question. Despite its fascinating technological achievements, it is asking the same question, though it certainly applied that question of knowledge in very important ways to governance. Today we talk of Big Data and algorithmic governance to refer to approximately the same thing. I think we can understand Project Cybersyn (and other projects, such as the Harvard Graphics Lab, that exemplified some of the same enquiries and epistemologies, about which more in a later post) in that light.

I’d like to thank Dr. Medina for her work on bringing this important and fascinating project to a wider audience.

Esri introduces smart mapping

My colleague Mark Harrower, who is now at Esri, recently posted a blog story announcing Esri’s entry into what they are calling “smart mapping.”

The term itself is perhaps more interesting than the particulars of the technology Mark is talking about, although these are of course still important to understand. It draws from and wishes to leverage the whole assemblage of “smart” devices such as watches, TVs, cars, Nest thermostats and so on, as well as the rhetoric around smart cities, algorithmic governance, and Big Data.

Just to be clear, smart mapping as a piece of terminology is not new. There’s a company in the UK with that name which says it has been around for 15 years, and another called SmartMAP, part of a GIS company in Delaware, which says it has been around since 1995.

In Esri’s case, Mark says that the idea is to provide your mapping tools with some capability to assess your data and recommend better ways to represent it:

Unlike ‘dumb’ software defaults that are the same every time, with smart defaults we now offer the right choices at the right time. When we see your data in the map viewer, we analyze it very quickly in a variety of ways so that the choices you see in front of you are driven by the nature of your data, the kind of map you want to create, and the kind of story you want to tell (e.g., I want to show places that are above and below the national average)
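
To make the quoted idea concrete, here is a minimal sketch, in plain Python rather than anything Esri actually ships, of what a “smart default” heuristic of this kind might look like. The function name and the above/below-a-benchmark rule are my own assumptions for illustration, not Esri’s implementation.

    # Hypothetical sketch of a "smart default" chooser: inspect an attribute and
    # suggest a symbology, centering a diverging ramp on a benchmark value
    # (e.g. a national average) when the data straddle it.
    from statistics import mean

    def suggest_symbology(values, reference=None):
        """Suggest a color ramp based on the shape of the data.

        values: numeric attribute values for the features being mapped.
        reference: optional benchmark (e.g. a national average); if omitted,
                   the mean of the data stands in for it.
        """
        if reference is None:
            reference = mean(values)
        above = sum(v > reference for v in values)
        below = sum(v < reference for v in values)
        # If values fall on both sides of the benchmark, a diverging ramp
        # centered on it tells an "above/below" story; otherwise a simple
        # sequential ramp is the safer default.
        if above and below:
            return {"ramp": "diverging", "center": reference}
        return {"ramp": "sequential"}

    # Example: county unemployment rates against a (made-up) national average of 5.0
    print(suggest_symbology([3.1, 4.8, 6.2, 7.5], reference=5.0))
    # {'ramp': 'diverging', 'center': 5.0}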

I find it interesting, if perhaps inevitable, that companies are appealing to the concept of “smart” mapping. “Making things better with algorithms” could easily be the slogan applied to many companies seeking an edge these days with their “disruptive” (but not too disruptive) innovation.

Perhaps the question is not whether these really are smart, but why we think they are, why we like that, and what effects they will have on mapping practices.