Author Archives: Jeremy

Why aren’t geographers talking more about robots?


Robbie the Robot generates 480 pints of whiskey overnight

Why aren’t geographers talking more about robots? This question struck me, paradoxically, as I sat on a panel on robots at the last AAG (see del Casino Forthcoming). While this might seem the last place to have such a thought, it was prompted by two things: first, the smiles of slightly startled amusement from people when I told them I was on a robot panel, and second, my co-panelists, who I thought were missing out on some important terrain about robots.

Putting aside the no doubt justifiable bemusement that the AAG had a robot discussion, the other topics discussed that day dwelt on sexbots, love dolls, cyborgs and the more-than-human. These are part of the story, but not the whole of it, as Rosi Braidotti’s recent book on posthumanism documents (also putting aside how/if the more-than-human is different from post- or transhumanism).

For me, the latter are cultural or philosophical issues, and no matter how pertinent and interesting, they leave aside the political-economic, which is what I’m interested in here. Vinny’s piece (which has just been released by PiHG online first) partially takes up this issue. He does this in the context of a report on social geographies, which perhaps means the economic and political are marginal to his piece; it nevertheless remains required reading.

What I mean is quite simply issues around automation, artificial intelligence, and computerization. For me, these point to one thing: algorithmic life. One big part of this is the effect on jobs and wages, and therefore we need to do a better job of integrating tech with geographies of the economy.

Or what I called on the panel “Geographies of neoliberal robots.” Everyone probably has seen a version of this graph:


The productivity-wage gap

A version appears in Harvey’s book on neoliberalism. The point is that since the advent of the neoliberal era (say the 1970s and early 80s), productivity has continued to climb, but the amount returned to workers has stayed about the same, creating a productivity-wage gap, which in turn widens income inequality.

As I said on the panel:

Two explanations are usually offered: “robots ate all the jobs” (people are put out of work by automation), or a deliberate political project by a revanchist capitalist elite (Harvey).

These explanations are not mutually exclusive. What is interesting is that automation and robots may no longer be occurring in only unskilled and repetitive jobs. Research suggests jobs that are more routine and less “cognitive” are the most susceptible to automation. A well-known 2013 study at Oxford Martin School estimated that nearly half (47 percent) of US jobs are at risk of automation. Geographers are not immune:


Source: NPR/Oxford Martin School, Univ. Oxford

Things we can do

    1. Given that we have this listing of job susceptibility, it would be nice to get at least a baseline map of where jobs are at stake. How about a county-by-county map of potential automation? Take all jobs per county and multiply them by the relevant figures in the study. It wouldn’t be perfect but would give us a baseline map.
    2. The PC was Time magazine’s “machine of the year” in 1982. But a one-for-one replacement of a human job with a computer job need not be the most important development in automation or intelligent machines. Rather, production may undergo wholesale reorganization. (Brynjolfsson and McAfee make this point in their recent book The Second Machine Age.) Geographers can contribute to our understanding of this by analyzing which industries are susceptible, and where they are located.
    3. Turning to computerization and automation, I mentioned above that these evidence algorithmic life. What I mean by this is very simple, if you follow Tarleton Gillespie’s definition of the algorithm:

      they are encoded procedures for transforming input data into desired output, based on specified calculations (Gillespie 2014: 167)

      Notice here three useful points: encoding, desire, calculation. An algorithm is that which enables desire to proceed by making (performing) the world as calculative. So it is a capacity-making. Here there would be plenty to look at in terms of uneven geographical outcomes of the work algorithms do in the world, for example on tracking and geosurveillance.

      In fact, Rob Kitchin and his group have just published a useful listing of the ways this occurs. One example likely to be of interest to geographers is automated facial recognition. I really think we need to think “beyond the smartphone” as the only way we are tracked, to include ALPR, gait recognition, wearable devices/Fitbits/smart watches, and Minority Report-style live biometric tracking (face|iris|gait). I document some of these in my piece “Collect it All,” as does Leszczynski in her “Geoprivacy” overview.

    4. Besides being part of algorithmic governance, drones (and I include commercial drones especially here, as they are predicted to far surpass military drones in spending) could be an object of geographical enquiry, or what I call “the drone assemblage.”
    5. Read Vinny’s piece for a more general overview of many aspects of robots and intelligent machines.

      “Where have you been? It’s alright, we know where you’ve been!”–Welcome to the Machine, Pink Floyd
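Point 1 above (the baseline county map) can be sketched in a few lines. This is a minimal illustration of the arithmetic only: the occupation names, automation probabilities, and county job counts below are made-up placeholders, not figures from the Oxford Martin study, and a real version would join the study's probabilities to BLS/Census county employment tables.

```python
# Baseline "jobs at risk" per county: weight each county's jobs-by-occupation
# counts by that occupation's estimated automation probability, then sum.
# All values here are hypothetical placeholders for illustration.

# Automation probability per occupation (hypothetical values).
automation_prob = {
    "telemarketers": 0.99,
    "cartographers": 0.88,
    "geographers": 0.25,
    "registered_nurses": 0.01,
}

# Jobs per county, broken down by occupation (hypothetical counts).
county_jobs = {
    "Fayette": {"telemarketers": 500, "cartographers": 40, "registered_nurses": 3000},
    "Jefferson": {"geographers": 60, "telemarketers": 1200, "registered_nurses": 8000},
}

def jobs_at_risk(jobs_by_occupation, probs):
    """Expected number of automatable jobs: sum of count * probability."""
    return sum(count * probs.get(occupation, 0.0)
               for occupation, count in jobs_by_occupation.items())

risk_by_county = {county: jobs_at_risk(jobs, automation_prob)
                  for county, jobs in county_jobs.items()}

for county, n in sorted(risk_by_county.items()):
    print(f"{county}: {n:.0f} jobs potentially automatable")
```

The resulting per-county totals (or, better, shares of total county employment) are exactly what a choropleth map would then display.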

Where can tell me who I am?

In September I published a few musings on the topic “Where can tell me who I am.” This was preliminary for a talk at this year’s SEDAAG meetings. Here’s a link to the talk as delivered and the slides I used are here.

Where can tell me who I am (pdf)

New Maps, New Mappings

The following are my “Thanksgiving Reflections” or statement about our group here at the University of Kentucky. We call ourselves the New Mappings Collaboratory and I had some time over the Thanksgiving break to sit down and try to think what we’re about. I shared these with colleagues and we discussed them at our “Map Chat” yesterday (our bi-monthly meeting of the larger group). Since it looks like we may want to push these four points forward (with the addition of Matt Zook’s suggestion of new kinds of agency in an era of Big Data and algorithms) I publish them here for comments and reactions.


The New Mappings Collaboratory is about the creation of new maps:

  1. New forms of maps.
  2. New ways and practices of mapping.
  3. New ways of thinking about maps (new concepts).
  4. New educational encounters with mapping.

It is not just maps as such, but the event of the map (mapping). We are interested in creating spaces where these new mappings can occur, propagate, and multiply.

As such, we hold to a view of the encounter with mapping as one of maximal openness in terms of technologies, data and thought. We wish to explore material and discursive “digiplaces” and the intersection of virtual, actual and material places.

New Mappings Collaboratory was founded in 2011 by Jeremy W. Crampton, Matthew W. Wilson and Matthew Zook on an equal basis. Sue Roberts suggested the word “Collaboratory” to designate the open and mutually assistive nature of the effort, which has no formal membership but rather interesting/ed people who participate on a loose ad hoc basis.

Specific projects

  1. Histories of critical mapping and cartography. How does the “new” arise and what work does it do? Is the new always recognized as such at the time, and is it a matter of distinguishing itself from the old? Perhaps it is not so much a history as a genealogy that would give us a history of the present. We are interested in this “minor” critical history of cartographic thinking. Specific projects include the Harvard Graphics Lab, landscape planning and Ian McHarg, and mapping as governmental technology, as some of the “new lines” of flight.
  2. Non-representational mapping. We are interested in the performativity of mapping; what work mappings do in the world. Under this project we are investigating the nomadic life of the algorithm and “algorithmic governance,” affective value in geospatial startups, hacking and encryption geographies, and the fragility and transience of the (data) archive (how data are stored, accessed, and distributed; how quickly the field changes in terms of pedagogy). [Fragility also refers to] mutability of forms: CartoDB, Leaflet, Mapbox.
  3. Socio-technological. We continue to investigate the lives of spatial Big Data, technologies and politics beyond the geotag, that is, mapping as “unfolding social relations.” With the advent of the Smart City, Big Data, everywhere sensors and the Internet of Things, we are confronted by new ways of doing business, governance and possibilities of living. How do geoprivacy and geosurveillance operate in this new condition?



Perec’s Geographies / Perecquian Geographies

The unaccountably overlooked Georges Perec, member of Oulipo and author of Espèces d’espaces (Species of Spaces), a great work of geographical fiction, is the subject of a symposium about his work. The CFP follows:

Interdisciplinary Symposium, University of Sheffield, Friday 6 May, 2016

Georges Perec was one of the most inventive and original geographical writers of the twentieth century. His writing explores cities and streets; homes and apartments; conceptions of space and place; mathematical and textual spaces; imagined, utopian and dystopian spaces; time and the city; landscapes of memory and trauma; consumption and material culture; domestic spaces; everyday life, the everyday, the quotidian; ordinary and ‘infra-ordinary’ places. Perec addressed methods of urban exploration and observation; classification, categorisation and taxonomy; spatial inventories and indexes; and geographical and ethnographic description.

This symposium explores Perec’s Geographies (his own geographical writing) and the wider body of geographical writings and other practices he inspired or speaks to, ranging from novels to travel books, architectural projects and urban expeditions: we call these Perecquian Geographies.

This event will bring together researchers and practitioners from a range of disciplines, including Geography, Architecture, French Studies, other Area and Cultural Studies, and Town Planning, and will also engage practitioners including landscape designers and artists with interests in Perecquian themes.

Readings and Keynotes:
• Karl Whitney, author of Hidden City: Adventures and Explorations in Dublin (Penguin)
• David Matless, Nottingham, author of The Regional Book (Uniform Books)
Speakers Include:
• Amanda Crawley-Jackson, Sheffield
• Charles Forsdick, Liverpool
• Simon Marvin, Sheffield Urban Institute
• Matthew Gandy, Cambridge
• Tim Edensor, Manchester Metropolitan
• Alasdair Pettinger, Glasgow
• Joanne Lee, Sheffield Hallam
• Richard Phillips, Sheffield
• Andrew Leak, UCL
• Peter Jackson, Sheffield

There are also opportunities for additional contributors, either as presenters or discussants. Proposals for contributions are welcome now and until 12 February, 2016, when the schedule will be confirmed. Registration early-bird rate £25 before 28 February, 2016. For expressions of interest, details of how to submit an abstract, register and pay, contact:
Richard Phillips
or Charles Forsdick

Supported by AHRC Translating Cultures and the University of Sheffield, Dept of Geography

How drones use algorithms to govern your life

How do drones use computational methods such as algorithms to govern your life? Here are ten ways.

Many people think (non-military) drones are only used by hobbyists, and then only to fly small GoPro cameras around.

This is mistaken.

Following is a partial listing of other ways drones perform algorithmic calculations on people. All of these are already here. The lesson is not simply that drones can do this; rather, drones are being used in algorithmic governance more generally.

These are examples from my files. Mostly these are non-military/intelligence usages but that distinction is not entirely tenable given the streams of expertise and knowledge between military and non-military drone research.

  1. Drones can assess abnormal or “suspicious” behavior.

    Japanese security company Secom, starting in December, will offer a surveillance service using drones designed to detect and track suspicious vehicles and people. The drones can also take pictures of license plates and intruders’ faces as they enter factory grounds or shops at night….

    Odubiyi said there was urgent need to upgrade and add to the existing 1,000 CCTV cameras in the state to complement the other crime prevention initiatives of the government which include the Security Trust Fund, Street Signage, House Numbering and the provision of three-digit number for emergency calls….

    This scheme [ALPR but could equally well be drones] makes, literally, a state issue out of legal travel to arbitrary places deemed by some — but not by a court, and without due process — to be “related” to crime in general, not to any specific crime.

  2. Drones can monitor the environment using a variety of sensors.

    I watched as the drone’s gas monitoring sensor was checked before the aircraft was launched by catapult for a 20-minute flight around the boundaries of the site….

    A drone can be nearly any size, from as small as an insect to as large as a 757 passenger jet. It can be outfitted with technologies including high-powered cameras, thermal imaging devices, license plate readers, laser radar, and acoustical eavesdropping, see-through imaging, scent detection, and signals interception devices.

  3. Drones can physically and forcibly shepherd you, move you along, or prevent your movements. A variety of levels of force can be used.

    Some in the law enforcement community, but not all, think there may be a time where it may be appropriate to have non-lethal weapons on a drone—such things as tear gas, pepper spray, etc., where a drone will be able to fly into a location where somebody is firing from a concealed position. Or a barricaded person[, where] the drone would be able to drop a canister of pepper spray or tear gas to get a person to come out of hiding.

  4. Algorithms will be used in drone traffic management (UTM, or UAS Traffic Management).

    Engineers at NASA’s Ames Research Center in Moffett Field, California, are developing UTM cloud-based software tools in four segments of progressively more capable levels. They design each “technical capability level” for a different operational environment that requires development of proposed uses, software, procedures and policies to enable safe operation, with Technical Capability Level One focusing on a rural environment. With continued development, the Technical Capability Level One system would enable UAS operators to file flight plans reserving airspace for their operations and provide situational awareness about other operations planned in the area.

  5. Drones are being used by law enforcement and emergency services.

    For first responders, surveillance teams and investigators, high-quality aerial imagery provides the real-time intelligence needed to assess a situation immediately, ensure safety on the ground, and capture detailed evidence and forensics.

  6. Drones are part of Big Data and data analytics.

    Keep that in mind as you examine the secret ISR study, and you’ll see that the Pentagon’s drone program uses data analytics in almost precisely the same way IBM encourages corporations to use it to track customers. The only significant difference comes at the very end of the drone process, when the customer is killed.

  7. Drones–and robots–are being equipped with algorithms that can predict your next move before you even make it.

    The algorithm, by two University of Illinois researchers, opens the door to software that can guess where a person is headed—reaching for a gun, steering a car into armored gate—milliseconds before the act plays out.

  8. Drones can learn to sense-and-avoid.

    One, Bio Inspired Technologies of Boise, Idaho, is tackling the problem with a hard-wired neural network, a type of device that is good at learning things. This can, the firm’s engineers believe, be trained to recognise and avoid aerial obstacles. Alternatively, a conventional, if high-end, computer can be programmed with algorithms predesigned to recognise and evade threats, by understanding how objects visible to a drone’s camera are moving.

  9. The variety of uses for drones is big and ever-expanding.

    These involved areas as diverse as agriculture (farmers use drones to monitor crop growth, insect infestations and areas in need of watering at a fraction of the cost of manned aerial surveys); land-surveying; film-making (some of the spectacular footage in “Avengers: Age of Ultron” was shot from a drone, which could fly lower and thus collect more dramatic pictures than a helicopter); security; and delivering things…Because drones are cheap, geographers who could never afford conventional aerial surveys are able to use them to track erosion, follow changes in rivers’ sources and inspect glaciers. Archaeologists and historians are taking advantage of software that permits drones fitted with ordinary digital cameras to produce accurate 3D models of landscapes or buildings. This lets them map ancient ruins and earthworks. Drones can also go where manned aircraft cannot, including the craters of active volcanoes and the interiors of caves. A drone operated by the Woods Hole Oceanographic Institution, in Massachusetts, has even snatched breath samples from spouting whales for DNA analysis. And drones are, as might be expected, particularly useful for studying birds.

  10. Drones are surveillant. As such they are ideal for all sorts of new mappings. This raises privacy concerns.

    We need to impose rules, limits and regulations on UAVs as well in order to preserve the privacy Americans have always expected and enjoyed.

What we should realize is that if it can be done, it will be done, as long as it is legal (and often that is very much an unknown or grey area).

Our @TheAAG panel on Algorithmic Governance, San Fran

A panel session at the Association of American Geographers Annual Conference, San Francisco, March/April 2016. Organized by Andrea Miller (UC Davis) and Jeremy Crampton (Kentucky).

With Louise Amoore (Durham), Emily Kaufman (Kentucky), Kate Crawford (Microsoft/MIT/NYU), Agnieszka Leszczynski (Auckland), Andrea Miller (UC Davis), Ian Shaw (Glasgow).

“It’s time for government to enter the age of big data. Algorithmic regulation is an idea whose time has come.” Tim O’Reilly.

This panel will address the increasing concern and interest in what we here label “algorithmic governance.” Drawing on Foucault’s governmentality and Deleuzian assemblage theory, as well as the nascent field of critical Big Data studies, we are interested in investigating the manifold ways that algorithms and code/space enable practices of governance that ascribe risk, suspicion and positive value in geographic contexts.

This value often takes the form of money. For instance, Facebook’s average revenue per user (ARPU) in Q2 2015 was $2.76 globally and as much as $9.30 in North America, while, according to Apple, there are over 680,000 apps using location on iOS. However, pecuniary value derived from spatial Big Data must also be understood as inseparable from capacities of risk and suspicion simultaneously generated and distributed through data-driven relationships. More generally, the purpose of these data is two-fold. On the one hand, they allow risk and threats to be managed, and on the other hand, by drawing on these new subjectivities, they increasingly generate new modes of prediction and control. Thus, algorithmic life can be understood as “data + control,” or to use a Foucauldian term, “data + conduct of conduct,” or what we can call “algorithmic governance.”

Following Rob Kitchin’s suggestion that algorithms can be investigated across a range of valences (including examining code, doing ethnographies of coding teams or geolocational app-makers, and exploring algorithms’ socio-technological material assemblages; Kitchin, 2014), we convene this panel to explore some of the following questions in a spatial or geolocational register:

  • How can we best pay attention to the spaces of governance where algorithms operate, and are contested?
  • What are the spatial dimensions of the data-driven subject? How do modes of algorithmic modulation and control impact understandings of categories such as race and gender and delimit the spatial possibilities of what Jasbir Puar has called the body’s “capacity” for emergence, affectivity, and movement (Puar, 2009)?
  • Are algorithms deterministic, or are there spaces of contestation or counter-algorithms?
  • How does algorithmic governance inflect and augment practices of policing and militarization?
  • What are the most productive theoretical tools available for studying algorithmic data, and can we speak across the disciplines?
  • How are visualizations such as maps implicated by or for algorithms?
  • Is there a genealogy of algorithms that can be traced prior to current forms of technology (to a more “proto-GIS” era for example)? How does this tie with other histories of computation?


Kitchin, R. 2014. “Thinking Critically About and Researching Algorithms.” Programmable City Working Paper 5.

O’Reilly, T. 2013. “Open Data and Algorithmic Regulation.” In B. Goldstein and L. Dyson (Eds)., Beyond Transparency. San Francisco: Code for America Press.

Puar, Jasbir. 2009. “Prognosis Time: Towards a Geopolitics of Affect, Debility and Capacity.” Women and Performance, 19.2: 161-172.

Reflections on Philip K. Dick


Amazon have just released their adaptation of Philip K. Dick’s Man in the High Castle. David Gill, at San Francisco State University, was recently interviewed by Salon about the show. He makes some interesting observations that I think bear examination:

The thing it seems to be lacking is the sense of a square peg being pounded into a round hole. In other words, there’s this subtle notion in the novel that the Nazi victory has completely paralyzed the American dream and these people are all struggling to find a new moral compass to guide their lives by. In this, the American Dream has been subverted — so what we’re gonna see in the show is like an American Revolution where they rise against their Nazi oppressors

In other words, the TV version has bottled the basic premise of the book and has instead turned it into a good guys vs. bad guys scenario. He adds:

I’d be okay with that, but it seems they’ve really shifted the focus of the project itself and are really only interested in showing us fascist imagery juxtaposed with American iconography.

This is a pity, as Ridley Scott is attached to the project as an Executive Producer, which doesn’t bode well for the rumored sequel to Blade Runner.

Perhaps this doesn’t matter, as any TV show must differ from the book version for reasons of pacing and drama. Although I do think it’s a mistake to assume TV can’t be subtle: I just watched Nicola Walker in two very good shows, River, with Stellan Skarsgård, and Unforgotten, with Sanjeev Bhaskar. Both were moving and fascinating shows which explored emotional depths with little or no action.

But I happened to ask my GIS class this week if they’d heard of Philip K. Dick. No one had, until I started mentioning movies, some of which they recognized. What this means is that PKD is little read by the current generation, who more likely get their knowledge of him via movies and TV. When the subtleties are ironed out, this isn’t necessarily good.

Take Blade Runner itself. Again Gill makes a good observation when he says:

As far as accurately translating his ideas and dynamics onto the screen, I don’t think anyone has been successful. “Blade Runner” inverts the moral of [his novel] “Do Androids Dream of Electric Sheep?” — instead of being a story about how humans can be like androids, it’s about how androids can be like humans.

In some ways this is the more devastating critique. Dick’s novel, Do Androids Dream of Electric Sheep?, was not about simulation, but about the potential loss of the human and, more generally, of life itself. A key aspect of the book is the loss or lack of actual biological non-human life due to climate change/war, and its replacement by artificial substitutes. So this was about the diminishment of human life, as we are no longer able to have relationships with other species (instead there are artificial turtles, spiders, owls, etc.). Everybody is trying to get off-world as the earth is dying.


Scene from Blade Runner


Beijing, 2013

This is why the other half of the book’s story, which is not in the movie at all, is so critical. This is the idea of salvation, or at least escape, offered by Mercerism, an invented “religion” based literally on empathy for others (you hold onto empathy boxes which merge your affective body with others). When you grasp the handles of the box, you join communally with Wilbur Mercer, who is on a seemingly never-ending trek up a mountain (iirc). Unfortunately, Mercer is exposed as a fraud, an actor by the name of Al Jarry (think ’pataphysics and Alfred Jarry). But this doesn’t matter. It’s not whether Jarry is divine or not, but rather the affective connections he enables or guides. As an android hunter, Deckard is suffering from affective flatness, to such an extent of course that there is a famous indeterminacy about whether he himself is an android. Again, the point is not whether he is or is not. It’s about what people’s/androids’/animals’ affective capacities are–a becoming-animal if you like. (My use of Deleuzian language here is deliberate: PKD, J.G. Ballard and especially Christopher Priest are amazing writers of affect. I’m currently re-reading Priest’s best novel, The Affirmation, and suggest you start with that if you’re unfamiliar with Priest.)

I’ll watch the Amazon series when I get a chance (I saw the pilot last year and visually it’s very impressive). I’d recommend River or Unforgotten first though.