Category Archives: Technology

Why aren’t geographers talking more about robots?


Robbie the Robot generates 480 pints of whiskey overnight

Why aren’t geographers talking more about robots? This question struck me, paradoxically, as I sat on a panel on robots at the last AAG (see del Casino Forthcoming). While this might seem the last place to have this thought, it was prompted by two things: first, the smiles of slightly startled amusement from people when I told them I was on a robot panel, and second, my co-panelists, who I thought were missing some important terrain concerning robots.

Putting aside the no doubt justifiable bemusement that the AAG had a robot discussion at all, the other topics discussed that day dwelt on sexbots, love dolls, cyborgs and the more-than-human. These are part of the story, but not the whole of it, as Rosi Braidotti’s recent book on posthumanism documents (also putting aside how, or whether, the more-than-human differs from post- or transhumanism).

For me, the latter are cultural or philosophical issues, and however pertinent and interesting, they leave aside the political-economic, which is what I’m interested in here. Vinny’s piece (just released by PiHG online first) does partially take up this issue. He does so in the context of a report on social geographies, which perhaps means the economic and political are marginal to his piece; it nevertheless remains required reading.

What I mean is quite simply issues around automation, artificial intelligence, and computerization. For me, these point to one thing: algorithmic life. One big part of this is the effect on jobs and wages, and therefore we need to do a better job of integrating tech with geographies of the economy.

Or what I called on the panel “Geographies of neoliberal robots.” Everyone probably has seen a version of this graph:


The productivity-wage gap

A version appears in Harvey’s book on neoliberalism. The point is that since the advent of the neoliberal era (say, the 1970s and early 80s) productivity has continued to climb, but the amount returned to workers has stayed about the same, creating a productivity-wage gap, which in turn widens income inequalities.

As I said on the panel:

Two explanations are usually offered: “robots ate all the jobs” (people are put out of work by automation), or a deliberate political project by a revanchist capitalist elite (Harvey).

These explanations are not mutually exclusive. What is interesting is that automation may no longer be confined to unskilled and repetitive jobs. Research suggests that jobs that are more routine and less “cognitive” are the most susceptible to automation. A well-known 2013 study at the Oxford Martin School estimated that nearly half (47 percent) of US jobs are at risk of automation. Geographers are not immune:


Source: NPR/Oxford Martin School, Univ. Oxford

Things we can do

    1. Given that we have this listing of job susceptibility, it would be nice to get at least a baseline map of where jobs are at stake. How about a county-by-county map of potential automation? Weight each county’s employment, occupation by occupation, by the automation probabilities in the study. It wouldn’t be perfect, but it would give us a baseline map.
    2. The PC was Time magazine’s “machine of the year” in 1982. But a one-for-one replacement of a human job with a computer job need not be the most important development in automation or intelligent machines. Rather, production may undergo wholesale reorganization. (Brynjolfsson and McAfee make this point in their recent book The Second Machine Age.) Geographers can contribute to our understanding of this by analyzing which industries are susceptible, and where they are located.
    3. Turning to computerization and automation, I mentioned above that these evidence algorithmic life. What I mean by this is very simple, if you follow Tarleton Gillespie’s definition of the algorithm:

      they are encoded procedures for transforming input data into desired output, based on specified calculations (Gillespie 2014: 167)

      Notice here three useful points: encoding, desire, calculation. An algorithm is that which enables desire to proceed by making (performing) the world as calculative. So it is a capacity-making. Here there would be plenty to look at in terms of uneven geographical outcomes of the work algorithms do in the world, for example on tracking and geosurveillance.

      In fact, Rob Kitchin and his group have just published a useful listing of the ways this occurs. One example likely to be of interest to geographers is automated facial recognition. We need to think “beyond the smartphone” as the sole means by which we are tracked, to include ALPR, gait recognition, wearable devices (Fitbits, smart watches), and Minority Report-style live biometric tracking (face, iris, gait). I document some of these in my piece “Collect it All,” as does Leszczynski in her “Geoprivacy” overview.

    4. Besides being part of algorithmic governance, drones (and I include commercial drones especially here, as they are predicted to far surpass military drones in spending) could be an object of geographical enquiry, or what I call “the drone assemblage.”
    5. Read Vinny’s piece for a more general overview of many aspects of robots and intelligent machines.

      “Where have you been? It’s alright, we know where you’ve been!”–Welcome to the Machine, Pink Floyd
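The baseline county map suggested in point 1 above could be sketched roughly as follows. This is a minimal illustration, not a real analysis: the occupations, probabilities, and county employment counts below are invented placeholders, where an actual version would draw on official county employment data and the published per-occupation automation probabilities.

```python
# Hypothetical automation probabilities by occupation (0-1); invented
# figures standing in for the study's estimates.
automation_prob = {
    "telemarketers": 0.99,
    "cartographers": 0.88,
    "geographers": 0.25,
    "recreational_therapists": 0.003,
}

# Hypothetical employment counts per county, by occupation.
county_employment = {
    "Fayette": {"telemarketers": 1200, "cartographers": 300, "geographers": 150},
    "Jefferson": {"telemarketers": 3000, "recreational_therapists": 400},
}

def jobs_at_risk(employment, probs):
    """Expected number of automatable jobs in one county: each
    occupation's headcount weighted by its automation probability."""
    return sum(count * probs.get(occ, 0.0) for occ, count in employment.items())

for county, employment in county_employment.items():
    total = sum(employment.values())
    at_risk = jobs_at_risk(employment, automation_prob)
    print(f"{county}: {at_risk:.0f} of {total} jobs at risk ({at_risk / total:.0%})")
```

The per-county figures could then be joined to county boundary files and choropleth-mapped, which is exactly the baseline map envisaged: not perfect, but a starting point.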

Crypto-geographies and the Internet of Things

Secret codes have long fascinated people. According to Secret History, a new history of cryptology by Craig Bauer, who was Scholar-in-Residence at the NSA Center for Cryptologic History in 2011-12, cryptography predates the Greeks. Many early ciphers were relatively simple by today’s standards, involving either transposition or substitution (respectively, systems where the letters are moved but not replaced, and systems where the letters are replaced, e.g., A by Z).
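The two classical families just mentioned are easy to illustrate. The sketch below (illustrative only, with arbitrary parameters) shows a Caesar-style substitution, where every letter is replaced but none are moved, and a simple columnar transposition, where every letter is moved but none are replaced.

```python
import string

def substitute(plaintext, shift=3):
    """Substitution: replace each letter with the one `shift` places
    along the alphabet; positions are untouched."""
    table = str.maketrans(
        string.ascii_uppercase,
        string.ascii_uppercase[shift:] + string.ascii_uppercase[:shift],
    )
    return plaintext.upper().translate(table)

def transpose(plaintext, cols=4):
    """Transposition: write the text in rows of `cols` letters, then
    read it off column by column; letters move but are never replaced."""
    text = plaintext.upper().replace(" ", "")
    rows = [text[i:i + cols] for i in range(0, len(text), cols)]
    return "".join(
        "".join(row[c] for row in rows if c < len(row)) for c in range(cols)
    )

print(substitute("ATTACK AT DAWN"))  # DWWDFN DW GDZQ
print(transpose("ATTACK AT DAWN"))   # same letters, scrambled order
```

Both are trivially breakable today (substitution by frequency analysis, transposition by anagramming), which is part of why modern cryptology moved so far beyond them.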

The now fairly well-known Enigma machine, pictured above, was a German cipher system used by the Nazi regime during WWII; it was broken by British codebreakers at Bletchley Park (and has been the subject of many books and a couple of movies). Less well known (but undeservedly so) are the decryptions by the NSA and its predecessor (the US Army Signals Intelligence Service, located at Arlington Hall, a former girls’ school in Virginia) of the so-called Venona traffic. Venona refers to the project to decrypt Soviet diplomatic communications with its agents in the USA and elsewhere. These encrypted messages often referred to American spies working for the Soviets during the war by codename. With the help of FBI investigations, the US government was able to identify many of these people on the basis of the partial decryptions. According to the NSA and most (but not all) historians, they included Julius and Ethel Rosenberg, Klaus Fuchs, and several serving OSS personnel.

The Soviets were tipped off to the fact that the US was decrypting their messages (probably by Kim Philby, the British spy posted to the US for a time), and stopped reusing their one-time encryption pads, the duplication that had made the break possible. Nevertheless the project to decrypt the messages continued until the early 1980s, eventually yielding about 2,900 partially decrypted messages. They remained a closely guarded secret long after their operational worth had dwindled, and it was only with the publication in 1987 of Spycatcher, by Peter Wright, a former British intelligence officer, that the project was referred to in public by its codename. (Publication of Spycatcher was embargoed by Margaret Thatcher’s government in the UK, but Wright succeeded in publishing it in Australia anyway.)
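Why pad reuse mattered so much can be shown in a few lines. A one-time pad is just a random key XORed with the plaintext, and used once it is provably unbreakable; but XOR the ciphertexts of two messages encrypted with the same pad and the pad cancels out, leaving the XOR of the two plaintexts, which is the kind of toehold the Venona cryptanalysts exploited. A minimal sketch (the messages are invented):

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

p1 = b"MEETING AT NOON"
p2 = b"FUNDS SENT WEST"
pad = secrets.token_bytes(len(p1))  # truly random key, meant to be used once

c1 = xor(p1, pad)
c2 = xor(p2, pad)  # the same pad reused: the fatal mistake

# Used once, the ciphertext reveals nothing. Reused, the key material
# cancels: c1 XOR c2 == p1 XOR p2, with no pad left in the result.
assert xor(c1, c2) == xor(p1, p2)
```

From `p1 XOR p2`, guessing a likely word in one message immediately exposes the corresponding fragment of the other, which is how partial decryptions accumulate.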

Some terms: “cryptography” is the science (and art) of creating ciphers; “cryptanalysis” is the effort to decipher them without the key; “cryptology” covers both, including assessing the security of a cipher, comparing ciphers, and so on. The words derive from the Greek kryptos (κρυπτός), meaning hidden or secret.

Is there such a thing as cryptologic geographies? If not, could there be, and of what would it consist? In other words, are there (non-trivial) geographies of encryption? Here are some ideas.

One of my earliest ideas here was a geography of https, the secure version of web browsing (now coming into vogue but still highly variable in uptake). The New York Times recently laid down a challenge to make https the default by the end of 2015 if other media companies would do the same. This is non-trivial for two reasons. First, unencrypted traffic exposes weaknesses in the internet, and those weaknesses can be exploited. Second, emails and other communications sent over the internet in unencrypted form are easier for governments to intercept and monitor.

And this is not just to do with messages you write, but also other parts of the personal datastream. For example, your location. What if you could record, but encrypt your geolocation to take advantage of services offered by apps (eg Google Maps) in such a way that they could not be intercepted, decrypted and exploited by third parties (including the government)? Would this mean that the web and internet would “go dark” as officials warn? And would criminals and terrorists be afforded protection in those dark spaces? That was certainly the message of the Attorney General and the FBI Director a few days ago in response to plans by Apple and Google to implement better encryption. AG Holder:

said quick access to phone data can help law enforcement officers find and protect victims, such as those targeted by kidnappers and sexual predators.

Justice Department officials said Holder is merely asking for cooperation from the companies at this time.

And how universal would this advantage to users, potential criminals and law enforcement be? And would those places where one of these had an advantage necessarily overlap with the others? That is, what would be the differential access to encryption from place to place or group to group–a digital divide of encryption?

Is there a political economy of encryption? Who are the companies and individuals working on encryption in the commercial sector? To what extent is there movement between the private and public sectors of both cryptology expertise and personnel? Further, to what extent is there better cryptography in the government and intelligence community than there is in the commercial sector? What are the implications of allowing backdoors to encryption algorithms that can “only” be broken by the government but not by third parties? (I’m thinking here of the well-known proposal in the 1990s for the “Clipper Chip” which allowed just such a backdoor for the NSA but was met with such opposition that it was not implemented.) Is such a backdoor safe from third party hacking, and if so, for how long? (And what is an acceptable definition of “safe” here?). A geographical analysis of these questions would imply some access to where and who has installed the systems in question, which might be provided by basic research efforts such as those carried out at the Oxford Internet Institute by Mark Graham and his colleagues.

Do other computer systems have vulnerabilities? That is, ones without designed-in backdoors? If so, where are they? When it comes to exploits and vulnerabilities, what are the implications of announcing them vs. hoarding them (eg, so-called zero-day exploits)? Is there differential access to knowledge about exploits and vulnerabilities? Where? Again, who makes money off this? What is the crypto- value-chain?

Speaking of hacking: there is a huge array of secret attempts (and thus crypto- if not cryptologic) to break into, disrupt, or exploit systems (and an equally expansive range of countermeasures). The Department of Defense has estimated there may be up to 10 million hacking attacks per day. Most of these are probably automated scans, according to Adam Segal, a cybersecurity expert at the Council on Foreign Relations.

What systems are vulnerable to these exploits, and which exploits are being carried out? Here we could examine everything from mundane events such as DDoS attacks, in which antagonists attempt to bring down a web server to deny its proper function, to more exotic events such as the US/Israeli Stuxnet virus, meant to disrupt the Iranian nuclear program (but which had effects well beyond Iran once it was in the wild). (For more on this virus/worm, see the Stuxnet Dossier [pdf] compiled by Symantec.)

We often hear in the news that certain countries (Russia, China) are more responsible for intrusions and exploits than others, but I’m not aware of any detailed work on this sort of cryptogeography. The recent JP Morgan breach affected more than 83 million US households (who? why?), according to the NYT, and actually extended to another 9 banks not previously reported. The NYT also said the attack was carried out by hackers with “at least loose connections with officials of the Russian government.” But that is a very imprecise and sketchy account. Just recently, a new poll showed bipartisan low levels of confidence among Americans in the “government’s ability to protect their personal safety and economic security.” Here government is arguably failing at its job of providing security. Ferguson and domestic homicides were mentioned specifically in the AP story. Do people feel threatened by the JP Morgan hack, the Target breach and others?

There is surely a whole economy of knock-on effects that results from this; so again, we can speculate about a political economy of cryptogeographies.

What would a better map of hacking attempts look like? Security companies and telcos track these data, as for example in this map created by Norse which describes itself as “a global leader in live attack intelligence.” Who is this company? How do they earn their money? More importantly, what is the nature of this market sector more generally?

(Click for live version.)

The above map, however, is to a large extent a misrepresentation, because it shows only attacks on Norse’s honeypots, not the entirety of the internet, or even of a particular region or network.

A similar visualization, again covering the globe by country, is offered by Kaspersky Labs.

(Click for live version)

These are not per se all that analytically valuable, although they are visually striking (if somewhat derivative).

What do these attacks do, and to whom do they do it? It would be interesting to do a geopolitical analysis of the Stuxnet worm here, which has received a fair amount of coverage. Stuxnet would make an interesting case study, although it remains to be seen how representative it is (being created by state actors against the nuclear capabilities of another state). As stated above, most attacks are undirected and opportunistic. A Congressional Research Service (CRS) report on Stuxnet examined the national security implications of the attack, and of course there is a long history of the study of cyberattacks and cyberwarfare going back several decades. But I’m not aware that geographers have contributed to this literature in a geopolitical sense.

For some, these concerns are especially paramount in the context of smart cities, big data and automated (“smart”) controls, including the so-called smart grid and the Internet of Things (IoT). Take utilities and smart meters, for instance. There are at minimum two concerns: first, that hackers could access smart controls and take command of critical infrastructure, and second, that data held in smart meters may be legally accessible to government under surveillance laws. Another CRS report, in 2012, warned that current legislation “would appear to permit law enforcement to access smart meter data for investigative purposes under procedures provided in the SCA, ECPA, and the Foreign Intelligence Surveillance Act (FISA)”. Although we hear a lot about surveillance of phone and internet communications, there is as yet much less on surveillance of other big data sources. Luckily I have a paper coming out on that topic, but needless to say much more needs to be done.

Cryptologic geographies would appear to be a fertile field for investigation. Broadly conceived to include geopolitical implications, big data, regulation and policy, governance, security, the Internet of Things, cybergeographies, and justice, the field calls for work that both clarifies our understanding and intervenes in policy and political debate. Certainly other scholars are already doing so (e.g., the Internet Governance Project paper on whether cyberwarfare is a new Cold War, pdf).

The mass of connected computer systems and devices known as the Internet of Things will surely only intensify issues of security, encryption and governance. The crypto-geographies of these are highly important to sort through. This post is an attempt to highlight what issues are at stake and to provide some initial ideas.

CFP: Spatial Big Data & Everyday Life (AAG 2015)

Call for Papers: Spatial Big Data & Everyday Life
American Association of Geographers Annual Meeting
21-25 April 2015

Agnieszka Leszczynski, University of Birmingham
Jeremy Crampton, University of Kentucky
“What really matters about big data is what it does” (Executive Office of the President, 2014: 3).

Many disciplines, including the economic and social sciences and (digital) humanities, have taken up Big Data as an object and/or subject of research (see Kitchin 2014). As a significant proportion of Big Data productions are spatial in nature, they are of immediate interest to geographers (see Graham and Shelton 2013). However, engagements of Big Data in geography have to date been largely speculative and agenda-setting in scope. The recently released White House Big Data report encourages movement past deliberations over how to define the phenomenon towards identifying its material significance as Big Data are enrolled and deployed across myriad contexts – for example, how content analytics may open new possibilities for data-based discrimination. We convene this session to interrogate and unpack how Big Data figure in the spaces and practices of everyday life. In so doing, we are questioning not only what Big Data ‘do,’ but also how it is they realize particular kinds of effects and potentialities, and how the lived reality of Big Data is experienced (Crawford 2014).

We invite papers along methodological, empirical, and theoretical interventions that trace, reconceptualize, or address the everyday spatial materialities of Big Data. Specifically we are interested in how Big Data emerge within particular intersections of the surveillance, military, and industrial complexes; prefigure and produce particular kinds of spaces and subjects/subjectivities; are bound up in the regulation of both space and spatial practices (e.g., urban mobilities); underwrite intensifications of surveillance and engender new surveillance regimes; structure life opportunities as well as access to those opportunities; and/or change the conditions of/for embodiment. We intend for the range of topics and perspectives covered to be open. Other possible topics include:

• spatial Big Data & affective life
• embodied Big Data; wearable tech; quantified self
• algorithmic geographies, algorithmic subjects
• new ontologies & epistemologies of the subject
• spatial Big Data as surveillance
• Big Data and social (in)equality
• “ambient government” & spatial regulation
• spatial Big Data and urbanisms (mobilities; smart cities)
• political/knowledge economies of (spatial) Big Data

We welcome abstracts of no more than 250 words, to be submitted to Agnieszka Leszczynski and Jeremy Crampton by August 29th, 2014.

Crawford K (2014) The Anxieties of Big Data. The New Inquiry.

Executive Office of the President (2014) Big Data: Seizing Opportunities, Preserving Values. The White House.

Graham M and Shelton T (2013) Guest editors, Dialogues in Human Geography 3 (Geography and the future of big data, big data and the future of geography).

Kitchin R (2014) Big Data, new epistemologies and paradigm shifts. Big Data and Society (1): In Press. DOI: 10.1177/2053951714528481.


Contractor receives $400K federal funds for automatic license plate reading

According to reporting by Bloomberg News, the IRS, the Forest Service and the U.S. Air Force’s Air Combat Command have awarded a contractor over $400,000 in contracts for its automated license plate recognition (ALPR) system since 2009.

It’s not clear whether the contracts to Vigilant Solutions are ongoing, given that Homeland Security dropped similar plans in February of this year following widespread opposition from civil liberties groups.

“Especially with the IRS, I don’t know why these agencies are getting access to this kind of information,” said Jennifer Lynch, a senior staff attorney with the Electronic Frontier Foundation, a San Francisco-based privacy-rights group. “These systems treat every single person in an area as if they’re under investigation for a crime — that is not the way our criminal justice system was set up or the way things work in a democratic society.”

Other countries (including the UK) have long had such systems in place.

If you go to the Vigilant website, they have a long, complaining blog post about the lies and distortions of civil liberties groups:

License plate readers are under siege nationwide, thanks to a well-funded, well-coordinated campaign launched by civil liberties groups seeking to take advantage of the growing national debate over surveillance. 

Unfortunately, the campaign led by the American Civil Liberties Union (ACLU) has deliberately clouded and even omitted those facts.

According to this article, Vigilant actually successfully used the First Amendment to overturn an anti license-plate recognition law in Utah:

Vigilant Solutions and DRN [Digital Recognition Network] sued the state of Utah on constitutional grounds, arguing that the law infringed on the First Amendment right to take photographs of public images in public places, a right that everyone in Utah shares.

The law was overturned, but Vigilant complains that state agencies were then barred from using any of the data collected, impacting their profits. They also complain about data retention limits.

What’s also interesting about companies such as this is that they illustrate the argument for understanding policing and military together (see this blog post by Derek Gregory for example).

Technology is political because politics includes the technological

I’ve been thinking some more on Stuart’s apothegm that I reblogged yesterday about the relationship between politics and the technical. Here it is again from his post:

One of the previous presenters had made the claim that there was nothing political about some of the techniques. While I made the comment that we could say that there is always a politics to the technical, I was most interested in turning his claim around, rather than disagreeing with it: suggesting that the political is always technical. I’ve made this claim before in relation to territory as a political technology, as dependent on all sorts of techniques for measuring land and controlling terrain.

What this does, for me, rather than dogmatically opposing people who say that the technical is not political (a position I’ve encountered frequently in reading and writing about mapping, for instance), is open up possibilities for investigation. How is the political technical? What does that mean? What technologies are involved? Especially exciting is that it invites onto the field of inquiry (I can imagine) all sorts of cartographies and mappings. It has the secondary purpose of enrolling the first speaker in the search as well, rather than disallowing their position or telling them they’re wrong; hence of actually persuading people, assuming you do a good job of demonstrating how the political is technical.

Long-time readers of Stuart’s work are well familiar with this move of his. In Mapping the Present (2001), the first major book of his I read, he makes a similar argumentative move, claiming that “space is political because the political is spatial” (e.g., p. 6). This might seem at first to be a tautology, but Stuart is not equating the two per se. What he’s getting at is that in order to understand how mapping (say) is political, you have to understand how the political enrolls technologies such as mapping. That’s your opening into the circle.

Another one from the same book is the need to both “historicize space and spatialize history” (p. 3). In this case there’s more of a productive tension or dialectic perhaps.

One of my favorites, that again is disarmingly provocative, is when Stuart does a reversal of the usual understanding of territory and territoriality. Where typically we have seen people starting with notions of territoriality and saying that this produces territory, Stuart reverses this (Elden, 2010). Later he says “In other words, while particular strategies or practices produce territory, there is a need to understand territory to grasp what territoriality, as a condition of territory, is concerned with” (2010, p. 13).

This allows him to inquire what is territory historically, or slightly more precisely to provide a genealogical account, defined as “a historical interrogation of the conditions of possibility of things being as they are” or a history of the present (Elden, 2010, p. 2).

(Vikki Bell recently gave a similar definition of genealogy “the idea that when we’re studying things historically, we’re doing so in order to study the values that we hold today. So genealogy submits our present truths to historical scrutiny and locates them at the level of practices, asking what’s happened.” From a BBC interview on the program “Thinking Allowed.”)

All of these are aspects of his claim that “territory is a political technology” explored in most depth in his most recent book The birth of territory (2013).

Elden, S. 2010. “Land, Terrain, Territory.” Progress in Human Geography. DOI: 10.1177/0309132510362603.

(12/18/13 Updated to correct citation.)

Bolivia, Snowden, and the Politics of Verticality

The extraordinary events of the last day–blow-by-blow live blog here from the Guardian–have certainly raised plenty of legal and diplomatic issues. What is the legality of diverting a head of state’s official plane, or even refusing airspace, despite the plane reporting being low on fuel? Did the US pressure European countries, especially France, Portugal, Spain and Italy to refuse landing rights on the suspicion that Edward Snowden was on board the Bolivian president’s flight?

Bolivia has labeled this an “act of aggression” and if the head of state’s plane counts as sovereign territory–in the way an embassy does–then they may well be justified in seeking some satisfaction (RT is reporting they will complain to the UN).

In not unrelated news, as the Guardian puts it, Ecuador will today announce who they think is behind the “bug” they found in their embassy in London last month. This is the embassy where Julian Assange has been granted political asylum for fears the US will extradite him to face charges of publishing leaks.

As several people have pointed out, this refusal of overflying airspace is in marked contrast to the extraordinary rendition permissions.

But I think this presents a great example of what several people, including Pete Adey and Stuart Elden, are calling the “politics of verticality,” a term attributed to Eyal Weizman in 2002. See this paper by Adey, Mark Whitehead and Alison J. Williams in Theory, Culture & Society for example. They ask specifically what is the nature of an “air target” (on the ground, but after last night’s events presumably also a target in the air); what cultural practices make up the air target; and finally what are the affective rationalities involved?

If their paper is more about targeting (from the air), last night’s events prompt us to reverse that and also enquire about aerial targets and vertical geopolitics.


Trevor Paglen in the New Yorker

Nice profile in the New Yorker on Trevor Paglen, the artist (and geographer), about his work photographing spy satellites and military bases.

2012 Secrecy Report

The annual secrecy report for 2012 has been published. Among its highlights:

Federal Circuit Court Whistleblower Decisions: 3-226 Against Whistleblowers
Classified Information
• National/Military Intelligence Budgets Disclosed
• Security-Cleared Population Reaches New Reported High
• Original Classification Decisions Fall by 44%; Lowest Since 1996
• $215 Spent Keeping Secrets for Every Dollar Spent on Declassification
• National Declassification Center—Progress Made, But Goal Not Reached
• Success of Mandatory Declassification Leads to 8% Growth in Requests, Continued Rise in Backlogs
• Classification Challenges Plummet by 90%
• State Secrets Privilege Policy: Impact Unclear, IG Referrals Unknown
Invention Secrecy Orders in Effect Rise by 2%
Use of National Security Letters Continues to Increase
Foreign Intelligence Surveillance Court (FISC) Approvals Rise 11%

The Report is produced by more than 80 different groups which advocate for more open government, and has a steering committee comprising such luminaries as Steven Aftergood of the Federation of American Scientists (FAS) and Tom Blanton of the National Security Archive.

The Guardian’s Activate Summit, NYC: Changing the world through technology

The Guardian newspaper is hosting a one-day summit in New York City to explore how, with technology and the internet, “we can make the world a better place.” It’s called Activate.

No doubt this raises a number of questions (e.g., is technology making the world a better place? is it making it a better place in only some ways and for some people, foreclosing other possibilities? and have we frankly gone down the wrong path with some technologies?), but putting those aside as the fears of a typical academic wet blanket, it’s worth noting the topics included.

I was especially interested to see Evgeny Morozov is listed as speaking on geopolitics. He’s currently a visiting scholar at Stanford and author of The Net Delusion: The Dark Side of Internet Freedom. Morozov was prominent for his blogging about the Tahrir Square protests in Egypt and has written more widely on his doubts about technology effecting social change.

But I wonder why there aren’t more social scientists on the agenda? Is it that we aren’t thought of as working for social change, only for understanding and explaining it? That we’re not activists?