Author Archives: Jeremy

Surveillance costs–new study

Shortly after the Edward Snowden revelations began in June 2013, I wrote a commentary for the Society and Space open site on the costs of security.

One of the issues I addressed had to do with the economic and other costs of surveillance:

What does the US actually pay? One attempt at an answer to this surprisingly difficult question was recently provided by the National Priorities Project (NPP). Their estimate was that the US national security budget was $1.2 trillion a year.

A new report by the New America Foundation has further explored the costs of surveillance in terms of lost business opportunities to US companies, US foreign policy and cybersecurity:

  • Direct Economic Costs to U.S. Businesses: American companies have reported declining sales overseas and lost business opportunities, especially as foreign companies turn claims of products that can protect users from NSA spying into a competitive advantage. The cloud computing industry is particularly vulnerable and could lose billions of dollars in the next three to five years as a result of NSA surveillance.
  • Potential Costs to U.S. Businesses and to the Openness of the Internet from the Rise of Data Localization and Data Protection Proposals: New proposals from foreign governments looking to implement data localization requirements or much stronger data protection laws could compound economic losses in the long term. These proposals could also force changes to the architecture of the global network itself, threatening free expression and privacy if they are implemented.
  • Costs to U.S. Foreign Policy: Loss of credibility for the U.S. Internet Freedom agenda, as well as damage to broader bilateral and multilateral relations, threaten U.S. foreign policy interests. Revelations about the extent of NSA surveillance have already colored a number of critical interactions with nations such as Germany and Brazil in the past year.
  • Costs to Cybersecurity: The NSA has done serious damage to Internet security through its weakening of key encryption standards, insertion of surveillance backdoors into widely-used hardware and software products, stockpiling rather than responsibly disclosing information about software security vulnerabilities, and a variety of offensive hacking operations undermining the overall security of the global Internet.

These may end up being upper bounds of the costs (and consequences), but they are very helpful in identifying what is at stake here. I haven’t read the whole report yet, but the executive summary is here (pdf).


The OSS Map Division–new piece in OSS Journal

Crampton OSS Journal

The Journal of the OSS Society has just published a short piece I wrote on the OSS Map Division. It appears in this issue (2014).

This is one of my most unusual publications. I was asked to do this quite a while ago. I think it has turned out nicely, and they used a most intriguing map from the AGS Library archives which I discovered in 2012, thanks to support from the David Woodward Memorial Fellowship in the History of Cartography. Thanks to Charles Pinck, OSS Society President, for his interest.

Updated 8/5/14 to link to pdf as published.

What is neoliberalism?

robinjames (@doctaj) on neoliberalism:

I want to hone in on one tiny aspect of neoliberalism’s epistemology. As Foucault explains in Birth of Biopolitics, “the essential epistemological transformation of these neoliberal analyses is their claim to change what constituted in fact the object, or domain of objects, the general field of reference of economic analysis” (222). This “field of reference” is whatever phenomena we observe to measure and model “the market.” Instead of analyzing the means of production, making them the object of economic analysis, neoliberalism analyzes the choices capitalists make: “it adopts the task of analyzing a form of human behavior and the internal rationality of this human behavior” (223; emphasis mine). (The important missing assumption here is that for neoliberals, we’re all capitalists, entrepreneurs of ourself, owners of the human capital that resides in our bodies, our social status, etc.) [3] Economic analysis, neoliberalism’s epistemontological foundation, is the attribution of a logos, a logic, a rationality to “free choice.”

I particularly like the way she enrolls Big Data and the algorithmic in her understanding of neoliberalism:

Just as a market can be modeled mathematically, according to various statistical and computational methods, everyone’s behavior can be modeled according to its “internal rationality.” This presumes, of course, that all (freely chosen) behavior, even the most superficially irrational behavior, has a deeper, inner logic. According to neoliberal epistemontology, all genuinely free human behavior “reacts to reality in a non-random way” and “responds systematically to modifications in the variables of the environment” (Foucault, summarizing Becker, 269; emphasis mine).

This approach ties in with what others have been saying for a number of years now on the algorithmic (I’m thinking of the work of Louise Amoore on data derivatives, among others) and the calculative (e.g., Stuart Elden’s readings of Foucault and Heidegger). I’ve just completed a paper on Big Data and the intelligence community which tries to make some of these points, and Agnieszka Leszczynski and I have a CFP out for the Chicago meetings next year which we certainly hope will include these issues.

(Via this excellent piece on NewApps)

Foucault and Big Data

Very interesting comments on Foucault and Big Data by Frédéric Gros, who is one of the editors of Foucault’s Collège de France lectures:

Foucault’s great studies of disciplinary society are useful above all because they allow us to delineate, through contrast and comparison, the digital governmentality that subjects us to new forms of control, which are less vertical, more democratic and, above all, no longer burdened by any anthropological ballast. Homo digitalis today participates in, is the primary agent of, the surveillance of himself. Digital society is becoming a form of mutualised control. We should today consider the treatment of ‘big data’ working with Foucault, basing ourselves on him, but seeing further than he could. Because we have gone well beyond the disciplinary age. Security’s new concepts are no longer imprisoning individuals and normative consciousness, but rather traceability and algorithmic profiling.

More here. (Via Stuart Elden.)

CFP: Spatial Big Data & Everyday Life (AAG 2015)

Call for Papers: Spatial Big Data & Everyday Life
Association of American Geographers Annual Meeting
21-25 April 2015

Agnieszka Leszczynski, University of Birmingham
Jeremy Crampton, University of Kentucky
“What really matters about big data is what it does” (Executive Office of the President, 2014: 3).

Many disciplines, including the economic and social sciences and (digital) humanities, have taken up Big Data as an object and/or subject of research (see Kitchin 2014). As a significant proportion of Big Data productions are spatial in nature, they are of immediate interest to geographers (see Graham and Shelton 2013). However, engagements of Big Data in geography have to date been largely speculative and agenda-setting in scope. The recently released White House Big Data report encourages movement past deliberations over how to define the phenomenon towards identifying its material significance as Big Data are enrolled and deployed across myriad contexts – for example, how content analytics may open new possibilities for data-based discrimination. We convene this session to interrogate and unpack how Big Data figure in the spaces and practices of everyday life. In so doing, we are questioning not only what Big Data ‘do,’ but also how it is they realize particular kinds of effects and potentialities, and how the lived reality of Big Data is experienced (Crawford 2014).

We invite papers along methodological, empirical, and theoretical interventions that trace, reconceptualize, or address the everyday spatial materialities of Big Data. Specifically we are interested in how Big Data emerge within particular intersections of the surveillance, military, and industrial complexes; prefigure and produce particular kinds of spaces and subjects/subjectivities; are bound up in the regulation of both space and spatial practices (e.g., urban mobilities); underwrite intensifications of surveillance and engender new surveillance regimes; structure life opportunities as well as access to those opportunities; and/or change the conditions of/for embodiment. We intend for the range of topics and perspectives covered to be open. Other possible topics include:

• spatial Big Data & affective life
• embodied Big Data; wearable tech; quantified self
• algorithmic geographies, algorithmic subjects
• new ontologies & epistemologies of the subject
• spatial Big Data as surveillance
• Big Data and social (in)equality
• “ambient government” & spatial regulation
• spatial Big Data and urbanisms (mobilities; smart cities)
• political/knowledge economies of (spatial) Big Data

We welcome abstracts of no more than 250 words to be submitted to Agnieszka Leszczynski and Jeremy Crampton by August 29th, 2014.

Crawford K (2014) The Anxieties of Big Data. The New Inquiry.

Executive Office of the President (2014) Big Data: Seizing Opportunities, Preserving Values. The White House.

Graham M and Shelton T (2013) Guest editors, Dialogues in Human Geography 3 (Geography and the future of big data, big data and the future of geography).

Kitchin R (2014) Big Data, new epistemologies and paradigm shifts. Big Data and Society (1): In Press. DOI: 10.1177/2053951714528481.

I was reading the Oxford Classical Dictionary (OCD) this morning to look up more details on Euripides’ play Erechtheus, which survives only in passages quoted in other works and, rather amazingly, in some papyrus that was used to wrap a mummy. The reason for this search is the new book by Joan Breton Connelly, The Parthenon Enigma, which summarizes her long-standing theory that the frieze on the Parthenon depicts a human sacrifice, discussed in the New Yorker here ($). Connelly uses quotations from the play (among other things) to justify this claim, since in the play the daughters of Erechtheus (an early/mythological king of Athens) volunteer to die after an oracle declares that only the sacrifice of a royal virgin will guarantee victory in war. (I use the third edition OCD, but there is a fourth edition.)

Coincidentally, near the entry for Erechtheus in the OCD is the entry for “Etymology,” which I was intrigued to see was already a contested theory in Greek and Roman times, with Socratic debate (in the Cratylus) and textbooks (Varro’s De lingua Latina). The two main theories were that words were a matter of convention (nomos), opposed by the idea that words bore some natural relationship between sign and signified (physis). The latter view prevailed, according to the OCD.

In the Cratylus (still summarizing the OCD), Cratylus argues for physis against Hermogenes, who argues for nomos. The dialogue raises some influential etymological concepts, including the idea that language comes from a few basic building blocks or stoicheia (422a).

The entry also discusses Augustine’s De dialectica, which may have been based on Varro (116-27 BCE). There are some interesting ideas here, and the OCD gives Augustine’s summary of Stoic approaches to etymology and word derivation:

(1) through similarity (a similitudine) with the sound of the word (onomatopoeia), as in the case of balatus, the ‘bleating’ of sheep, or with its impression on the senses, as with the harsh-sounding vepres, ‘brambles’; (2) through similarity between one thing and another: so crura, ‘legs’, are named for crux, ‘cross’, because legs are long and hard like a wooden cross; (3) through various forms of proximity (a vicinitate), as with for example horreum, ‘granary’, which is named from the thing it contains, hordeum, ‘barley’; (4) from contrariety (e contrario), as with lucus, ‘a grove’, because minime luceat, ‘it has little light’, and bellum, ‘war’, because it is not a res bella, ‘a pretty thing’. Examples of all these types can be found in Varro (OCD entry, Etymology).

This passage affords us some comparisons with Foucault’s discussions of etymology and similarity in The Order of Things, although as far as I know he mentions neither Augustine nor Varro in that book. Nevertheless, in Chapter 2, “The Prose of the World,” Foucault outlines four notions of similarity in the sixteenth century (“the time when resemblance was about to relinquish its relation with knowledge and disappear”): convenientia (spatial proximity); aemulatio, or non-proximal imitation [in the computer world we speak of a computer “emulator”]; analogy, which comprises nearness and farness at the same time; and sympathies [cf. sympathetic magic, as for example in one of the “solutions” to the problem of longitude, where a “powder of sympathy” was proposed that could work simultaneously across the distance between the ship and its home port], a drawing of things together in a movement (hence, change).

What is interesting here is that all of these must be read, even those that appear hidden (coded, encrypted or secret). This reading is done through signs and signatures. This clearly points to the need for knowledge; the knowledge of the adept or initiate and conceivably to a discipline (semiology). You can see here an indication of the mutuality between ciphers and scholars (as well as mantics).

Harvey’s hypothesis: does it apply to Geoweb companies? @profdavidharvey

In his latest book, Seventeen Contradictions and the End of Capitalism, David Harvey offers seventeen contradictions he says lie at the heart of capital. Contradiction 8 concerns technology, and Harvey says there are two main contradictions here: one to do with technology’s relation to nature and one to do with its relation to labor. It is the latter he takes up in this chapter.

Harvey argues that technology is a means to an end for capital. That end is “profitability and capital accumulation” (p. 102). How does it reach such profitability in the context of technologies?

Throughout its history, capital has invented, innovated and adopted technological forms whose dominant aim has been to enhance capital’s control over labour in both the labour process and the labour market.

There are two elements here. One is innovation, or more accurately the innovation process and the need for constant innovation. Here innovation is understood not so much for what it produces (the products and services used) but for what it enables and protects, namely profits. On this view, a site such as c|net, devoted to reviewing the products, is irrelevant to a proper understanding of today’s society, except insofar as it fuels consumerism and the consumption of labor’s outputs.

The other element is control over labor. This control aims at the “disciplining and disempowerment of the worker.” It includes a range of technologically manifested conditions: increasing automation, Taylorism and the factory system, an emphasis on productivity, the prevention of organized labor, and so on.

Harvey’s contradiction, then, is that if a docile labor force is the source of all profit, replacing it with automation in the workplace will undermine that profit. Yet the evidence Harvey cites shows that this is exactly what is happening; for example, increasing computer capacity and speed.

A consequence of Harvey’s contradiction of falling profit margins is the turn to labor supplies that are ever cheaper and productivity-driven. This is familiar to us as back-officing and outsourcing; for example, the iPad factories in China, or Samsung’s reported problems with child labor and worker suicides. Harvey says we are heading into “dangerous territory” (p. 108).

So all this raises the question of how much of this is occurring at the forefront of geographical innovation, namely the geoweb and geolocational technologies.**

Harvey’s argument yields a number of testable hypotheses which could roughly be expressed as:

–are geoweb laborers experiencing increasing “control”? For example, code jockeys and so-called code monkeys (see Harvey’s comment about “trained gorillas,” p. 103)?
–has productivity experienced constant growth? For example, what is the life cycle of a product (e.g., a GIS or web-based geoweb app)?
–have geoweb companies outsourced labor to Asia, Africa?
–on the consumer side, how are geoweb technological innovations marketed?
–are we seeing a decreasing regulatory role over labor in this sector of the economy?

I pose these hypotheses not because I know (or suspect) the answers, but because I genuinely want to find out. I find them very interesting, and they lead me to think that only a hands-on study, such as an ethnography of these companies, conjoined with some kind of economic overview of the geoweb sector, would provide the answers. I don’t know if other geographers would find that interesting enough to fund, but somebody should do this study!

**By geoweb I mean the “constitutive production, governance, and technologies of the merging of georeferenced information with the web, as well as the workers and consumers of networked geographic information situated in a neoliberal economy. This includes many forms of mapping, cartography and GIS.”