Category Archives: Big Data

Building age maps

This was the year the building age map broke into the mainstream. Or maybe I’m just saying that because I chose the challenge of making one as the Final Project in my intro GIS and mapping class. Nevertheless, there are more and more examples available. In my estimation, their mix of colors produces a captivating aesthetic, almost like a Chagall stained-glass window.

The first one to draw my attention was the Dutch building age map. Covering the entire country, it includes an incredible 9.8 million buildings!

The latest one is also interesting, though a read of the small print reveals it is not strictly a building age map.


“Modal” building age

This is the area around Cheltenham and Bishop’s Cleeve, where my mum lives. Here, blue indicates older housing and red newer (the opposite of the Dutch map’s scheme, and one that doesn’t strictly conform to cartography textbooks, or to Edward Tufte’s guideline of one symbolic dimension per data dimension). But beware this note:

Important note: Classifications are an average across the local area, rather than for individual houses, therefore the colour coding on a building is not necessarily indicative of that building.

There’s also a lot of missing data where building age is not captured. Looks like the Brits still have some way to go to catch up with the Dutch!

In my class project, students created an overall map of Lexington, KY building ages as a first step. For the second step, they chose a block or hyper-local area and supplemented it with other data they collected or created. This way the project balanced a sense of structure with a sense of freedom (students are often stymied by too much freedom and revert to mundane projects such as mapping bike racks).
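If you want to try the first step yourself, here is a minimal sketch of the kind of map the students started from, assuming a building-footprint layer with a year-built attribute; the file path and the year_built column name are placeholders, not the actual class data.

```python
# Minimal sketch: a building-age choropleth from a footprint layer.
# The path and the "year_built" column are assumptions; substitute whatever
# your local open data portal actually provides.
import geopandas as gpd
import matplotlib.pyplot as plt

buildings = gpd.read_file("lexington_buildings.geojson")  # placeholder path

# Drop footprints with no recorded construction year rather than mapping them as zero.
buildings = buildings[buildings["year_built"].notna()]

fig, ax = plt.subplots(figsize=(10, 10))
buildings.plot(
    column="year_built",   # the numeric attribute that drives the color ramp
    cmap="viridis",        # any sequential colormap will do
    linewidth=0,           # suppress outlines so small footprints stay legible
    legend=True,
    ax=ax,
)
ax.set_axis_off()
ax.set_title("Lexington buildings by year built")
plt.savefig("building_age_map.png", dpi=300, bbox_inches="tight")
```

Reversing the color ramp, or choosing a diverging one, is exactly where the cartographic judgment discussed above comes in.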

It’s tough to choose an example here as they were all very good (and we’ll be posting them all as PDFs on New Maps soon!), but here’s one that achieves a particularly strong sense of layout, in addition to great use of color, careful legend design, and a distinctive way of collecting data.


Final Project by Kelly Jackson and Ben Mills

 

These are done in ArcMap, which is not particularly known for its design aesthetic, and so are doubly impressive. On the left you can see Lexington by building age, with the bluer outer suburbs of more recent vintage. In the inset, the students have used a data source called iTreeCanopy, which allows you to crowdsource the presence or absence of tree cover (canopy) from a digital image of an area. As with Amazon Mechanical Turk tasks, you are presented with an image and decide, for each point you click on, whether there is a tree there or not. Do this enough times, and you build a canopy layer of sorts. The end result is a comma-separated file containing lat-long points and a tree designation. (For more on the methodology, see here.)
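To give a sense of what that last step looks like, here is a minimal sketch of turning such a file into a GIS point layer and a rough canopy estimate; the column names (lat, lon, cover) are assumptions about the export, not iTreeCanopy’s documented schema.

```python
# Minimal sketch: convert an iTreeCanopy-style export (lat-long points plus a
# tree/non-tree designation) into a point layer and a simple canopy estimate.
# Column names and values here are assumed, not the tool's documented schema.
import pandas as pd
import geopandas as gpd

points = pd.read_csv("itree_canopy_sample.csv")  # placeholder path

gdf = gpd.GeoDataFrame(
    points,
    geometry=gpd.points_from_xy(points["lon"], points["lat"]),
    crs="EPSG:4326",  # plain lat-long, i.e. WGS84
)

# The share of sampled points classified as tree cover approximates canopy percentage.
canopy_share = (gdf["cover"] == "Tree").mean()
print(f"Estimated canopy cover: {canopy_share:.1%}")

# Save as a layer that can sit alongside the building-age map in the inset.
gdf.to_file("canopy_points.gpkg", driver="GPKG")
```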

Why not just use an existing canopy layer for Lexington, you ask? Because the available data are nearly two decades old (1998). It would be bad manners to complain about free open data without doing something about it, so hopefully in the long run we can establish partnerships with local data providers and share the improved data layer, at least locally.

Papers from “Where’s the Value? Emerging Digital Economies of Geolocation” session

The written texts from the AAG panel session I co-organized with Agnieszka Leszczynski, entitled “Where’s the Value? Emerging Digital Economies of Geolocation,” are now available. The panelists were Elvin Wyly (UBC), Rob Kitchin (National University of Ireland Maynooth), Agnieszka Leszczynski (University of Birmingham) and Julie Cupples (University of Edinburgh).

Two were posted to blogs (linked below) and two are reproduced below. Although I posted links to a couple of these previously, this blog entry collects them all. (Two panelists, Sam Kinsley and David Murakami Wood, were regrettably unable to attend.)

Thanks again to all!

~ ~ ~

Elvin Wyly: “Capitalizing the Records of Life” (see below)

Rob Kitchin: “Towards geographies of and produced by data brokers”

Agnieszka Leszczynski: “What makes location valuable? Geolocation as evidence, meaning, & identity” (see below)

Julie Cupples: “Coloniality, masculinity and big data economies”

And here again is the audio from the session.

~ ~ ~

“Capitalizing the Records of Life”
Elvin Wyly, UBC

Let me begin with a confession.  I did some homework reading the CVs of my colleagues on this panel, and this is where I found the answer to our central question, “Where’s the value in the emerging digital economies of geolocation?”  I.  Am.  In.  Awe.  It’s here, right here, right now, in the intersecting life-paths of extraordinary human geographers coming together to share the results of labor, creativity, critical insight and commitment.  Julie Cupples’ work on decolonizing education and geographies of media convergence intersects with Agnieszka Leszczynski’s inquiry into the gendered dimensions of the erosion of locational privacy and the “new digital spatial mediations of everyday life,” and Sam Kinsley’s ‘Contagion’ project on the movement of ideas through technologically mediated assemblages of people, devices, and algorithms.  David Murakami Wood’s Smart Cities project and editorial assemblage in the journal Surveillance and Society respond directly to the challenges and opportunities in Rob Kitchin’s (2014) call in The Data Revolution for “a more critical and philosophical framing” of the ontology, epistemology, ideology, and methodology of the “assemblage surrounding” the production and deployment of geolocational data.  And many of these connections have been the subject of wise anticipatory reflections on Jeremy Crampton’s Open Geography, where the adjective and the verb of ‘open’ in the New Mappings Collaborative give us a dynamic critical cartography of the overwhelming political and knowledge economies of spatialized information.

As I read the CVs of my panelists, it became obvious that the value of a record of a life — that’s the Latin curriculum vitae — is the new frontier of what James Blaut (1993) once called The Colonizer’s Model of the World, and what Kinsley (2014) has diagnosed as “the industrial retention of collective life.”  Smart cities, the social graph, the Internet of Things, the Quantified Self, the Zettabyte (2⁷⁰ bytes) Age analyzed by Kitchin:  all of this signifies a new quantitative revolution defined by the paradox of life in the age of post-humanist human geography.  In the closing lines of Explanation in Geography, David Harvey announced “by our models they shall know us” — a new generation of human geographers bearing the models and data of modern science; today, it’s the algorithms, models, and corporations that arrive bearing humans — millions and billions of them — whose curricula vitae can be measured, mapped, and monetized at scales that are simultaneously personalized and planetary.  Facebook alone curates more than 64 thousand years of human social relations every day (four-fifths of it on mobile devices and four-fifths of it outside the U.S. and Canada) and LinkedIn CEO Jeffrey Weiner (quoted in MarketWatch, 2015) recently declared, “We want to digitally map the global economy, identifying the connections between people, companies, jobs, skills, higher educational organizations and professional knowledge and allow all forms of capital, intellectual capital, financial capital, and human capital to flow to where [they] can best be leveraged.”

Capitalized curricula vitae, however, are automating and accelerating what Anne Buttimer once called the ‘danse macabre’ of the knowledge economies of spatialized information, because the deceptively friendly concept of ‘human capital’ is in fact a deadly contradiction:  capital is dead labor, the accumulated financial and technological appropriation of surplus value created through human labor, human creativity, and human thought.  Buttimer’s remark about geospatial information being “a chilly recording by a detached observer, a hollow rattle of bones” hurt — because this is what she said in a conversation with the legendary time-geographer Torsten Hägerstrand, who in the 1940s spent years with his wife Britt in the church-register archives of a rural Swedish parish to understand “a human population in its time and space context.”  Here’s what Hägerstrand (2006, p. xi) recalls:

 “[We] worked out the individual biographies of all the many thousands of individuals who had lived in the area over the last hundred years.  We followed them all from year to year, from home to home, and from position to position.  As the data accumulated, we watched the drama of life unfold before our eyes with graphic clarity.  It was something of stark poetry to see the people who lived around us, many of whom we knew, as the tips of stems, endlessly twisting themselves down in the realm of times past.”

Hägerstrand wrote that he was disturbed and alarmed by Buttimer’s words, and I am too, because Allan Pred (2005, p. 328) began his obituary for Hägerstrand by quoting Walter Benjamin, emphasizing that it is not only knowledge or wisdom, but above all real life — “the stuff stories are made of” — which “first assumes transmissible form at the moment of …death.”  But just as “every text has a life history” (Pred, 2005, p. 331) that comes to an end, now Allan Pred’s curriculum vitae has also assumed transmissible form, of the market-driven, distorted sort that you can track through the evolving Hägerstrandian time-space prisms of the digitized network society.  Hägerstrand is dead, but he has a Google Scholar profile that’s constantly updated by the search robots, and the valorized geolocatable knowledge of his citations puts him in a danse macabre of apocalyptic quantification:  he is “worth” only 1.093 percent of the valorization of another dead curriculum vitae, that of Foucault, who’s also on Google Scholar.  The world is falling in love with geography, but we don’t need more than just a few human geographers to do geography, thanks to the self-replicating algorithms and bots of the corporate cloud of cognitive capital.

The geolocatable knowledge economy is thus a bundle of contradictions and the endgame of the organic composition of human capital.  Human researchers spending years in the archives to build databases are now put into competition with the fractal second derivatives of code:  how do I balance my respect and reverence for our new generation of geographers screen-scraping APIs and coding in R, D3, Python, and Ruby on Rails without giving up what we have learned from the slow, patient, embodied labor of previous generations working by hand?  I see the tips of stems, not just in Hägerstrand’s small Swedish parish, but right here, in this room.  Tips of stems, endlessly twisting down in the realm of times past — but in today’s times where each flower now faces unprecedented competition in every domain:  jobs, research support, academic freedom, human care, human recognition, human attention.  Tips of stems, endlessly twisting through time-spaces of a present suffused with astronomical volumes of geographical data in what the historian George Dyson (2012) calls the “universe of self-replicating code.”  Tips of stems, tracing out an entirely new ontology of socio-spatial sampling theory defined by the automated mashup analytics that now combine Hägerstrand’s time-space diagrams with Heisenberg’s observational uncertainties, Alan Turing’s (1950) ‘universal machine,’ and Foucault’s archaeology of knowledge blended with Marx’s conception of the “general intellect” and Auguste Comte’s notion of the ‘Great Being’ of accumulated intergenerational human knowledge, tradition, and custom.  Tips of stems, tracing lifeworlds of a situationist social physics that treats smartphones as “brain extenders” (Kurzweil, 2014) converging into a planetary “hive mind” (Shirky, 2008) while reconfiguring the observational infrastructures and human labor relations of an empiricist hijacking of positivism:  if Chris Anderson (2008) is correct that the petabyte age of data renders the scientific method obsolete, then who needs theory?

We all need theory — we humans.  Theory is the intergenerational inheritance of human inquiry, human thought, and human struggle.  Let me be clear:  I mean no disrespect to the extraordinary achievements of the new generation of data revolutionaries represented by my distinguished panelists, and all of you who can code circles around my pathetic, rusty do-loop routines in FORTRAN, COBOL, and SAS.  Tips of stems, twisting themselves down into the realms of human history:  take a look around, at one of the last generations of human geospatial analysts, before we’re all replaced by algorithmic aggregation.  Yesterday’s revolution was humans doing quantification.  Today’s revolution is quantification doing humans.

References

Anderson, Chris (2008).  “The End of Theory:  The Data Deluge Makes the Scientific Method Obsolete.”  Wired, June 23.

Blaut, James (1993).  The Colonizer’s Model of the World.  New York:  Guilford Press.

Dyson, George (2012).  “A Universe of Self-Replicating Code.”  Edge, March 26, at http://edge.org

Hägerstrand, Torsten (2006).  “Foreword.”  In Anne Buttimer and Tom Mels, By Northern Lights:  On the Making of Geography in Sweden.  Aldershot:  Ashgate, xi-xiv.

Kitchin, Rob (2014).  The Data Revolution:  Big Data, Open Data, Data Infrastructures and Their Consequences.  London:  Sage Publications.

Kinsley, Sam (2014).  “Memory Programmes:  The Industrial Retention of Collective Life.”  Cultural Geographies, October.

Kurzweil, Ray (2014).  Comments at ‘Will Innovation Save Us?’ with Richard Florida and Ray Kurzweil.  Vancouver:  Simon Fraser University Public Square, October.

MarketWatch (2015).  “LinkedIn Wants to Map the Global Economy.”  MarketWatch, April 9.

Pred, Allan (2005).  “Hägerstrand Matters:  Life(-path) and Death Matters — Some Touching Remarks.”  Progress in Human Geography 29(3), 328-332.

Shirky, Clay (2008).  Here Comes Everybody:  The Power of Organizing Without Organizations.  New York:  Penguin.

Turing, Alan M. (1950).  “Computing Machinery and Intelligence.”  Mind 59(236), 433-460.

 

~ ~ ~

“What makes location valuable? Geolocation as evidence, meaning, & identity”
Agnieszka Leszczynski, University of Birmingham

I want to invert the question that Jeremy and I posed to the panel when organizing this session: rather than asking ‘where is the value’ in geolocation, I want to ask what it is that makes geolocation valuable. If particular kinds of economies are emerging around location, it is because geolocation itself is somehow intrinsically valuable, and I’d like to make some preliminary propositions to this end.

Over the last few years I have been particularly interested in the ways in which emergent surveillance practices of the security agencies, made broadly known to us through the as yet still-unfolding Snowden revelations, are crystallizing around big data – its collection, mining, interception, aggregation, and analytics. Specifically, I am interested in the ways in which locational data figures as central within these emergent regimes of dataveillance. Indeed, at the close of 2013, Barton Gellman and Ashkan Soltani, reporting in the Washington Post, identified at least ten American signals intelligence programmes or SIGADs that explicitly sweep up locational data – i.e., where location data is the target or object of data capture, interception, and aggregation.

  • Under a SIGAD designated HAPPYFOOT, the NSA taps directly into mobile app data traffic – often sent unencrypted, in the clear – that streams smartphone locations to location-based advertising networks organized around the delivery of proximately relevant mobile ads. This locational data, which is often determined through mobile device GPS capabilities, is far higher-resolution than network location, allowing the NSA “to map Internet addresses to physical locations more precisely than is possible with traditional Internet geolocation services”;
  • Documents dating from 2010 reveal that the NSA and GCHQ exploit weaknesses in ‘leaky’ mobile social and gaming applications that veil secondary data-mining operations behind primary interfaces, piggybacking off of commercial data collection by siphoning up personal information, including location, under a signals intelligence program code-named ‘TRACKER SMURF’ after the children’s animated classic;
  • In perhaps the most widely publicized example, the NSA collects over 5 billion cell phone location registers off of cell towers worldwide, bulk processing this location data through an analytics suite code-named CO-TRAVELLER, which looks to identify new targets for surveillance on the basis of parallel movement with existing targets of surveillance – i.e., individuals whose cell phones ping off of the same cell towers in the same succession at the same time as individuals already under surveillance (a toy sketch of this kind of co-movement matching follows this list);
  • Just a few months ago, it was leaked that the CSE, or Canada’s version of the NSA, was tracking domestic as well as foreign travelers via Wi-Fi at major Canadian airports for up to two weeks as they transited through the airports and subsequently through other ‘nodes’ including other domestic and international airports, urban Wi-Fi hotspots, transport hubs, major hotels and conference centers, and even public libraries both within Canada and beyond in a pilot project for the NSA;
  • and, most recently, under a SIGAD code-named LEVITATION, the CSE has been demonstrated to be intercepting data cable traffic to monitor up to 15 million file downloads a day. Particularly significant in the leaked CSE document detailing this programme is that the CSE explicitly states that it is looking to location data to improve LEVITATION capabilities for intercepting both GPS waypoints and “[d]evices close to places” so as to further isolate and develop surveillance targets, including those carrying and using devices within proximity of designated locations.
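To make the co-movement logic concrete, here is a minimal sketch of that kind of matching – flagging devices that register on the same towers, in the same hour buckets, as a device already under surveillance. The data, field names, and threshold are invented; the actual CO-TRAVELLER analytics are not public.

```python
# Toy sketch of CO-TRAVELLER-style co-movement matching: flag devices whose
# phones register on the same towers, in the same hour buckets, as a device
# already under surveillance. Invented data; the real analytics are not public.
from collections import defaultdict

# (device_id, tower_id, hour_bucket) location registers
registers = [
    ("target", "T1", 9), ("target", "T4", 10), ("target", "T7", 11),
    ("alice",  "T1", 9), ("alice",  "T4", 10), ("alice",  "T7", 11),
    ("bob",    "T2", 9), ("bob",    "T4", 10), ("bob",    "T9", 11),
]

def co_travellers(target, records, min_overlap=3):
    """Devices sharing at least min_overlap (hour, tower) pings with the target."""
    target_pings = {(hour, tower) for d, tower, hour in records if d == target}
    counts = defaultdict(int)
    for device, tower, hour in records:
        if device != target and (hour, tower) in target_pings:
            counts[device] += 1
    return [d for d, n in counts.items() if n >= min_overlap]

print(co_travellers("target", registers))  # -> ['alice']
```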

So the question is, why geolocation? Why is it of such great interest to the security agencies? And here I want to argue that it is of interest because it is inherently valuable, and uniquely so among other forms of personally identifiable information (PII). And this value is latent in the spatio-temporal and spatial-relational nature of geolocation data.

  • the spatio-temporal nature of many spatial big data productions means that they may be enrolled as definitive evidence of our complicity or involvement in particular kinds of socially disruptive events or emergencies by virtue of our presence (or, as in the case of CO-TRAVELLER, our co-presence and co-movement) in particular spaces at particular times;
  • furthermore, the longitudinal retention of highly precise, time-stamped geolocational data traces allows for the reconstruction of detailed individual spatial histories, which, like the CO-TRAVELLER example, similarly participate within what Kate Crawford has recently characterized as emergent truth economies of big data, in which data is truth;
  • the relational nature of spatial big data productions, in which our data may be used to discern our religious, ethnic, political and other kinds of personal affiliations and identities on the basis of the kinds of places that we visit and the ability to establish linkages with other PII across data flows;
  • and, in this vein, the ways in which locations are inherently meaningful – they may be as revealing of highly sensitive information about ourselves as our DNA. For instance, the specialty of a medical office that we visit may reveal that we have a degenerative genetic disease, and the nature of that disease – information that we otherwise socially understand as some of the most private information about ourselves;
  • and, of course, the ways in which location is not only revealing of identity positions, but is itself identity – for example, a group of researchers determined that unique individuals could be identified from the spatial metadata of only four cell phone calls at a very high confidence level (a toy sketch of this uniqueness test follows this list).
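A minimal sketch of what such a uniqueness test looks like, on invented traces rather than the researchers’ actual call records – the point is simply that a handful of (place, hour) points is usually enough to single one person out:

```python
# Toy sketch of the four-point uniqueness test: how often does a random handful
# of (place, hour) points from one person's trace match that person alone?
# Invented traces, not real call-detail records.
import random

traces = {
    "u1": {("A", 8), ("B", 9), ("C", 12), ("D", 18), ("A", 20)},
    "u2": {("A", 8), ("E", 9), ("C", 13), ("F", 18), ("G", 21)},
    "u3": {("H", 7), ("B", 9), ("C", 12), ("D", 19), ("A", 22)},
}

def uniqueness(user, k=4, trials=200):
    """Share of k-point samples from `user` that match no other trace."""
    hits = 0
    for _ in range(trials):
        sample = set(random.sample(sorted(traces[user]), k))
        matches = [u for u, trace in traces.items() if sample <= trace]
        hits += matches == [user]
    return hits / trials

for user in traces:
    print(user, uniqueness(user))
```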

So in asking where the value is in geolocation, my take is that it is valuable – both to the intelligence apparatuses that I have highlighted here and to corporate entities – because it is uniquely sensitive, revealing, and identifying among other forms of PII.

~ ~ ~

Our algorithmic society: robots!

I’ve been thinking a lot about robots lately. The most immediate reason is that I’m on a panel at a forthcoming conference (the AAG in Chicago). No doubt a somewhat odd topic, at first glance. (I was mentioning this panel to a colleague and he remarked that geography certainly seemed to be a discipline of many interests!).

Anyhow, this isn’t the only reason, because robots are linked with any number of interesting developments and questions, so you can’t or shouldn’t just consider them by themselves. “Robots” can be a synecdoche for automation, computation, wider issues of the changing economy, and so on. Recently Sue Halpern suggested that algorithms are virtual robots.

I’m not sure what direction my fellow panelists will take; perhaps they’ll consider the social implications of interacting with robots, or the role robots can play in our personal everyday lives; for example, how they might substitute for human contact (fembots and love dolls) and whether that’s tied to eg falling fertility rates, as my colleague Heidi Nast has suggested. Some of this harkens back to the origins of the term, which you may know about: the 1921 play R.U.R. by the Czech writer Karel Čapek. His brother Josef suggested the word robot, meaning “serf labor” or corvée. There is a long tradition of robots taking over, from the Terminator to (perhaps?) Frankenstein’s monster and the golem. For the pedantic, yes, some of these are androids or artificial lifeforms rather than robots, but I don’t think that changes the point.

I think that’s all very interesting but not quite where I want to go. I keep coming back to that thought linking algorithms and robots by Sue Halpern. It’s true of course that algorithms do automated and repetitive labor. It’s true also that algorithms can learn. I have one device like that in my bedroom. Her name is Alexa, and you can buy her from Amazon (I wrote about it here). Alexa learns your voice and improves her response accuracy and relevance. Amazon’s own website uses this all the time. Think of the “if you liked this, then you’ll love this!” recommendations there–some of them leveraging human social input captured through likes, dislikes and “I found this comment useful” clicks.
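As a rough illustration of how the “if you liked this, then you’ll love this!” pattern can fall out of nothing more than collected clicks and likes, here is a toy item-to-item similarity sketch. It is not Amazon’s actual recommender, which is far more elaborate and not public; it is just the simplest version of the idea, with invented data.

```python
# Toy item-to-item recommendations from collected "likes": the simplest version
# of the "if you liked this, then you'll love this!" pattern. Invented data;
# real recommenders are far more elaborate.
from collections import Counter
from itertools import combinations

likes = {
    "ann":  {"atlas", "gps_unit", "field_notebook"},
    "ben":  {"atlas", "gps_unit"},
    "cara": {"atlas", "novel"},
}

# Count how often each pair of items is liked by the same person.
co_likes = Counter()
for items in likes.values():
    for a, b in combinations(sorted(items), 2):
        co_likes[(a, b)] += 1

def also_liked(item):
    """Items most often liked alongside `item`, best first."""
    scores = Counter()
    for (a, b), n in co_likes.items():
        if item == a:
            scores[b] += n
        elif item == b:
            scores[a] += n
    return [other for other, _ in scores.most_common()]

print(also_liked("atlas"))  # -> ['gps_unit', 'field_notebook', 'novel']
```

Scale that counting up to millions of shoppers and items, and keep it updating as new clicks arrive, and you have a crude version of the automated, repetitive, learning labor that makes the algorithm-as-robot analogy tempting.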

The issue of interest at the moment is whether robots are reshaping society – particularly economic relations – and, more pointedly, whether robots (and, if linked or even synonymous with algorithms, then also Big Data and the Internet of Things) will benefit us, and benefit us equally.

There’s a well-known graph that describes the increasing divergence between productivity and wages. Here’s the version given in David Harvey’s book A Brief History of Neoliberalism (pdf here):

Fig. 1.6, p. 25 of Harvey

This wage-productivity gap, which has opened up since roughly 1980, gets explained through two related but not identical arguments. The first is that robots did it (robots here being technological increases in automation and productivity). There’s lots of evidence that technology has increased productivity and, of course, put people out of jobs – a shift that can be measured by labor’s (as opposed to, say, capital owners’) share of GDP.

For a long time (the argument goes) it was accepted that a rising tide lifts all boats, and the rising tide of increased productivity raised overall human well-being. But now that technology is becoming more complex, the benefits of robots, so to speak, are going to those who are “technology-biased” in terms of social capital (well educated, can code, are in jobs that resist automation, etc).

Beyond this, there is an additional development: the benefits are going to a very small group, not just capital owners in general. This is often described as winner-takes-all (or most). It’s no longer sufficient to have a good education, be in a white-collar job, etc. You need more than this; for example, bargaining power as a CEO. But even that might not be enough if capital costs are driven down by robotization.

This is the argument put forward by Erik Brynjolfsson and Andrew McAfee in their book The Second Machine Age. They observe that the share of income going to labor has declined while profits are up, but that there’s a further gap between “superstars in a field and everyone else” (p. 146). They label this gap “talent-biased technical change.” They note that between 2002 and 2007 the top 1 percent got as much as two-thirds of the profits from the growth in the economy. Looking around, in many industries – including publishing, not usually known for creating extreme wealth – there are superstars pulling away from everyone else. This is often due to digital technologies.

Or take the ratio of CEO pay to the average worker’s, which has increased from 70 in 1990 to more than 300 in 2005 – again, according to them, because of information technology. Specifically, such technology allows for more direct, widespread knowledge of and control over decision-making. CEOs are more in touch with – and more responsible for – the workplace, rather than working through chains of assistants. (To be clear, these are not new arguments and Brynjolfsson and McAfee do not present them as such, pointing to work on this from the early 1980s onwards, especially the work of Sherwin Rosen. Inequality per se has been studied for decades, eg Thomas Piketty and Emmanuel Saez’s influential 2003 paper used tax data to look at income inequality [Harvey cites it, among others].) Robert Reich calls this the “share-the-scraps” economy rather than the sharing economy. Without redistribution of wealth, there’ll be no one able to afford to buy the “iEverything.”

But Brynjolfsson and McAfee do try to relate all this to technology, robots and automation. Although they are largely technological optimists, believing that benefits take time to come through with innovation, they do deal with the “spread” of wealth being unequal. So it’s not that “robots are going to eat all the jobs,” to use the common descriptor. Marc Andreessen (co-creator of the Mosaic web browser) makes the case against this clearly, arguing that this technology puts the means of production in everyone’s hands:

What that means is that everyone gets access to unlimited information, communication, and education. At the same time, everyone has access to markets, and everyone has the tools to participate in the global market economy.

What the consequences of that will be is less clear, as is the question of whether “everyone” gets equal access to this information tech (presumably Andreessen is wildly in favor of net neutrality [ans: he is and he isn’t!]). But even if they do, if wages are so low you can’t be a consumer, or if the Internet does not unleash vast creativity but rather vast numbers of lolcats and people eating chips on their couch if they don’t have to work… Of course the producer-consumer is intriguing all the same and explains a lot of our work in New Maps for example, but it is not unproblematic.

All of this is very interesting. But what’s the other side, if robots aren’t going to eat all the jobs? Well, we’ve already met it in the work of David Harvey and authors such as Jacob Hacker and Paul Pierson, who argued in their book Winner-Take-All Politics that it was politics that ate all the jobs. Or, for them, US politics. Or, for Harvey, neoliberalism as a deliberate political project. Hacker and Pierson go through some of the same inequalities but hardly mention technology, except to dismiss it as the cause. What they refer to as skill-biased technological change (SBTC) supposedly has led to wealth concentrations among those who are well educated and technologically adapted. Except, they argue, it hasn’t. The people at the very top are no better educated than those they’re (rapidly) pulling away from.

Second, they argue that if SBTC occurred in America it should occur elsewhere too, especially in places that are at least as connected to the Internet or have undergone similar technological transitions. But they argue, against some of the people cited above, that it hasn’t happened elsewhere:

there is more inequality among workers with the same level of skills (measured by age, education, and literacy) in the United States than there is among all workers in some of the more equal rich nations

They provide a graph to illustrate how different the US case is.

Instead of robots, then, Hacker and Pierson place the blame on American politics.

For Harvey, as you probably know, neoliberalism is an explicit political project aimed at restoring and ensuring the wealth of the few at the expense of the many. He makes some remarks about technology in Brief History, but devotes rather more to it in his book Seventeen Contradictions and the End of Capitalism (which will be the subject of an author-meets-the-critics session at the AAG with my colleague Sue Roberts). Harvey’s “contradiction” regarding technology has already been mentioned, namely that technology gives capital control over labor. “Robots do not (except in science fiction accounts) complain, answer back, sue, get sick, go slow…” Yet the replacement of social labor with robots “makes no sense, either politically or economically” (p. 104). This is his contradiction.

This is a contradiction for the same reason Reich points to: there’ll be nobody to buy all the new products that result from the increased productivity. It will have “catastrophic effects,” says Harvey. And robots won’t only eat entry-level jobs but also high-paying skilled jobs (he cites university professors).

Finally, let me say a few words about drones, or unmanned aerial systems, since I suspect that might be why I’m on the panel. Sue and I are putting the finishing touches to a longish paper on commercial and civil drones, and one of the points we touch on is that drones may represent the kind of technology that is usually seen as especially impactful or disruptive because it is general-purpose and applicable in many different sectors. Harvey discusses this. The estimates for the market size are truly staggering: one market research firm, the Teal Group, has estimated it to be worth some $90B globally over the next ten years. This will eclipse the amount spent on military drones. But drones are not being deployed in the US early and often: Sue uses the term the “post-permissive” age to describe how the skies are no longer as easy to fly in as they were over Iraq and Afghanistan, but are contested spaces. In the commercial sector too we are seeing a contested and struggled-over environment, what with regulation, competition, and the need to create a market for UASs where one does not exist (what are they for? How can I make money?). On top of that are public distrust, resistance to their surveillant capabilities, and so on.

Drones may therefore be a fairly unusual kind of robot because of this general-purpose nature. We’ll definitely have to see where they develop and under what conditions (and with what geo technologies), but as they get smaller and more autonomous they may be pretty significant.

New paper: “Collect it all”

I’ve posted the final manuscript draft of a new paper at SSRN: “Collect it all: National Security, Big Data and Governance.”

Here’s the abstract.

This paper is a case study of the complications of Big Data. The case study draws from the US intelligence community (IC), but the issues are applicable on a wide scale to Big Data. There are two ways Big Data are making a big impact: a reconceptualization of (geo)privacy, and “algorithmic security.” Geoprivacy is revealed as a geopolitical assemblage rather than something possessed, and is part of an emerging political economy of technology and neoliberal markets. Security has become increasingly algorithmic and biometric, enrolling Big Data to disambiguate the biopolitical subject. Geoweb and remote sensing technologies, companies, and knowledges are imbricated in this assemblage of algorithmic security. I conclude with three spaces of intervention: new critical histories of the geoweb that trace the relationship of geography and the state; a fuller political economy of the geoweb and its circulations of geographical knowledge; and legislative and encryption efforts that enable the geographic community to participate in public debate.

Keywords: Big Data, privacy, national security, geoweb, political economy

What is neoliberalism?

robinjames (@doctaj) on neoliberalism:

I want to hone in on one tiny aspect of neoliberalism’s epistemology. As Foucault explains in Birth of Biopolitics, “the essential epistemological transformation of these neoliberal analyses is their claim to change what constituted in fact the object, or domain of objects, the general field of reference of economic analysis” (222). This “field of reference” is whatever phenomena we observe to measure and model “the market.” Instead of analyzing the means of production, making them the object of economic analysis, neoliberalism analyzes the choices capitalists make: “it adopts the task of analyzing a form of human behavior and the internal rationality of this human behavior” (223; emphasis mine). (The important missing assumption here is that for neoliberals, we’re all capitalists, entrepreneurs of ourself, owners of the human capital that resides in our bodies, our social status, etc.) [3] Economic analysis, neoliberalism’s epistemontological foundation, is the attribution of a logos, a logic, a rationality to “free choice.”

I particularly like the way she enrolls Big Data and the algorithmic in her understanding of neoliberalism:

Just as a market can be modeled mathematically, according to various statistical and computational methods, everyone’s behavior can be modeled according to its “internal rationality.” This presumes, of course, that all (freely chosen) behavior, even the most superficially irrational behavior, has a deeper, inner logic. According to neoliberal epistemontology, all genuinely free human behavior “reacts to reality in a non-random way” and “responds systematically to modifications in the variables of the environment” (Foucault, summarizing Becker, 269; emphasis mine).

This approach ties in with what others have been saying for a number of years now on the algorithmic (I’m thinking of the work of Louise Amoore on data derivatives, among others) and the calculative (eg Stuart Elden’s readings of Foucault and Heidegger). I’ve just completed a paper on Big Data and the intelligence community which tries to make some of these points, and Agnieszka Leszczynski and I have a cfp out for the Chicago meetings next year which we certainly hope will include these issues.

(Via this excellent piece on NewApps)

Foucault and Big Data

Very interesting comments on Foucault and Big Data by Frédéric Gros, who is one of the editors of Foucault’s Collège de France lectures:

Foucault’s great studies of disciplinary society are useful above all because they allow us to delineate, through contrast and comparison, the digital governmentality that subjects us to new forms of control, which are less vertical, more democratic and, above all, no longer burdened by any anthropological ballast. Homo digitalis today participates in, is the primary agent of, the surveillance of himself. Digital society is becoming a form of mutualised control. We should today consider the treatment of ‘big data’ working with Foucault, basing ourselves on him, but seeing further than he could. Because we have gone well beyond the disciplinary age. Security’s new concepts are no longer imprisoning individuals and normative consciousness, but rather traceability and algorithmic profiling.

More here. (Via Stuart Elden.)