Author Archives: Jeremy

Is geolocational tracking legal?

These are fascinating questions which are still legally unresolved: is it legal to geolocationally track a person, or is it an unconstitutional search? And if it is a search, do you need a warrant? More precisely, is there a constitutional protection for geolocational privacy?

As I am not a lawyer I will pass you on to this lengthy blog post at SCOTUSblog. It discusses what is at the moment the key ruling by the Supreme Court on geolocational tracking, US v. Jones (2012).

In Jones the Supreme Court ruled in favor of the defendant, who had had a tracking device placed on his car by the police. The Court held that this was a search under the Fourth Amendment. Many privacy advocates at the time saw this as a victory for privacy rights. For a number of reasons however, the case did not substantially turn on geolocational privacy rights.

Subsequent to this case (it says here in Wikipedia) it has been found that you do indeed need a warrant for geolocational tracking:

“In October 2013, the Court of Appeals for the Third Circuit addressed the unanswered question of “whether warrantless use of GPS devices would be ‘reasonable — and thus lawful — under the Fourth Amendment [where] officers ha[ve] reasonable suspicion, and indeed probable cause’ to execute such searches.”[58] United States v. Katzin was the first relevant appeals court ruling in the wake of Jones to address this topic. The appeals court in Katzin held that a warrant was indeed required to deploy GPS tracking devices, and further, that none of the narrow exceptions to the Fourth Amendment’s warrant requirement (e.g. exigent circumstances, the “automobile exception”, etc.) were applicable.”

Here are questions I think are still unresolved:

1. If you have to get a warrant for GPS tracking, do you have to get one for other forms of geolocational tracking, such as cell site data? Or can these be obtained through other legal recourses? What about other non-physical (electronic) tracking?

2. Does the warrant have to provide probable cause of an individual, or can you get bulk warrants (as revealed in the case of the FISA Court) based on other grounds? Or on “reasonable suspicion”?

3. If you track someone briefly rather than for an extended period of time (as was the case in Jones), is this constitutional? (Justice Sotomayor argued it was not.) And if you were tracked briefly by a number of different methods (cell phone tracking, apps leaking locational info, emails, etc.), at what point would these add up to extended tracking? I’m thinking Big Data here, obviously.

4. Could locational information be considered voluntarily shared metadata? And would that make it less protected? So in the same way you are deemed to have shared your cell phone call (meta)data with a third party (the telco), have you shared your locational data? (I’ve been wondering about this since I saw the FISA Court order last June.)

5. Does technology itself reduce expectations of privacy (which may allow or justify increased government surveillance)? Alito: “New technology may provide increased convenience or security at the expense of privacy, and many people may find the tradeoff worthwhile. And even if the public does not welcome the diminution of privacy that new technology entails, they may eventually reconcile themselves to this development as inevitable.”

6. Conversely, as mentioned in Q.3, Sotomayor argues that awareness of being watched has a chilling effect: “Awareness that the Government may be watching chills associational and expressive freedoms. And the Government’s unrestrained power to assemble data that reveal private aspects of identity is susceptible to abuse.” In everyday practice, who is correct–Alito or Sotomayor? Another way to ask this is to ask whether Orwell or Greenwald is correct. In 1984 Orwell suggests that (at least to some extent) you get used to Big Brother watching you all the time and adapt. Greenwald, in his latest book, argues that it has a chilling effect. (Kate Crawford has gone further and argued that it produces “anxiety.”) No matter how you feel about it, the question remains: what are the effects of knowing or suspecting you’re under constant surveillance?

7. What is the legal situation in other countries?

Edited to add: the 11th Circuit has confirmed that you need a warrant for cell tower tracking (US v. Davis).

New spatial media

Leszczynski and Elwood have just published a great new paper on “Feminist Geographies of New Spatial Media” (Canadian Geographer) in which they credit me with coining the term “new spatial media.”

L&E have generously used this term a couple of times in their work. It does raise the tricky question of why these days I don’t tend to use the term myself. Part of the reason is that it was coined in response to a very specific moment.

In December 2007 Georgia State University (GSU) issued a cfp for university-wide “Area of Focus” initiatives that would take a unit or cluster of interests “to the next level.” With the support of my department (sort of…) I and a bunch of colleagues put together a proposal for a “New Spatial Media Center” which we submitted in March 2008. We asked for about half a million dollars over 3 years to fund PhD scholarships and symposia, etc.

Here’s the way I was thinking at the time (click for larger version):

Flow Chart

Readers of my Mapping book will recognize this as an alternative conceptualization of a more generalized figure that appears in the book as Figure 1.1 (which itself was somewhat hastily created for a talk I was giving at the UNC geography department!). (Edited to add: one of the things that prompted this post was that this conception of new spatial media makes no space for exactly the concerns that Leszczynski and Elwood are pointing to in their article.)

In the proposal I defined new spatial media as “web-based services such as locative media, volunteered geographic information and open-source spatial tools” (p. 1). The idea was that this would make the Center distinctive and provide a ramp to national recognition.

The Center was not funded, due in part to departmental politics: we put in two proposals, which diffused the impact of both. I privately thought at the time that our proposal was better, but a weak Chair couldn’t or wouldn’t eliminate one of the proposals, so they canceled each other out. Incidentally, when I left GSU in 2011 the Associate Dean offered to create such a center as an entreaty to stay. Decent of him, but UKY was calling!

In Mapping (mostly written from 2006 to April 2009, i.e., post Google Earth) I did use NSM as a chapter title. But I did so hesitantly, noting a competing flurry of other terms which I wasn’t prepared to discriminate between (p. 26). So we have locative media, VGI, neogeography, the geospatial web, etc. (Privately, though, I hoped that “geoweb” would win out, which I think it has done–for now!) As you may have noticed, my definition of NSM above works only by example, using other terms (locative media etc.) that probably only fuzz the issue. Maybe another reason we didn’t get the money!

Part of the origins of the phrase is obviously “new media.” My appropriation of the term was meant to be more specific and to highlight place and space. I had followed the rise of “hypertext” throughout the 1990s after the release to the public of web browsers in 1992-3, and the digital humanities, especially the Center for History and New Media at George Mason University (where I worked in the 1990s). It seemed to me that the spatial could unite researchers across a range of social and natural sciences, and we made a lot of effort in the proposal to get public health, biologists, computer scientists and historians on board! Hey, we even mentioned big data and ambient technologies! In any case it reflected my strong interests in inter-disciplinary research.

So the term has its specific context and intellectual forebears. In that sense it does not have a single “origin,” and it is tied in with other existing terms. So should it continue? This last point, that it is tied in to other discourses, is in fact its strongest value. “Geoweb” is a weird word and not likely to be known to the general reader. Most people have some idea of what new media are, and could reliably guess at new spatial media. Plus it emphasizes an active medium of processes at work, rather than a flat or static “web.” As such it can easily encompass the things L&E discuss, such as the uneven and gendered landscapes of “new geosocial technologies” (p. 2).

Finally, there’s that word “new”: obviously designed to be cutting edge and up to the moment. That’s probably why we’ve used it for the New Mapping Collaboratory! (A term, alas, I did not coin; as I recall Sue Roberts suggested “collaboratory” to me, and either Matt Z or Matt W or myself the other part, but they can correct me on that. Fwiw, we usually go by “New Maps” now.)

Thanks to Agnieszka and Sarah for prompting these reflections!

Cryptologic geographies

In 2011, a 29-year-old grad student at the University of Münster in Germany made some coding alterations to OpenSSL, the secure sockets layer (SSL/TLS) implementation used on half a million websites around the world, including banks, financial institutions and even Silicon Valley companies such as Yahoo, Tumblr, and Pinterest.

Unfortunately the code contained a security flaw. At one hour before midnight on New Year’s Eve 2011, a British computer consultant approved the new code and submitted it into the release stream, failing to notice the bug. The vulnerable code went into wide release in March 2012 as OpenSSL Version 1.0.1.

So began the “Heartbleed” vulnerability. For two years, until it was noticed in April 2014, any attacker could exploit the vulnerability to obtain the “crown jewels” of the server itself, that is, the master key or password that would unlock all the accounts and enable access to everything coming or going from the server (even if encrypted).

According to one of the discoverers of the vulnerability, Codenomicon:

OpenSSL is the most popular open source cryptographic library and TLS (transport layer security) implementation used to encrypt traffic on the Internet. Your popular social site, your company’s site, commerce site, hobby site, site you install software from or even sites run by your government might be using vulnerable OpenSSL.

About 2/3 of the web servers around the world were running the software when it was discovered and revealed in April 2014. Canada’s internal revenue site was affected, which led to about 900 taxpayers having their Social Insurance Numbers stolen. Most sites had to reissue their security certificates and have these propagate through the Internet. Users in the thousands if not millions were told to change their passwords (did you?). Bloomberg reported that the NSA had known about the vulnerability since the beginning but had not reported it. (The NSA denied this.)

Ironically, known exploits of the vulnerability only began after it was announced, raising the question of how and when such announcements are made. The most prized possession is a “zero-day” vulnerability, that is, a still-viable vulnerability that defenders have known about, and worked on, for zero days. Do you announce it, or do you hoard the vulnerability for yourself or for resale?

The vulnerability was probably not deliberate according to those in the know. But in some larger sense that is irrelevant. Code is made by humans and so will contain mistakes. It is rational to suppose that there are other such vulnerabilities out there. What does this mean?
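To make the kind of mistake concrete, here is a minimal Python simulation of a Heartbleed-style bounds-check failure. This is emphatically not the real OpenSSL C code, and the names and data are purely illustrative: the point is only that the vulnerable handler trusts the length the client claims, rather than the length the payload actually has.

```python
# Illustrative simulation of a Heartbleed-style bug: the server echoes
# back as many bytes as the CLIENT claims its payload contains, without
# checking the claim against the payload's actual size.

SECRET = b"master-key-12345"  # hypothetical adjacent data in server "memory"

def handle_heartbeat(memory: bytes, payload_offset: int,
                     payload: bytes, claimed_len: int) -> bytes:
    # Vulnerable: trusts claimed_len instead of len(payload), so it can
    # read past the payload into whatever sits next to it in memory.
    return memory[payload_offset:payload_offset + claimed_len]

def handle_heartbeat_fixed(memory: bytes, payload_offset: int,
                           payload: bytes, claimed_len: int) -> bytes:
    # Fixed: silently discard requests whose claimed length exceeds the
    # actual payload (in spirit, what the OpenSSL patch did).
    if claimed_len > len(payload):
        return b""
    return memory[payload_offset:payload_offset + claimed_len]

# Server "memory": the 4-byte payload sits right next to the secret.
payload = b"ping"
memory = payload + SECRET

leaked = handle_heartbeat(memory, 0, payload, claimed_len=20)
print(leaked)  # bytes of SECRET leak out beyond the 4-byte payload
print(handle_heartbeat_fixed(memory, 0, payload, claimed_len=20))
```

A one-line missing comparison, in other words, is all it takes; which is why it is rational to suppose there are more of these out there.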

Geographers have been slow to research what I’m calling cryptologic geographies (crypto geographies). What I mean by this are the geographies of hacking, vulnerabilities, exploits, code fail, resilience, and cyberwarfare. An example is Stuxnet, the US/Israeli “worm” that was released to damage Iran’s nuclear capabilities. While Stuxnet targeted a particular industrial control system configuration, it caused “collateral damage” in other countries as well, especially India and Indonesia (pdf).

What would a geography of code vulnerabilities look like? This is not just a case of where the computers are. Some computers are more vulnerable–they are older, they don’t have good update policies, or they use code that has just been released. In the Heartbleed case, smaller tech-savvy companies were ironically more at risk than larger, slower companies that wait for stable releases. (Microsoft was also not affected, although for different reasons.)

Why haven’t we treated cyberwarfare with the same critical gaze and analytic resources that we’ve paid to regular warfare and military operations? What are the effects–both online and materially–of cyberwarfare, dark code, and encrypted (secret) knowledges? Who is less resilient or more susceptible to exploits? Who is doing the exploits? After the Heartbleed vulnerability was announced, researchers set up a “honeypot” to attract attacks in order to study them. Are there spaces and places of such attacks and counter-attacks? And of course, as I and my colleagues Sue Roberts and Ate Poorthuis wrote about recently in the Annals, there are a whole series of government-corporate relations, contracts, and outsourcing to take into account.

In the era of the Internet of Things (IoT) when we will have billions of connected devices around, on and perhaps inside us, what are the everyday code/spaces going to look like? We’ve already seen Chinese-manufactured baby monitors sold in the West get hacked, which allowed a stranger to access the camera and loudspeaker in a 2-year-old’s room. Live-streaming of data is rapidly becoming a thing as well. These data get turned into maps since much of it is geo-tagged (think Twitter, Foursquare, Google, Facebook). How secure are these data?

To reiterate then, what I’m identifying here are specifically cryptologic geographies, not just code/cyberspace or what have you. Kitchin and Dodge’s Code/Space is the most sustained look to date at code in everyday space. It set us on the right track. The Oxford Internet Institute (OII) and my “Floating Sheep” colleagues are certainly on the ball when it comes to geographies of the Internet. (Check out their latest Twitter-mining of George Carlin’s seven dirty words, and who is and isn’t saying them around the US!) Zook’s Geography of the Internet Industry came out in 2005.

But none of these cover what former VP Dick Cheney once called the “dark side” (used as the title of an excellent book by Jane Mayer on the war on terror). I’d like to see a more explicit political economy that could understand the biopolitics of this data and code (what Louise Amoore calls “data derivatives”), of secret and encrypted spaces, and of the exploits against them. A new condition of cyberwarfare.

There may be nothing to this, and even if there were, there are surely a variety of ways to understand it: technological, Marxist, political-economic, biopolitical. I invite any reflections or comments on this!

Mapping happiness again


Sorry to repeat this, but I believe the map is even cooler now. Hover over each point to see the crowd-sourced information about happiness and greenery measures (collected by students in GEO 109, spring 2014 at the University of Kentucky). Click to see the StreetView image at that location (this is grabbed automagically from Google StreetView itself!).

Finally, if you click the link in the sub-title, it will take you to a nice Streetview portfolio of Lexington, Kentucky.

How mapping was reinvented in WWII

My colleague Susan Schulten has a great article in the New Republic on how mapping was revolutionized during World War II.

Drawing primarily from the classic work of Richard Edes Harrison, whose globe-spanning maps were published in Fortune magazine, she tells the story of how Harrison came to work for Fortune and produce his legacy.

As she says, his “not quite maps” were highly striking and innovative:

The most powerful of these images anticipated the perspective of Google Earth. Here Harrison reintroduced a spherical dimension to the map, focusing on the theaters of war in a way that, for instance, rendered the central place of the Mediterranean and the topographical obstacles facing any invasion of southern Europe.

In fact, Harrison was more deeply involved in the war effort than is generally known. During the war, Arthur Robinson was head of the Map Division of the Office of Strategic Services (OSS, the forerunner of the CIA). Due to a lack of cartographically trained personnel, at one point he had the idea of sending a small team to New York City to pick up techniques on airbrush shading from Harrison, who was living on West 48th Street. The OSS team also visited Robert M. Chapin, the Chief Cartographer at Time magazine.

Furthermore, in 1945 the Department of State contracted with the American Geographical Society (AGS) to produce a series of “hemisphere maps,” and the AGS in turn subcontracted Harrison (at the rate of $3 an hour!). Erwin Raisz, the accomplished cartographer, was also involved in this work.**

Susan has written about Harrison previously:
Schulten, S. 1998. Richard Edes Harrison and the Challenge to American Cartography. Imago Mundi 50:174-188.

My review in Antipode of her latest book is here.

**These two paragraphs are based on my ongoing and incomplete research into the map work of the OSS.

Elden: Foucault Studies 17 now out – Foucault and Deleuze

Stuart provides notice of the latest issue of Foucault Studies:

Foucault Studies 17 now out – Foucault and Deleuze.

via Foucault Studies 17 now out – Foucault and Deleuze.

disClosure 23 now available

Last spring (2013) I had the honor and privilege to co-teach the annual Social Theory seminar, with three colleagues, Jenny Rice, Jeff Peters, and Susan Larson. The seminar is a long-standing feature of the Committee on Social Theory, which was founded at UKY in the spring of 1989 by JP Jones, Ted Schatzki, and Wolfgang Natter (see Postmodern Contentions, 1993).

The topic we proposed, “Mapping,” attracted a superb and diverse group of graduate students. Another tradition of the CST is the production of a journal, disClosure, which is totally written and produced by graduate students in CST and the seminar. There is a nice story on this year’s editors, Rachel Hoy and Christina Williams, here.

Issue 23 of disClosure on “Mapping” (2014) is now out. It contains a great selection of content, including poetry, interviews and articles from authors near (Transylvania University, Lexington) and far (Istanbul Technical University, University of Leeds). This year the journal goes solely online.

Three of our visitors to Lexington for the seminar, Neil Brenner, Swati Chattopadhyay and Derek Gregory were interviewed by graduate students, and those interviews are now available here. I would like to acknowledge and thank the students, as well as our visitors, for these interesting interviews. They go unfortunately unnamed in the title of the piece but they are: Jessa Loomis, Lindsay Shade (Neil Brenner), Sarah Soliman and Erin Newell (Swati Chattopadhyay), and Austin Crane, Sophie Strosberg and Marita Murphy (Derek Gregory).


Why We Can’t Trust the CIA to Redact the Senate Report on CIA Torture

Originally posted on UNREDACTED:

An honest broker? Note the mysterious addition of FOUO marking.

Steve Aftergood at Secrecy News provides the latest example of why it is a bad idea to let the CIA redact the Senate Intelligence Committee’s report on CIA torture. In a move of head-shaking censorship that Aftergood charitably describes as “a surprise,” the Office of the Director of National Intelligence (which nominally oversees the CIA) redacted an Intelligence Community Directive on Human Intelligence to withhold from the public the fact that the CIA “collects, analyzes, produces, and disseminates foreign intelligence and counterintelligence, including information obtained through clandestine means.”

This censorship is all the more “surprising” considering that two other iterations of the same document are freely available on the internet, fully uncensored.

The ODNI produced this FOIA denial in response to a FOIA request by Robert Sesek. Despite being clearly marked “unclassified,” it was redacted under a b(3) statutory exemption, probably 50 U.S.C. § 403(g)…


New #cartodb possibilities (and #fulcrumapp!)

A couple of new mapping capabilities have been added by the geoweb mapping company CartoDB.

Both offer exciting possibilities that I’ve not seen before. Most amazing to me is the capability now to use near real-time NASA imagery as your base map, and to choose any day in the last two years. The imagery is 5-9 hours old, and you can choose daytime or night-time views.

Because of the ability to choose any particular date, you can go back to a season (e.g., to observe glacier extent) or an event (Hurricane Sandy, late October 2012).

The imagery comes from GIBS, or Global Imagery Browse Services at NASA, and includes a selection of different products. Here’s yesterday’s “dust scores” for example (click for interactive map):



You can combine hundreds of other layers (e.g., sea ice + sea surface temp + chlorophyll). (At least at NASA you can; I haven’t checked this in CartoDB.)
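The pick-any-date idea above boils down to building a tile URL with the date baked in. Here is a minimal Python sketch; the WMTS URL pattern is my reading of the GIBS documentation, and the layer name and tile matrix set ("250m") are illustrative, so check the GIBS docs for exact values:

```python
# Sketch: construct a GIBS (Global Imagery Browse Services) tile URL
# for a given layer and date. Pattern assumed from GIBS docs:
#   {base}/{layer}/default/{date}/{matrix_set}/{z}/{y}/{x}.{ext}
GIBS_BASE = "https://gibs.earthdata.nasa.gov/wmts/epsg4326/best"

def gibs_tile_url(layer: str, date: str, matrix_set: str,
                  z: int, y: int, x: int, ext: str = "jpg") -> str:
    # date is an ISO day like "2012-10-29" -- this is what lets you
    # go back to a season or an event such as Hurricane Sandy.
    return f"{GIBS_BASE}/{layer}/default/{date}/{matrix_set}/{z}/{y}/{x}.{ext}"

# E.g., true-color imagery for the day Sandy made landfall:
print(gibs_tile_url("MODIS_Terra_CorrectedReflectance_TrueColor",
                    "2012-10-29", "250m", 2, 1, 1))
```

A base map client (CartoDB, Leaflet, etc.) would simply substitute {z}/{y}/{x} as the user pans and zooms.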

Second, there was an interesting tidbit showing how to dynamically pull Google Streetview into your point data. The idea is that if you have collected data at various points, and uploaded these as a table to CartoDB, you can create a new column in the table which will pull from the Google Streetview API and provide you with a picture of that location.

Like this (click for interactive map):


Personally I think that’s cool!

There is a slight bug for me–once I get the SV imagery, I can’t add other fields, but their blog shows this is possible. I’m missing something somewhere.
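The per-point trick described above can be sketched in a few lines of Python. This is a hypothetical illustration, not CartoDB's own implementation: it builds a Google Street View Static API image URL from a (lat, lng) pair, the sort of value you could store in that new table column. "YOUR_API_KEY" is a placeholder.

```python
from urllib.parse import urlencode

def streetview_url(lat: float, lng: float, size: str = "600x300",
                   key: str = "YOUR_API_KEY") -> str:
    # Build a Street View Static API image URL for one point.
    params = urlencode({"size": size, "location": f"{lat},{lng}", "key": key})
    return "https://maps.googleapis.com/maps/api/streetview?" + params

# Illustrative Lexington, KY points, like the GEO 109 happiness data:
points = [(38.0406, -84.5037), (38.0317, -84.4951)]
for lat, lng in points:
    print(streetview_url(lat, lng))
```

Each URL returns a static image of that location, which is what makes the hover/click map possible.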

Finally, just as I was writing this, Fulcrumapp announced they’ve got sharing of their maps up and running! Not only does this make sharing possible (i.e., feeding the data into a visualization capability such as Mapbox or CartoDB), but it’s live!



NGA has plan for total “Map of the World”

John Goolgasian, NGA

According to the NGA, one of the most popular sessions at the recent GEOINT 2013* conference (held over from 2013, hence the asterisk) was one which offered a total “Map of the World”:

But what is it?

Map of the World is the foundation for intelligence integration, said NGA Director Letitia A. Long in her keynote address at the four-day event.

The clue lies in this statement:

Twelve different data views will make up Map of the World and nine of them are online now, including maritime and aeronautical.

This, along with Goolgasian’s involvement, indicates that it is probably related to, or draws from, the work of the World-Wide Human Geography Database Working Group (WWHGD). I’ve written about Goolgasian on this blog before.

The WWHGD is a government-private contractor group (Booz Allen Hamilton are the provided contact points and presumably run it) whose stated aim is as follows:

The WWHGD Working Group is designed to build voluntary partnerships around human geography data and mapping focused on the general principle of making appropriate information available at the appropriate scales to promote human security. This involves a voluntary “whole-of-governments” national and international approach to create a human geography data framework that can leverage ongoing efforts around the world to identify, capture, build, share, and disseminate the best available structured and unstructured foundation data.

Here are the data they’re looking at in these layers:

The inclusion of things like land ownership maps directly onto the arguments of Geoffrey Demarest, who was a key player in the Bowman Expeditions. You can judge for yourselves about the set of information here. Personally I think it’s way too rigid and a-historical (what about a history of foreign intervention in an area, or standards of living and well-being?).

But even beyond that it reflects a belief in the efficacy of totalizing indexes. We heard something about this at the AAG, and Brad Evans and Julian Reid have a discussion about it in their new book Resilient Life.

The article continues:

“Through a single point on the Earth, the Map of the World will present an integrated view of collection assets from across the community, mapping information for military operations, GEOINT observations, and NGA analytic products, data and models,” said Goolgasian.

Worth keeping an eye on.