Author Archives: Jeremy

Privacy–still alive down here!

Stephen Schulhofer on why privacy may still be alive:

  1. Privacy gets renewed prominence as a constitutional value
  2. The “nothing to hide” argument can be buried forever
  3. Legislative compromises are probably unacceptable
  4. Executive Branch safeguards fare even worse
  5. Warrantless section 702 surveillance of non-US persons abroad is now especially suspect
  6. The third party doctrine is badly shaken but still intact

Note especially the sixth point. You may remember from the Snowden files that the discussion about “metadata” turned in part on its being something the public knowingly concedes to third parties (ie telecoms), and that no individual warrants were required to obtain it.

This will be especially relevant to geolocational privacy and tracking (ie our locations, journeys and movements are potentially willingly given up to third parties, eg Google, which the government can then collect). Schulhofer notes the counterargument that it is “intensely personal” and thus subject to the Fourth Amendment (you may also remember the argument of “precise geolocational information”). But he warns:

At bottom, Riley shows the Court’s full appreciation of the threat posed by unrestricted government access to digital files, and this may ultimately prove to be its most important legacy. But the logic of the third-party doctrine still will have to be tackled head-on in situations where police get information directly from an intermediary like an internet service provider or a cloud computing service.

We need more of this kind of analysis from geographers–why aren’t we taking part in these legal debates? Surely we have something to say about geolocational privacy and surveillance.

Schulhofer: Pleasant Surprises – and One Disappointment – in the Supreme Court’s Cell Phone Decision

Very thorough and informative reading of the Riley decision. If you are not familiar with this ruling, it is the Supreme Court decision holding that warrantless searches of your phone upon arrest are not permissible. This piece gives several forceful reasons why this is a good ruling for privacy advocates (not a sentence one types too often!). Note the geolocational privacy implications it gives rise to.

Pleasant Surprises – and One Disappointment – in the Supreme Court’s Cell Phone Decision.

Rebecca Sandover: Discovering the Geo in Social Media data

Scraping and mapping Tweets using Scraperwiki:

Discovering the Geo in Social Media data.

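Scraperwiki handles the scraping in Sandover's workflow; as an illustration of where the "geo" actually lives in social media data, here is a minimal hedged sketch (assuming tweets already fetched as dicts in the classic Twitter API v1.1 shape, where a geotagged tweet carries a GeoJSON "coordinates" field) that filters for geotagged tweets and emits a GeoJSON FeatureCollection ready for mapping. The sample tweets are invented:

```python
import json

def extract_geo(tweets):
    """Collect geotagged tweets into a GeoJSON FeatureCollection.

    Only tweets whose v1.1-style "coordinates" field is populated are
    kept; most tweets carry no geotag at all.
    """
    features = []
    for tw in tweets:
        coords = (tw.get("coordinates") or {}).get("coordinates")
        if coords:
            features.append({
                "type": "Feature",
                "geometry": {"type": "Point", "coordinates": coords},
                "properties": {"text": tw.get("text", "")},
            })
    return {"type": "FeatureCollection", "features": features}

# Two toy tweets: one geotagged (lon/lat order, per GeoJSON), one not.
sample = [
    {"text": "hello Exeter",
     "coordinates": {"type": "Point", "coordinates": [-3.53, 50.72]}},
    {"text": "no geotag", "coordinates": None},
]

print(json.dumps(extract_geo(sample), indent=2))
```

The resulting FeatureCollection can be dropped straight into most web-mapping tools.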

Radack on the drone assassination memo

ScreenClip

Jesselyn Radack, the lawyer who successfully defended NSA whistleblower Thomas Drake from prosecution, has an initial analysis of the memo the government released today that justified the assassination by drone of US citizen Anwar al-Awlaki in Yemen. I’ve included a screenshot of the Guardian’s current front page, which shows how important this document is.

Mapping all NYC’s taxi rides

ScreenClip

Fascinating patterns are revealed by this unusual data set: all of the taxi rides captured by GPS in New York City–some 173 million trips. Mapbox has some maps and analysis.

Note also this warning: the data were “de-anonymized” fairly easily.
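The mechanism behind that easy de-anonymization is worth spelling out: the released data reportedly replaced each taxi's medallion number with an unsalted MD5 hash, but medallion numbers follow a few known short formats, so the entire space of possible hashes can simply be enumerated and reversed. A toy sketch, using a deliberately tiny made-up medallion format (one letter plus two digits) for illustration:

```python
import hashlib
import string

def md5_hex(s):
    """Unsalted MD5, as reportedly used to 'anonymize' the taxi IDs."""
    return hashlib.md5(s.encode()).hexdigest()

def crack(target_hash):
    """Brute-force a hash by walking the whole (toy) medallion space:
    here, one letter followed by two digits -- only 2,600 candidates."""
    for letter in string.ascii_uppercase:
        for d in range(100):
            candidate = f"{letter}{d:02d}"
            if md5_hex(candidate) == target_hash:
                return candidate
    return None

leaked = md5_hex("J47")   # what a released 'anonymized' record might contain
print(crack(leaked))      # recovers the original medallion: J47
```

The real medallion formats are larger, but still only a few million possibilities: trivial to precompute on a laptop, which is exactly why an unsalted hash of a small identifier space provides no anonymity at all.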

Is geolocational tracking legal?

These are fascinating questions that are still legally unresolved: is it legal to geolocationally track a person, or is it an unconstitutional search? If it is a search, do you need a warrant? More precisely, is there a constitutional protection for geolocational privacy?

As I am not a lawyer I will pass you on to this lengthy blog post at SCOTUSblog. It discusses what is at the moment the key ruling by the Supreme Court on geolocational tracking, US v. Jones (2012).

In Jones the Supreme Court ruled in favor of the defendant, who had had a tracking device placed on his car by the police. The Court held that this was a search under the Fourth Amendment. Many privacy advocates at the time saw this as a victory for privacy rights. For a number of reasons, however, the case did not substantially turn on geolocational privacy rights.

Subsequent to this case (it says here in Wikipedia) it has been found that you do indeed need a warrant for geolocational tracking:

“In October 2013, the Court of Appeals for the Third Circuit addressed the unanswered question of “whether warrantless use of GPS devices would be ‘reasonable — and thus lawful — under the Fourth Amendment [where] officers ha[ve] reasonable suspicion, and indeed probable cause’ to execute such searches.” United States v. Katzin was the first relevant appeals court ruling in the wake of Jones to address this topic. The appeals court in Katzin held that a warrant was indeed required to deploy GPS tracking devices, and further, that none of the narrow exceptions to the Fourth Amendment’s warrant requirement (e.g. exigent circumstances, the “automobile exception”, etc.) were applicable.”

Here are questions I think are still unresolved:

1. If you have to get a warrant for GPS tracking, do you have to get one for other forms of geolocational tracking, such as cell site data? Or can these be obtained through other legal means? What about other non-physical (electronic) tracking?

2. Does the warrant have to be based on probable cause regarding a specific individual, or can you get bulk warrants (as revealed in the case of the FISA Court) on other grounds? Or on “reasonable suspicion”?

3. If you track someone briefly rather than for an extended period of time (as was the case in Jones), is this constitutional (Justice Sotomayor argued it was not)? And if you were tracked briefly by a number of different methods (cell phone tracking, apps leaking locational info, emails, etc), at what point would these add up to extended tracking? I’m thinking Big Data here, obviously.

4. Could locational information be considered voluntarily shared metadata? And would that make it less protected? So in the same way you are deemed to have shared your cell phone call (meta)data with a third party (the telco), have you shared your locational data? (I’ve been wondering about this since I saw the FISA Court order last June.)

5. Does technology itself reduce expectations of privacy (which may allow or justify increased government surveillance)? Alito: “New technology may provide increased convenience or security at the expense of privacy, and many people may find the tradeoff worthwhile. And even if the public does not welcome the diminution of privacy that new technology entails, they may eventually reconcile themselves to this development as inevitable.”

6. Conversely, as mentioned in Q.3, Sotomayor argues that awareness of being watched has a chilling effect: “Awareness that the Government may be watching chills associational and expressive freedoms. And the Government’s unrestrained power to assemble data that reveal private aspects of identity is susceptible to abuse.” In everyday practice, who is correct–Alito or Sotomayor? Another way to ask this is to ask whether Orwell or Greenwald is correct. In 1984 Orwell suggests that (at least to some extent) you get used to Big Brother watching you all the time and adapt. Greenwald, in his latest book, argued that surveillance has a chilling effect. (Kate Crawford has gone further and argued that it produces “anxiety.”) I guess that however you feel about it, the question stands: what are the effects of knowing/suspecting you’re under constant surveillance?

7. What is the legal situation in other countries?

Edited to add: the 11th Circuit has confirmed that you need a warrant for cell tower tracking (US v. Davis).

New spatial media

Leszczynski and Elwood have just published a great new paper on “Feminist Geographies of New Spatial Media” (Canadian Geographer) in which they credit me with coining the term “new spatial media.”

L&E have generously used this term a couple of times in their work. It does raise the tricky question of why, these days, I don’t tend to use the term myself. Part of the reason is that it was coined in response to a very specific moment.

In December 2007 Georgia State University (GSU) issued a cfp for university-wide “Area of Focus” initiatives that would take a unit or cluster of interests “to the next level.” With the support of my department (sort of…) I and a bunch of colleagues put together a proposal for a “New Spatial Media Center” which we submitted in March 2008. We asked for about half a million dollars over 3 years to fund PhD scholarships and symposia, etc.

Here’s the way I was thinking at the time (click for larger version):

Flow Chart

Readers of my Mapping book will recognize this as an alternative conceptualization of a more generalized figure that appears in the book as Figure 1.1 (which itself was somewhat hastily created for a talk I was giving at the UNC geography department!). (Edited to add: one of the things that prompted this post was that this conception of new spatial media makes no space for exactly the concerns that Leszczynski and Elwood are pointing to in their article.)

In the proposal I defined new spatial media as “web-based services such as locative media, volunteered geographic information and open-source spatial tools” (p. 1). The idea was that this would make the Center distinctive and provide a ramp to national recognition.

The Center was not funded, due in part to departmental politics: we put in two proposals, which diffused the impact of both. I privately thought at the time that our proposal was better, but a weak Chair who couldn’t or wouldn’t eliminate one of the proposals meant they canceled each other out. Incidentally, when I left GSU in 2011 the Associate Dean offered to create such a center as an entreaty to stay. Decent of him, but UKY was calling!

In Mapping (mostly written between 2006 and April 2009, ie post Google Earth) I did use NSM as a chapter title. But I did so hesitantly, noting a competing flurry of other terms which I wasn’t prepared to discriminate between (p. 26). So we have locative media, VGI, neogeography, the geospatial web, etc. (However, to myself, I hoped that “geoweb” would win out, which I think it has done–for now!) As you may have noticed, my definition of NSM above works only by example, using other terms (locative media etc.) that probably only fuzz the issue. Maybe another reason we didn’t get the money!

Part of the origins of the phrase is obviously “new media.” My appropriation of the term was meant to be more specific and to highlight place and space. I had followed the rise of “hypertext” throughout the 1990s after the release to the public of web browsers in 1992-3, and the digital humanities, especially the Center for History and New Media at George Mason University (where I worked in the 1990s). It seemed to me that the spatial could unite work across a range of social and natural sciences, and we made a lot of effort in the proposal to get public health researchers, biologists, computer scientists and historians on board! Hey, we even mentioned big data and ambient technologies! In any case it reflected my strong interests in inter-disciplinary research.

So the term has its specific context and intellectual forebears. In that sense it does not have a single “origin” and is tied in with other existing terms. So should it continue? This last point, that it is tied in to other discourses, is in fact its strongest value. “Geoweb” is a weird word and not likely to be known to the general reader. Most people have some idea of what new media are, and could reliably guess at new spatial media. Plus it emphasizes an active medium of processes at work, rather than a flat or static “web.” As such it can easily encompass the things L&E discuss, such as the uneven and gendered landscapes of “new geosocial technologies” (p. 2).

Finally, there’s that word “new”: obviously designed to be cutting edge and up to the moment. That’s probably why we’ve used it for the New Mapping Collaboratory! (A term, alas, I did not coin; as I recall Sue Roberts suggested “collaboratory” to me, and either Matt Z or Matt W or myself suggested the other part, but they can correct me on that. fwiw, we usually go by “New Maps” now.)

Thanks to Agnieszka and Sarah for prompting these reflections!

Cryptologic geographies

In 2011, a 29-year-old grad student at the University of Münster in Germany made some coding alterations to OpenSSL, the open-source SSL/TLS library used on half a million websites around the world, including those of banks, financial institutions and even Silicon Valley companies such as Yahoo, Tumblr, and Pinterest.

Unfortunately the code contained a security flaw. At one hour before midnight on New Year’s Eve 2011, a British computer consultant approved the new code and submitted it into the release stream, failing to notice the bug. The vulnerable code went into wide release in March 2012 as OpenSSL Version 1.0.1.

So began the “Heartbleed” vulnerability. For two years, until it was noticed in April 2014, any attacker could exploit it to obtain the “crown jewels” of the server itself: the master key or password that would unlock all the accounts and enable access to everything coming or going from the server (even if encrypted).
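The bug itself was simple: the TLS heartbeat handler echoed back as many bytes as the request claimed its payload contained, without checking that claim against the payload's real length, so a short request carrying an inflated length field read adjacent server memory straight back to the attacker. A toy Python simulation of that logic (not OpenSSL's actual C code; the "memory" contents and lengths here are invented for illustration):

```python
# Simulated server heap: the heartbeat payload sits right next to secrets.
MEMORY = b"PAYLOAD:hat" + b"|SECRET_KEY=hunter2|SESSION=abc123"
PAYLOAD_START, PAYLOAD_LEN = 8, 3   # the real payload is just b"hat"

def heartbeat_buggy(claimed_len):
    """Echo back 'claimed_len' bytes -- the length comes straight from
    the attacker's request and is never checked against the real
    payload size (the bug)."""
    return MEMORY[PAYLOAD_START:PAYLOAD_START + claimed_len]

def heartbeat_fixed(claimed_len):
    """The patched logic: discard heartbeats whose claimed length
    exceeds the actual payload."""
    if claimed_len > PAYLOAD_LEN:
        return b""
    return MEMORY[PAYLOAD_START:PAYLOAD_START + claimed_len]

print(heartbeat_buggy(40))   # leaks the 'secrets' beyond the payload
print(heartbeat_fixed(40))   # the oversized request now returns nothing
```

In the real exploit the attacker could request up to 64KB per heartbeat and repeat indefinitely, sweeping up whatever happened to sit in nearby memory, private keys included.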

According to one of the discoverers of the vulnerability, Codenomicon:

OpenSSL is the most popular open source cryptographic library and TLS (transport layer security) implementation used to encrypt traffic on the Internet. Your popular social site, your company’s site, commerce site, hobby site, site you install software from or even sites run by your government might be using vulnerable OpenSSL.

About two-thirds of the web servers around the world were running the software when it was discovered and revealed in April 2014. The Canada Revenue Agency’s site was affected, which led to about 900 taxpayers having their Social Insurance Numbers stolen. Most sites had to reissue their security certificates and wait for these to propagate through the Internet. Users in the thousands if not millions were told to change their passwords (did you?). Bloomberg reported that the NSA had known about the vulnerability since the beginning but had not reported it. (The NSA denied this.)

Ironically, known exploits of the vulnerability only began after it was announced, raising the question of how and when such announcements are made. The most prized possession is a “zero-day” vulnerability, that is, a vulnerability that the vendor and defenders have known about for zero days and which is therefore still viable. Do you announce, or do you hoard the vulnerability for yourself or for resale?

The vulnerability was probably not deliberate according to those in the know. But in some larger sense that is irrelevant. Code is made by humans and so will contain mistakes. It is rational to suppose that there are other such vulnerabilities out there. What does this mean?

Geographers have been slow to research what I’m calling cryptologic geographies (crypto geographies). What I mean by this are the geographies of hacking, vulnerabilities, exploits, code fail, resilience, and cyberwarfare. An example is Stuxnet, the US/Israeli “worm” that was released to damage Iran’s nuclear capabilities. While Stuxnet targeted a particular kind of industrial control system, it caused “collateral damage” in other countries as well, especially India and Indonesia (pdf).

What would a geography of code vulnerabilities look like? This is not just a case of where the computers are. Some computers are more vulnerable: they are older, they don’t have good update policies, or they run code that has just been released. In the Heartbleed case, smaller tech-savvy companies were ironically more at risk than larger, slower companies that wait for stable releases. (Microsoft was also not affected, although for different reasons.)

Why haven’t we treated cyberwarfare with the same critical gaze and analytic resources that we’ve devoted to regular warfare and military operations? What are the effects–both online and materially–of cyberwarfare, dark code, and encrypted (secret) knowledges? Who is less resilient or more susceptible to exploits? Who is doing the exploits? After the Heartbleed vulnerability was announced, researchers set up a “honeypot” to attract attacks in order to study them. Are there spaces and places of such attacks and counter-attacks? And of course, as I and my colleagues Sue Roberts and Ate Poorthuis wrote about recently in the Annals, there are a whole series of government-corporate relations, contracts, and outsourcing to take into account.

In the era of the Internet of Things (IoT) when we will have billions of connected devices around, on and perhaps inside us, what are the everyday code/spaces going to look like? We’ve already seen Chinese-manufactured baby monitors sold in the West get hacked, which allowed a stranger to access the camera and loudspeaker in a 2-year-old’s room. Live-streaming of data is rapidly becoming a thing as well. These data get turned into maps since much of it is geo-tagged (think Twitter, Foursquare, Google, Facebook). How secure are these data?

To reiterate then, what I’m identifying here are specifically cryptologic geographies, not just code, cyberspace, or what have you. Kitchin and Dodge’s Code/Space is the most sustained look to date at code in everyday space. It set us on the right track. The Oxford Internet Institute (OII) and my “Floating Sheep” colleagues are certainly on the ball when it comes to geographies of the Internet. (Check out their latest Twitter-mining of George Carlin’s seven dirty words, and who is and isn’t saying them around the US!) Zook’s Geography of the Internet Industry came out in 2005.

But none of these cover what former VP Dick Cheney once called the “dark side” (used as the title of an excellent book by Jane Mayer on the war on terror). I’d like to see a more explicit political economy that could understand the biopolitics of this data and code (what Louise Amoore calls “data derivatives”), of secret and encrypted spaces, and of the exploits against them. A new condition of cyberwarfare.

There may be nothing to this, and even if there were, there are surely a variety of ways to understand it. Technological. Marxist. Political-economic. Biopolitical. I invite any reflections or comments on this!

Mapping happiness again

Capture

Sorry to repeat this, but I believe the map is even cooler now. Hover over each point to see the crowd-sourced information about happiness and greenery measures (collected by students in GEO 109, spring 2014, at the University of Kentucky). Click a point to see the StreetView image at that location (this is grabbed automagically from Google StreetView itself!).

Finally, if you click the link in the sub-title, it will take you to a nice Streetview portfolio of Lexington, Kentucky.
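For anyone curious how the StreetView grab works: given a point's coordinates, an image can be requested by URL from the Google Street View Static API. A minimal sketch (the endpoint and parameter names are from that API; the API key is a placeholder you would replace with your own, and the coordinates here are just an illustrative point in Lexington):

```python
from urllib.parse import urlencode

def streetview_url(lat, lng, api_key, size="600x400"):
    """Build a request URL for the Google Street View Static API."""
    base = "https://maps.googleapis.com/maps/api/streetview"
    params = {"size": size, "location": f"{lat},{lng}", "key": api_key}
    return base + "?" + urlencode(params)

# An illustrative point in Lexington, KY; the key is a placeholder.
print(streetview_url(38.0406, -84.5037, "YOUR_API_KEY"))
```

A web map can build this URL on the fly from each clicked point's coordinates, which is presumably the "automagic" at work.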

http://cdb.io/1nvVhQV

How mapping was reinvented in WWII

My colleague Susan Schulten has a great article in the New Republic on how mapping was revolutionized during World War II.

Drawing primarily from the classic work of Richard Edes Harrison, whose globe-spanning maps were published in Fortune magazine, she tells the story of how Harrison came to work for Fortune and produce his legacy.

As she says, his “not quite maps” were highly striking and innovative:

The most powerful of these images anticipated the perspective of Google Earth. Here Harrison reintroduced a spherical dimension to the map, focusing on the theaters of war in a way that, for instance, rendered the central place of the Mediterranean and the topographical obstacles facing any invasion of southern Europe.

In fact, Harrison was more deeply involved in the war effort than is generally known. During the war, Arthur Robinson was head of the Map Division of the Office of Strategic Services (OSS, the forerunner of the CIA). Due to a lack of cartographically trained personnel, at one point he had the idea of sending a small team to New York City to pick up techniques on airbrush shading from Harrison, who was living on West 48th Street. The OSS team also visited Robert M. Chapin, the Chief Cartographer at Time magazine.

Furthermore, in 1945 the Department of State contracted with the American Geographical Society (AGS) to produce a series of “hemisphere maps,” and the AGS in turn contracted Harrison (at the rate of $3 an hour!). Erwin Raisz, the accomplished cartographer, was also involved in this work.**

Susan has written about Harrison previously:
Schulten, S. 1998. Richard Edes Harrison and the Challenge to American Cartography. Imago Mundi 50:174-188.

My review in Antipode of her latest book is here.

**These two paragraphs are based on my ongoing and incomplete research into the map work of the OSS.