In 2011, a 29-year-old grad student at the University of Münster in Germany made some changes to the code of OpenSSL, the cryptographic library that provides secure sockets layer (SSL/TLS) encryption for half a million websites around the world, including banks, financial institutions, and Silicon Valley companies such as Yahoo, Tumblr, and Pinterest.
Unfortunately, the code contained a security flaw. An hour before midnight on New Year’s Eve 2011, a British computer consultant approved the new code and submitted it into the release stream, failing to notice the bug. The vulnerable code went into wide release in March 2012 as OpenSSL version 1.0.1.
So began the “Heartbleed” vulnerability. For two years, until it was noticed in April 2014, any attacker could exploit it to obtain the “crown jewels” of the server itself, that is, the master keys or passwords that would unlock all the accounts and enable access to everything coming or going from the server (even if encrypted).
According to one of the discoverers of the vulnerability, Codenomicon:
OpenSSL is the most popular open source cryptographic library and TLS (transport layer security) implementation used to encrypt traffic on the Internet. Your popular social site, your company’s site, commerce site, hobby site, site you install software from or even sites run by your government might be using vulnerable OpenSSL.
About two-thirds of the web servers around the world were running the software when the vulnerability was discovered and revealed in April 2014. Canada’s tax agency, the Canada Revenue Agency, was affected; the breach led to about 900 taxpayers having their Social Insurance Numbers stolen. Most sites had to reissue their security certificates and wait for these to propagate through the Internet. Thousands if not millions of users were told to change their passwords (did you?). Bloomberg reported that the NSA had known about the vulnerability since the beginning but had not reported it. (The NSA denied this.)
Ironically, known exploits of the vulnerability only began after it was announced, raising the question of how and when such announcements are made. The most prized possession is a “zero-day” vulnerability, that is, one that the software’s defenders have known about for zero days, meaning no patch exists and the exploit is still viable. Do you announce it, or do you hoard the vulnerability for yourself or for resale?
The vulnerability was probably not deliberate according to those in the know. But in some larger sense that is irrelevant. Code is made by humans and so will contain mistakes. It is rational to suppose that there are other such vulnerabilities out there. What does this mean?
Geographers have been slow to research what I’m calling cryptologic geographies (crypto geographies). What I mean by this are the geographies of hacking, vulnerabilities, exploits, code fail, resilience, and cyberwarfare. An example is Stuxnet, the US/Israeli “worm” that was released to damage Iran’s nuclear capabilities. While Stuxnet targeted particular Siemens industrial control systems, it caused “collateral damage” in other countries as well, especially India and Indonesia (pdf).
What would a geography of code vulnerabilities look like? This is not just a question of where the computers are. Some computers are more vulnerable: they are older, they lack good update policies, or they run code that has only just been released. In the Heartbleed case, smaller tech-savvy companies that adopt new releases quickly were ironically more at risk than larger, slower companies that wait for stable releases. (Microsoft was also unaffected, though for different reasons.)
Why haven’t we treated cyberwarfare with the same critical gaze and analytic resources that we’ve brought to regular warfare and military operations? What are the effects, both online and material, of cyberwarfare, dark code, and encrypted (secret) knowledges? Who is less resilient or more susceptible to exploits? Who is doing the exploits? After the Heartbleed vulnerability was announced, researchers set up a “honeypot” to attract attacks in order to study them. Are there spaces and places of such attacks and counter-attacks? And of course, as I and my colleagues Sue Roberts and Ate Poorthuis wrote about recently in the Annals, there are a whole series of government-corporate relations, contracts, and outsourcing arrangements to take into account.
In the era of the Internet of Things (IoT), when we will have billions of connected devices around, on, and perhaps inside us, what are the everyday code/spaces going to look like? We’ve already seen Chinese-manufactured baby monitors sold in the West hacked, allowing a stranger to access the camera and loudspeaker in a 2-year-old’s room. Live-streaming of data is rapidly becoming a thing as well. These data get turned into maps, since much of the content is geo-tagged (think Twitter, Foursquare, Google, Facebook). How secure are these data?
To reiterate then, what I’m identifying here are specifically cryptologic geographies, not just code, cyberspace, or what have you. Kitchin and Dodge’s Code/Space is the most sustained look to date at code in everyday space. It set us on the right track. The Oxford Internet Institute (OII) and my “Floating Sheep” colleagues are certainly on the ball when it comes to geographies of the Internet. (Check out their latest Twitter-mining of George Carlin’s seven dirty words, and who is and isn’t saying them around the US!) Zook’s Geography of the Internet Industry came out in 2005.
But none of these cover what former VP Dick Cheney once called the “dark side” (used as a title for an excellent book by Jane Mayer on the war on terror). I’d like to see a more explicit political economy here, one that could grasp the biopolitics of data and code (what Louise Amoore calls “data derivatives”), of secret and encrypted spaces, and of the exploits against them. A new condition of cyberwarfare.
There may be nothing to this, and even if there were, there are surely a variety of ways to understand it. Technological. Marxist. Political-economic. Biopolitical. I invite any reflections or comments on this!