Author Archives: Jeremy

Mary Beard on the last stages of writing a book – SPQR: A History of Ancient Rome


Looking forward to this book. Interesting that she finds the maps the worst of all to do! I wonder why? I always look at the site maps provided. It raises the question of the difficulties of writing: not just the main bit of an article or book, but the whole package.

I remember that for my last book I did all the permission-seeking to use illustrations. I developed an email which explained that this was a low-selling book (<2,000 copies per my contract, though it has gone on to sell more than 3,000, which is very nice!) and in all but one case I think they let me have the image for free. I think that most people are willing to let you have imagery, but it did mean keeping clearly organized files on who I'd asked, whether I was waiting for an answer, and whether permission had been granted. I kept all this for years in case of a copyright challenge. There was only one case where I felt I'd pushed it (the surrealist map of the world), which had an untraceable copyright holder. Denis Wood had previously tried to track it down, and I used the credit line he'd used for it. (The map dates from ~1923, so it's right on the cusp of copyright law anyway.)

I relate to the fear of missing out an important acknowledgement!

Originally posted on Progressive Geographies:

Mary Beard has an interesting piece on the last stages of writing a book – SPQR: A History of Ancient Rome.


Ok I know you will think that you have heard this before, but the book is now within 1000 words of being completely finished. I am just tying up the epilogue, and if I get a good day at it tomorrow I may wrap it up (if I don't, then, yes, it could drag on till Thursday… after which I have no leeway… hope I am not tempting fate here).

When I say completely finished, I don’t actually mean completely, of course. I mean that the creative, staring-into-the-abyss bit has been done. Enough so that if I were to collapse and die tomorrow, it could be published in my name. What still remains are some of the lengthy, nitty gritty, frustrating and anxiety making stages. I mean things like…


Identifying value through its fakes

Fascinating article (via Schneierblog) originally published in the New Republic on how social media currency (value) is being faked. Forget spam emails etc. The way to do it is to exploit differential labor costs, exploit phone verification with mountains of SIM cards, and scrape data from dating websites.

Here's why this is important: if you want to know what's valuable, look at how it's being faked, and what those fakes are selling for. So $29.99 will get you 1,000 Facebook likes, and so on.

Furthermore, examine what the big social media companies are doing about it. On the one hand, they too receive value by having more likes, users, and engagement they can sell. And remember that both Facebook and Twitter are more than 90% dependent on advertising for their revenue. If they can offer increased visibility (increased attention) they can help maintain that revenue. On the other hand, this stream of fake users, activity, clicks and so on undermines their purpose because it devalues real social engagement. Why buy an advertising campaign (or "boost" your post) if all it does is get put in front of ghost users?

You could see this as a contradiction of capitalism: "true" value being undermined by cheaper knockoff values; the necessity for cheap labor (globally cheap, not locally: the workers in the story earn above-average wages locally). Then you could see capitalism as accommodating (or not) such contradictions in the way it has usually done, trying to stamp out the fake product (think: champagne, Gucci handbags) while denying and obfuscating the scale of the problem. (There's some good research on it though, and the SEC still plays a role in terms of disclosures; more social scientists should use SEC disclosures, I believe.)

Or you could see it as an arms race, with each side trying to outdo the other. This arms race might continue in perpetuity. Here's where this becomes important. What these companies need is to distinguish a fake user (even a premium one created by hand, as described in the story) from a real user. This is hard to do simply by looking at the profile, the writer admits. But what's the one huge distinguishing aspect? It's that a fake user doesn't move around and leave a geolocational trail. There's no spatial media aspect to their profile (bots don't move, and their behaviours in general are often highly coordinated for ease of management). Real people move through the environment in varying degrees.

So Facebook needs to be able to get users' geolocational traces. There are various ways to do this; a lot of spatial profiles are available online (think: Strava, Twitter geotags). The government would also want these data for the not entirely dissimilar purpose of distinguishing possible threat individuals, a technique known as "co-traveler analytics." (Hey, maybe FB could get the gov data?) But spy vs. spy: the creators of fake users will start adding geolocational trails to their profiles, won't they?
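To make the "bots don't move" point concrete, here's a minimal sketch of the kind of filter I have in mind. Everything in it is my own illustration: the data shape (a list of lat/lon points per account), the 1 km threshold, and the function names are assumptions for the sake of argument, not anything Facebook actually does.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(p, q):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*p, *q))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def looks_like_bot(trace, min_span_km=1.0):
    """Flag an account whose geolocated activity never strays more than
    min_span_km from its first recorded point, i.e. it never 'moves'."""
    if len(trace) < 2:
        return True  # no trail at all is itself suspicious
    origin = trace[0]
    return max(haversine_km(origin, p) for p in trace[1:]) < min_span_km

# A moving human vs. an account that always posts from one spot:
human = [(51.50, -0.12), (51.52, -0.10), (51.47, -0.20)]  # around London
bot = [(51.50, -0.12), (51.50, -0.12), (51.50, -0.12)]
```

Of course, a naive threshold like this is exactly what trail-faking would target, which is why a real system would presumably also look for coordination (many accounts sharing near-identical traces).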

Once that happens we'll perhaps begin a general undermining of the value of geolocational data. Privacy advocates may cheer at that. Nevertheless, it seems a great research opportunity. I'm not aware of work on fake geolocational data and its economy, though there must be stuff more generally. Is value undermined by counterfeits? Or is it part of the cost of doing business? And if so, what is the cost? The article states:

Researchers estimate that the market for fake Twitter followers was worth between $40 million and $360 million in 2013, and that the market for Facebook spam was worth $87 million to $390 million. Italian Internet security researcher Andrea Stroppa has suggested that the market for fake Facebook likes could exceed even that.

So, similarly, what is the potential market for fake geolocational data? I don't think anyone's looked into this. And indeed, how would you research it? One way would be to estimate the proportion of fake accounts and the degree of "long tail" usage on social media. (The Oxford Internet Institute has done work on the uneven geographies of the information economy long tail, aka the digital divide.) So: the number of bots and the disproportionate concentration of activity that we see on Wikipedia, OSM, etc. Stroppa and Di Micheli have taken this approach. (Some of the numbers in that 2013 work may be out of date already.) One finding: FB likes cost $1.07 from Facebook, but only 5c on the black market.
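As a back-of-envelope way of framing the research question, you could parameterize a fake-trail market the same way Stroppa and Di Micheli price likes: accounts needing a trail, times what a trail costs. The parameters below are purely illustrative placeholders I've made up to show the shape of the calculation, not estimates of anything.

```python
def fake_geodata_market(accounts, fake_share, points_per_trail, price_per_point):
    """Back-of-envelope: (fake accounts needing a trail) x (what a trail costs)."""
    return accounts * fake_share * points_per_trail * price_per_point

# Purely illustrative placeholders, not measured values:
# 1.3bn accounts, 5% fake, each buying one 100-point trail at 5c per point.
estimate = fake_geodata_market(1_300_000_000, 0.05, 100, 0.05)
print(f"${estimate:,.0f}")  # prints $325,000,000
```

The research problem, of course, is that every one of those parameters is currently unknown; pinning any of them down empirically would itself be a contribution.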

Anyway, some potential lines of research here. Time for an article!

How to be an academic on social media

Sam Kinsley has been compiling (academic) geography bloggers, and in a recent post asked why it is that blogs don’t take advantage of social media more often:

It was a surprise to me how quite a few of those blogs, with some honourable exceptions, are tightly focussed conduits for personal research and are not participating in wider online/offline conversations. One of the big claims made for blogging in the noughties was, of course, that 'social' media precisely enable broader conversations. While the majority of those active geography bloggers I found use [WordPress] for their blogs they do not seem to use the 'social' functions such as 'reblog' and other conversation tools on the platform.

My immediate reaction to this is as follows. First, I do occasionally use the reblog function. This works very well within the WordPress ecosystem, but have you noticed how infrequently this option comes up on blogs or news stories? They all have Instagram (never used it), Reddit (ditto), etc., but not reblog. Where this is lacking, a whole new post (gasp!) has to be created (like this one). So who do I reblog and why? Well, usually things of personal academic research interest to me. So Sam's point would still apply.

OK then, second, and this is the biggie: I see my activities as being enabled by different platforms or social media. When I started blogging in the mid-2000s (the "noughties") it was explicitly to develop my writing skills, personal reflections and research ideas. I wrote initially pseudonymously, concentrating on Foucault's idea of "self-writing" or hypomnemata, which developed out of ideas in my 2003 book The Political Mapping of Cyberspace. Thus, yes, it is true I've always seen the blog as a place to develop my research. Although I received very few comments, I would have loved to have had conversations, but I think the readership was too small or the platform not convenient enough.

For sharing and conversing, however, which I agree with Sam is an essential part of our lives as scholars, I would turn to other venues. First of all, the best sharing site I know of is Twitter. I joined Twitter in 2009 and cannot imagine not using it. Follow the right people for the right reasons, and interesting (and research-worthy) material simply arrives! The shortness of the tweet is acceptable as long as you can link to the original piece. I do share (tweet and retweet) and I do gain (tremendously) by seeing what other people are sharing. In my recent timeline I've shared several books (and book reviews), a new London Underground map, updates by the Public Lab people on MapKnitter, the size of the new drone market, and so on. My own original content is also published there because the blog cross-posts to Twitter, and to LinkedIn. A blog can therefore have some sharing: not by being read on the blog itself but on the social media site (Twitter). A recent post of mine, for example, was seen on Twitter over 3,200 times and interacted with over 130 times. (Admittedly, what those numbers mean is still a bit hazy to me.)

The other media people often use for conversations and debate are sites such as NewApps or, in politics, say, the Daily Kos. These are group-edited blog sites with sufficient readers to sustain conversation. The nearest one I can think of in geography is perhaps the Society and Space open website, started under inveterate blogger Stuart Elden. However, the comments there usually number in the 1-3 range.

Sam says:

Surely blogging can address both of these drives: you can promote your work, but (and for me – more importantly) you can contribute to conversations and celebrate one another’s work. This is, broadly, what it can mean to participate in a community of practice as Lave & Wenger suggest (although–I don’t agree with everything in the linked piece).

I agree that there's not a big dichotomy between these drives of personal reflection/research and community engagement. It may be better understood as a diversity, so that one's attention is split between a (reasonably small) number of different platforms and there's no single platform for everything. I personally limit myself to blogs, Twitter and Facebook (the latter for maintaining personal contacts and being aware of "events"). I know people use Instagram, Tumblr and Pinterest but they don't work for me.

These are just my personal preferences, and I'm neither advocating that they'll work for everybody nor that they solve Sam's problem. There is a lot of work on blogging and social media of course (Sam mentions what has inspired him, and there's also work by people like danah boyd).

Anyway, there’s more to say on this but that’s my first round of thoughts. I’d welcome the continuation of the conversation…if only we could find the right platform!

Stuart Elden reviews Foucault's Théories et institutions pénales: Cours au Collège de France 1971-1972

Stuart’s review of the last of the courses from Collège de France to be published.

In it, he discusses two main historical themes: popular revolts in seventeenth century France, and medieval practices of inquiry and ordeal. The second theme relates to Foucault’s longstanding interest in what he called the ‘politics of truth’. From courses given in Rio de Janeiro in 1973 and Louvain in 1981, it is clear Foucault saw the medieval period as crucial to that story (a review of the second appeared in Berfrois last year). He said in Brazil that “one could write an entire history of torture, as situated between the procedure of the ordeal and inquiry”. But only now do we have the sustained study of the inquiry that those two later courses drew upon. The first theme merely receives hints elsewhere. Foucault’s example is the Nu-pieds (“bare feet”) revolts of 1639-40 in Normandy. Given that Foucault is often criticised for talking of the positive, productive side of power, but rarely examining it outside of antiquity; or of never showing how resistance takes place or is even possible, this course provides an important corrective.

Readers of Foucault may also wish to take note of this comment toward the end of the review:

I understand that there will be more volumes of lectures to come, including a course on Descartes from his time in Tunisia (currently only available in Arabic), the long-rumoured course on Nietzsche from Vincennes in the late 1960s, and possibly a 1950s course on Anthropology. Lectures originally given in English are now being translated into French, furnished with an entire critical apparatus, and then appearing again in English with the benefits of the French scholarship. A case in point are the lectures Foucault gave in Berkeley and Dartmouth in late 1980, originally edited by Mark Blasius for Political Theory in 1990, which appeared in French in 2013, and are forthcoming with University of Chicago Press in 2015. Other texts may yet be given the canonical treatment.

Notes toward a critical history of cartography, part 1

In the past few months I’ve agreed to develop a course called “A Critical History of Cartography” for our department’s new Masters and Certificate in Digital Mapping. This initiative, which we call New Maps Plus, will offer interested students the ability to earn a Certificate or a Masters of Science from the University of Kentucky in subjects covering digital mapping, GIS, the geoweb, and programming for online maps.

One of the things I proposed for this course was to develop a Reader in Critical Cartography, which would collect in one place, with short commentary, the people, events, maps and theory that had a profound influence on the way we think about maps, or conversely, the way maps may have made us see the world in new ways. This book would then be the assigned reading for the course but would also I hope be of interest to a wider readership.

To that end I’ve developed (with my colleague Matt Zook) the following initial schema for the book and the course. The latter is 10 weeks long so there are ten subject headings. The idea would be to pose the question of what it means to approach maps critically, with a view to looking historically to inform the present, a not uncommon technique I’ve used before.

There are a variety of ways of going about this. One would be to take maps (or people or events) that were radical at the time and recognized as such, even if only by those involved. So this would include the work of JB (Brian) Harley who wrote against the grain of cartographic received wisdom. This kind of work changed the way we understand mapping.

This could then be contrasted with what we now understand to be radical, or what is often held up as radical. This could include the work of Marie Tharp (ocean floor mapping) or John Snow (cholera mapping of London). These maps "changed the world," as the Guardian puts it. But they need our interpretive spin to be recognized as such, and thereby they fall within that category of maps for which books are needed to understand what happened. There's even a book with that title: The Map That Changed the World (about the first complete geology map).

One topic which very much should be included is the history of mapping as a governmental technology. In what ways has it allowed, sped up, paved the way for, and shaped how government (writ large) has pursued its projects?

Another way to understand critical mapping is to investigate those maps which might be described as "mapping at the margins." If we understand mapping as trying to say something about the main event (as it were, the main theme), then we might also be curious about what goes on in the betweenness of spaces: those slivers, medians, gaps, edges, and overlooked places that you see as you travel the main roads and railroads. What are the things that are just off the map, under-represented, or ignored, the places between the lines… the subjugated knowledges, as Foucault puts it in Society Must Be Defended, that do not rise to the level of sufficient scientificity to be noticed? This is a philosophy of "negative mapping," as it were, where things are turned inside out and reversed. What would become possible under such a mapping? What would be learned? You might remember Places on the Margin by Rob Shields, or Bill Bunge's extremely badly drawn maps of nuclear war: maps well outside the mainstream, yet affecting for all that. That sort of thing (or Harley's "Silences and Secrecy"). This would be a good discussion topic, I think, for the MiniCrit conference here in Lexington next October, because the theme is minor theory!

Additionally, such a course should cover events and places that were important in forming our present. We could include a whole history of proto-GIS here, from the Harvard Lab, to formalizing and cohering bodies of knowledge that make digital mapping possible. The latter includes important work by JK Wright, Mark Jefferson and of course Arthur Robinson. And we could also include events such as Friday Harbor and the so-called GIS Wars of the 2000s.

More recently, we'd have to include the post-GIS Wars work by feminists such as Mei-Po Kwan, and whatever phase we're in now of "second wave" critical cartography that explores the econo-politics of representation. I'm thinking here of what new theories we might need to account for all this (Matt Wilson covered some of this territory in his "rhizomologies" Harvard seminar). This econo-politics includes Monica Stephens' work on the geoweb and VGI, Agnieszka Leszczynski's work on big data, spatial media, value and privacy, Dan Cockayne on the affective value of start-ups, Louise Amoore on critical approaches to algorithms, Rob Kitchin… and on and on! Suffice to say, too much good work to mention!

The end of the course is planned as a return of sorts to the possibilities of doing your own mapping. The idea is simply that, now that you've encountered all these approaches, you are in a better position to see what the issues are with collecting your own data and reflexively critiquing your own map. Citizen science and DIY mapping are included. It would be an empowering moment: if you've read through the Reader or done the course, you're eager to take up the possibilities yourself.

I’ll try and post updates as this project progresses, though no promises on how regular these will be! I’d also love to hear from people about what might be in a Reader of Critical Cartography or on any other aspect of the above.

Image: the Fool’s Cap map. Author: unknown. Meaning: unknown.

Cartographies of the Absolute: new book

A fascinating new book by Alberto Toscano and Jeff Kinkle. From the blurb:

Can capital be seen? Cartographies of the Absolute surveys the disparate answers to this question offered by artists, film-makers, writers and theorists over the past few decades. It zones in on the crises of representation that have accompanied the enduring crisis of capitalism, foregrounding the production of new visions and artefacts that wrestle with the vastness, invisibility and complexity of the abstractions that rule our lives.

Endorsed by some interesting people, including Fredric Jameson and Trevor Paglen, which seems an indication of this book’s trajectory.

Recognize the cover image above? From Bunge’s Fitzgerald.

Are robots eating all the jobs? Perhaps not – Andrea Salvatori

We can't blame the loss of mid-level jobs purely on robots

Andrea Salvatori, University of Essex

Several developed countries including the US, UK and Germany have seen their labour markets polarised in recent decades as the number of middle-skilled jobs has declined relative to that of low and high-skilled ones. Technology has been singled out as the main culprit: computers and automation have reduced the demand for mid-level skilled workers in production lines as well as offices, increasing that for high-skilled managers, professionals and technicians. But there has been little or no impact on the demand for low-skilled service occupations.

There is a perception that the range of tasks that can be automated is rapidly expanding thanks to fast technological development. This has exacerbated concerns about the impact of technology on the quantity and quality of jobs. But does the evidence support the view that the future of the labour market is entirely in the hands of the robots?

In the simplest version of this story, as advancements in technology lead firms to demand fewer workers in mid-skill occupations, these jobs should see both employment and wages decline relative to low and high-skill jobs. This should show up in economic data as what we might call “double polarisation”: when both employment and wages grow more in high and low-skill occupations than they do in middling ones. This double polarisation was indeed what happened in the US in the 1990s, but it did not continue into the 2000s. More broadly, wage polarisation has generally not been detected in other countries that have experienced job polarisation, such as Germany and the UK.

The simple story blaming technology alone for taking mid-skilled jobs cannot explain what we see in the data. Other factors are likely to have played an important role, as I have explored in my ongoing research on the situation in the UK.

UK boom in high-skilled jobs

The UK has seen a steady decline in middling occupations since at least 1980. As the graph below shows, growth in top occupations exceeded that in bottom ones in each of the last three decades. This has resulted in a substantial shift of employment from middling to top occupations: out of 100 employees, 19 fewer could be found in middle-skill occupations in 2012 than in 1979. Of these, 16 had moved to higher-skill occupations and only three into lower-skill ones.

Job polarisation in the UK.
Author provided

This is noticeably different from the US experience, where growth at the bottom has progressively outpaced that at the top, culminating in the 2000s when employment growth was concentrated in low-skill occupations.

A distinctive change that took place in the UK since the early 1990s is the expansion in university education which led to a threefold increase in the share of graduates among employees. The expansion of graduates in this way is different to the US, which saw no comparable increase in the share of graduates over the past 20 years. This increase in the educational attainment of the UK workforce accounts for the entire growth in top-skilled occupations and a third of the decline in middling occupations.

There is no indication that wages in middling occupations have been decreasing in the UK, as one would expect if demand was declining due to the spread of automation. It is instead the performance of wages in high-skill occupations that has deteriorated over time relative to middling ones. It was the worst in the 2000s when wages in the 10% highest-paid occupations grew 10% less than those in median occupations.

During this period, the supply of graduates in the UK continued to grow at the same time as the growth of top occupations in other similarly developed countries such as the US and Canada stalled. This stalling elsewhere suggests that there may have been a wider slowdown in the (technology-led) demand for high-skill occupations in the 2000s.

These facts strongly suggest that the improvement in the education of the workforce has contributed significantly to the reallocation of employment from mid- to high-skill occupations in the UK.

Clerical wages going up

But the evidence from the UK also highlights another possible limitation of the story in which technology simply replaces mid-skilled workers. Since the 1990s, the share of mid-level, clerical jobs in the UK has indeed slowly declined, consistent with the idea that technology reduces the need for people in these occupations.

However, over the same period the wages of clerical workers have grown at a rate similar to that of professional occupations, such as lawyers and doctors, a fast-growing group whose real wages increased by about 64%. Similarly, other studies have also found that in the US, clerical occupations have seen their wages increase in spite of the decline in their relative number.

One of the early proponents of the idea that computers displace mid-skilled workers, MIT scholar David Autor, has argued that, within the same occupation, technology might replace workers in certain tasks while complementing them in others which are more cognitive and difficult to automate – or even expand the range of tasks they can perform. So, while much of the filing work once done by secretaries might now be done by computers, the remaining secretaries are supported by computers in their other tasks and perform a range of new organisational ones that were once the domain of managerial staff.

While there is no doubt that technology is a major force at play in the labour market, the differences in experiences across countries suggest other factors play an important role as well. For the UK, several pieces of evidence indicate that the expansion in university education has contributed to changing the occupational structure of the labour market.

Across countries, there is generally little evidence to support the idea that automation has been dramatically disrupting the labour market in recent times. Instead, there are clear indications that the story is likely to be a nuanced one, where the complex interaction between changes in the skills of the workforce, technology and the way different tasks are bundled into jobs means that the fate of those occupations that might appear most at risk might not be quite sealed yet.

The Conversation

Andrea Salvatori is Research Fellow, Institute for Social and Economic Research at University of Essex.

This article was originally published on The Conversation.
Read the original article.