
A still from "The Moderators"

The Mechanical Turk was a chess-playing machine, an automaton that wowed audiences across Europe from 1770 until 1854, when it was destroyed by fire. The only trouble? It was never an automaton at all. Eventually it was unmasked as a mechanical illusion, an elaborate hoax. Hidden inside was the real operator, a human chess master.

The Turk reminds me a bit of the artificial intelligence actually being used by some so-called hi-tech businesses at the moment. You see, behind all the smoke and mirrors another trick may be going on: their artificial intelligence may not be quite as artificial as you think.

By artificial intelligence (or AI) I don’t mean how Amazon might have offered you “featured recommendations, inspired by your browsing history” the other day. That’s far too crude.

Nor do I mean the “Customers-who-bought-this-also-bought-that” moment when you’re shopping online, which is presumably based on firing a relatively short and simple set of algorithms at a huge database of previous transactions on the site. It may be artificial, but it’s hardly intelligence.
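To see quite how little intelligence that takes, here’s a minimal sketch in Python of the kind of co-occurrence counting such a feature presumably boils down to. The shop orders are made up, and a real system would do this at vastly greater scale, but the idea is no deeper than this:

```python
# Toy "customers who bought X also bought Y" sketch -- an illustration
# of simple co-occurrence counting, not any retailer's actual system.
from collections import Counter
from itertools import permutations

# Hypothetical past transactions: each is the set of items in one order.
orders = [
    {"kettle", "toaster"},
    {"kettle", "mug", "teapot"},
    {"toaster", "bread bin"},
    {"kettle", "teapot"},
]

# Count how often each ordered pair of items appears in the same basket.
co_bought = Counter()
for order in orders:
    for a, b in permutations(order, 2):
        co_bought[(a, b)] += 1

def also_bought(item, n=3):
    """Rank the items most often bought alongside `item`."""
    scores = Counter({b: c for (a, b), c in co_bought.items() if a == item})
    return [b for b, _ in scores.most_common(n)]

print(also_bought("kettle"))  # e.g. ['teapot', 'toaster', 'mug']
```

Fire that at a few hundred million transactions and you have “featured recommendations” without a flicker of understanding anywhere in the loop.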

No, I’m thinking more about the AI that these companies supposedly use to police the content that you can or cannot post on your Facebook page. Take a notorious case last September, when Facebook repeatedly banned a Pulitzer Prize-winning photograph taken during the Vietnam War in 1972. It is the disturbing image of a nine-year-old girl called Kim Phúc. She is naked, severely burned and fleeing a napalm attack.

For a time, quite inexplicably, Facebook kept removing posts containing this iconic 20th-century image. The social media giant cited its “community standards” that restrict the display of nudity. In an open letter to Facebook’s Mark Zuckerberg, Norwegian newspaper Aftenposten’s CEO Espen Egil Hansen said this censorship revealed Facebook’s troubling inability to “distinguish between child pornography and famous war photographs”.

Eventually Facebook relented, after a wave of protests by ordinary users and media outlets such as Aftenposten. But was the “nudity” being detected automatically – by the same kind of image recognition software that hunts down, say, photos of women breast-feeding?
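If it was, it’s not hard to see how it could misfire. Early nudity filters famously amounted to little more than counting “skin-coloured” pixels. Here’s a toy sketch of that naive approach in Python, using the Pillow imaging library – to be clear, the colour rule-of-thumb, the threshold and the file name are all illustrative, and nobody outside Facebook knows exactly what it runs:

```python
# A deliberately naive nudity filter of the kind early image-recognition
# systems used: flag any image where "skin-coloured" pixels dominate.
# A toy illustration only, not Facebook's actual system.
from PIL import Image

def skin_ratio(path):
    """Fraction of pixels falling inside a crude skin-tone range."""
    img = Image.open(path).convert("RGB")
    pixels = list(img.getdata())
    skin = sum(
        1 for r, g, b in pixels
        if r > 95 and g > 40 and b > 20   # warm and bright enough...
        and r > g and r > b               # ...and reddish
        and abs(r - g) > 15
    )
    return skin / len(pixels)

def flag_as_nudity(path, threshold=0.30):
    # No notion of context whatsoever: only the pixel colours count.
    return skin_ratio(path) > threshold

print(flag_as_nudity("photo.jpg"))  # "photo.jpg" is a hypothetical input
```

A context-blind test like this would flag a beach snapshot, a boxing match and a famous war photograph of a burned, naked child in exactly the same way.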

A month before the controversy over the Vietnam War photo, there were reports that Facebook would “no longer employ humans to write descriptions for items in its Trending section”.

“Quartz confirmed from multiple sources that Facebook has laid off the entire editorial staff on the Trending team — 15-18 workers contracted through a third party. The Trending team will now be staffed entirely by engineers, who will work to check that topics and articles surfaced by the algorithms are newsworthy.”

And in an oft-quoted speech in Rome the same month, Zuckerberg addressed the question of Facebook’s role in the news media (bear in mind this was in the run-up to the US Presidential election). Appearing to downplay his editorial responsibilities, he said: “We are a tech company, not a media company… We build the tools, we do not produce any content.”

I don’t know about you, but from all this I got the distinct impression that all those “tools” and all that “tech” – the algorithms, the pattern-recognition software, the AI – were becoming the driving force behind such gatekeeping decisions about what we can and cannot see online, from images on social media to which news, or even fake news, is “surfaced” (yuck!) as the top headlines.

It was tech making the choices, or so we thought. Tech, using automated processes and AI, like magic fairy dust, mainly replacing manual interventions by human mediators and moderators (in this context we won’t count the people who formulated the rules behind the algorithms in the first place).

But what if this magic tech wasn’t magic enough, wasn’t fully up to it yet? What if humans were still behind many or even most of these millions of daily decisions about images on Facebook? What if, instead of AI and automation, it was really hands-on all the time, but the tech companies had to give the impression that it was mainly tech because tech was magic and humans weren’t?

Would the public feel more reassured by the human touch? Or less so, because human error can creep in when you have unmanageable work targets and an individual has to handle thousands of images an hour? Or would they feel cheated, like the audience or the chess challenger on discovering what’s really inside the cabinet of the Mechanical Turk?

Old lithograph of how the Mechanical Turk concealed its human chess player

For all their tech, many leading tech firms nowadays may be rather more “hi-Turk”, relying on cheap labour to do the day-to-day maintenance and moderation of their social media. Like the Mechanical Turk’s operator, these people are largely hidden away inside big boxes, only this time the boxes are on the opposite side of the planet, in India or the Philippines. Vast armies of invisible workers in underdeveloped countries.

Each box is decidedly unglamorous compared with the shiny new HQs and campuses of Silicon Valley in California or Google Docks in Dublin. You won’t find any fancy games rooms and lavish staff restaurants, or “micro kitchens”, chillout zones, fitness centres, swimming pools, wellness areas, tech stops or phone booths.

As far as I know the box doesn’t have on-site services for laundry and pet minding either, or free beer or employee share option schemes, or team-building exercises doing Splatoon in a skate park in the Wicklow Mountains.

In these boxes dotted across the Far East are tens of thousands of low-paid moderators – far more workers than the combined total of staff in the Google, Twitter and Facebook offices in the West.

There’s a brief glimpse into this work in India in a superb new documentary called The Moderators. It’s relatively short (20 minutes) and atmospheric, and is by Irish film director Ciarán Cassidy and New Yorker staff writer Adrian Chen:

In an office in India, a cadre of Internet moderators ensures that social media sites are not taken over by bots, scammers, and pornographers. The Moderators shows the humans behind content moderation, taking viewers into the training process that workers go through in order to become social media’s monitors.

The documentary was inspired by Chen’s 2014 article for Wired, “The Laborers Who Keep Dick Pics and Beheadings Out of Your Facebook Feed”. A bot couldn’t have written that headline.

In the Bangalore office all the work appears to have been outsourced. Throughout, the workers refer to the social media and dating websites that they work for as the “client”. An individual moderator might have a target of 2,000 images an hour – roughly one decision every couple of seconds, hour after hour.

Given the subject matter, the documentary is surprisingly gentle and beautifully filmed. The directors make the smart choice of concentrating on a small group of new workers, following them day by day as they are inducted during their training week. Their trainers emphasise what content is to be banned and how to deal with it.

The trainees also have to get to grips with what is acceptable – applying standards that are largely alien to them, drawn from a western liberal society rather than from Indian or Hindu culture. To take a random example, they are told that a woman in a bikini is not considered a pornographic image.

It is, as a review in New Republic puts it, an “on-the-nose parable” about the rich north farming out their psychological trauma to the poor in the south.

So alongside – or in this case instead of – the AI and automation, tech firms use a surprisingly large army of humans behind the scenes. As one trainer says, “You can’t be dependent on automation in this thing, you can’t be. Sometimes system makes error. Humans are needed, that I’m sure… for now at least.”
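In other words, the “automation” is often just a triage layer. Here’s a minimal sketch of how such a human-in-the-loop pipeline might be wired – the model, the threshold and the labels are all hypothetical:

```python
# Sketch of a hybrid moderation pipeline: the machine keeps only the
# decisions it is confident about and routes the rest -- often the bulk
# of them -- to a queue of human moderators. Entirely hypothetical.
import random
from dataclasses import dataclass

@dataclass
class Verdict:
    label: str         # "allow" or "remove"
    confidence: float  # the model's self-reported certainty, 0..1

def classify(image_bytes) -> Verdict:
    """Stand-in for some automated model; here it just guesses."""
    score = random.random()
    return Verdict("remove" if score > 0.5 else "allow", confidence=score)

def moderate(image_bytes, auto_threshold=0.98):
    verdict = classify(image_bytes)
    if verdict.confidence >= auto_threshold:
        return verdict.label     # the machine decides outright
    return "human_review"        # anything uncertain joins a moderator's queue

for item in (b"img-1", b"img-2", b"img-3"):
    print(moderate(item))        # with a high bar, most items go to humans
```

Set the bar for automatic decisions high enough and most of the work lands, quietly, in the human queue.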

It is Facebook’s modern equivalent of the Mechanical Turk – a perfect example of “fauxtomation” in action. Apparently the term was coined only yesterday by Astra Taylor on the Twitter.


It’s one more thing to think about the next time you’re massaging your new Apple iPhone from Bengaluru in India, or barking orders to Alexa on your Amazon Echo made in Hengyang in China, or while you’re on Facebook and “friending” someone you’ve never actually met (yeah, you can tell I’m one of those grumpy sods who think things began to go downhill when “friend” became a verb and “like” became a noun).

It’s a reminder that in this clean, bright Facebooky world of futuristic tech and gizmo design and mega automation, some badly paid humans have to do the grunt work down the virtual sewers, and the superintelligent AI thing may not be as smart and shiny and high-tech as you think.

Hidden labour is nothing new to global capitalism of course. We are surrounded by it, by all this low-paid, invisible labour that has been embodied and congealed in the shiny objects of our shopping malls, from snazzy electronic tablets to dirt-cheap clothing.

Sometimes, briefly, the invisible labour becomes clearly exposed to the western gaze, in documentaries such as The Moderators. Or in terrible disasters such as the one at the Rana Plaza garment factory in Bangladesh in April 2013. The eight-storey, illegally built building was a sweatshop for international companies – leading global brands such as Benetton, Carrefour, Walmart and Primark. Approximately 1,130 workers were killed and thousands more were injured when the building collapsed.

A few links

  • You can also view The Moderators online for free on the Field of Vision website.
  • And the Wikipedia entry on the extraordinary life of Kim Phúc, “the Napalm girl”, is well worth a read.
  • Ciarán Cassidy’s previous work also includes the haunting documentary The Last Days of Peter Bergmann. I can’t recommend it enough.
  • If you want more on the Mechanical Turk, check out Simon Schaffer’s wide-ranging essay “Babbage’s Dancer and the Impresarios of Mechanism” (damn fine title that) in Cultural Babbage, edited by Francis Spufford and Jenny Uglow (Faber & Faber, 1996).