Adrian Chen


In January, I wrote this:

More than two centuries before Deep Blue deep-sixed humanity by administering a whupping to Garry Kasparov, that Baku-born John Henry, the Mechanical Turk purported to be a chess-playing automaton nonpareil. It was, of course, a fake, a contraption that hid within its case a genius-level human champion who controlled its every move. Such chicanery isn’t unusual for technologies nowhere near fruition, but the truth is even ones oh-so-close to the finish line often need the aid of a hidden hand.

In “The Humans Working Behind The Curtain,” a smart Harvard Business Review piece, Mary L. Gray and Siddharth Suri explain how the “paradox of automation’s last mile” manifests itself even in today’s highly algorithmic world: people are quietly hired to complete tasks AI can’t, an arrangement unlikely to be undone by further progress. Unfortunately, most of the stealth work for humans created this way is piecemeal, low-paid and prone to the rapid churn of disruption.

An excerpt:

Cut to Bangalore, India, and meet Kala, a middle-aged mother of two sitting in front of her computer in the makeshift home office that she shares with her husband. Our team at Microsoft Research met Kala three months into studying the lives of people picking up temporary “on-demand” contract jobs via the web, the equivalent of piecework online. Her teenage sons do their homework in the adjoining room. She describes calling them into the room, pointing at her screen and asking: “Is this a bad word in English?” This is what the back end of AI looks like in 2016. Kala spends hours every week reviewing and labeling examples of questionable content. Sometimes she’s helping tech companies like Google, Facebook, Twitter, and Microsoft train the algorithms that will curate online content. Other times, she makes tough, quick decisions about what user-generated materials to take down or leave in place when companies receive customer complaints and flags about something they read or see online.

Whether it is Facebook’s trending topics; Amazon’s delivery of Prime orders via Alexa; or the many instant responses of bots we now receive in response to consumer activity or complaint, tasks advertised as AI-driven involve humans, working at computer screens, paid to respond to queries and requests sent to them through application programming interfaces (APIs) of crowdwork systems. The truth is, AI is as “fully-automated” as the Great and Powerful Oz was in that famous scene from the classic film, where Dorothy and friends realize that the great wizard is simply a man manically pulling levers from behind a curtain. This blend of AI and humans, who follow through when the AI falls short, isn’t going away anytime soon. Indeed, the creation of human tasks in the wake of technological advancement has been a part of automation’s history since the invention of the machine lathe.

We call this ever-moving frontier of AI’s development, the paradox of automation’s last mile: as AI makes progress, it also results in the rapid creation and destruction of temporary labor markets for new types of humans-in-the-loop tasks.•

The Harvard Business Review report was terrain previously covered from a different angle by Adrian Chen in Wired in 2014 with “The Laborers Who Keep Dick Pics and Beheadings Out of Your Facebook Feed.” The journalist studied those stealthily doing the psychologically dangerous business of keeping the Internet “safe.” The opening:

The campuses of the tech industry are famous for their lavish cafeterias, cushy shuttles, and on-site laundry services. But on a muggy February afternoon, some of these companies’ most important work is being done 7,000 miles away, on the second floor of a former elementary school at the end of a row of auto mechanics’ stalls in Bacoor, a gritty Filipino town 13 miles southwest of Manila. When I climb the building’s narrow stairwell, I need to press against the wall to slide by workers heading down for a smoke break. Up one flight, a drowsy security guard staffs what passes for a front desk: a wooden table in a dark hallway overflowing with file folders.

Past the guard, in a large room packed with workers manning PCs on long tables, I meet Michael Baybayan, an enthusiastic 21-year-old with a jaunty pouf of reddish-brown hair. If the space does not resemble a typical startup’s office, the image on Baybayan’s screen does not resemble typical startup work: It appears to show a super-close-up photo of a two-pronged dildo wedged in a vagina. I say appears because I can barely begin to make sense of the image, a baseball-card-sized abstraction of flesh and translucent pink plastic, before he disappears it with a casual flick of his mouse.

Baybayan is part of a massive labor force that handles “content moderation”—the removal of offensive material—for US social-networking sites. As social media connects more people more intimately than ever before, companies have been confronted with the Grandma Problem: Now that grandparents routinely use services like Facebook to connect with their kids and grandkids, they are potentially exposed to the Internet’s panoply of jerks, racists, creeps, criminals, and bullies. They won’t continue to log on if they find their family photos sandwiched between a gruesome Russian highway accident and a hardcore porn video. Social media’s growth into a multibillion-dollar industry, and its lasting mainstream appeal, has depended in large part on companies’ ability to police the borders of their user-generated content—to ensure that Grandma never has to see images like the one Baybayan just nuked.

So companies like Facebook and Twitter rely on an army of workers employed to soak up the worst of humanity in order to protect the rest of us. And there are legions of them—a vast, invisible pool of human labor.•

Chen has now teamed with Ciarán Cassidy to revisit the harrowing topic in a 20-minute documentary called “The Moderators.” It’s a fascinating peek into a hidden corner of our increasingly computerized world, one that’s even more relevant in the wake of the Kremlin-bot Presidential election, as well as very good filmmaking.

We watch as trainees at an Indian company that quietly “cleans” unacceptable content from social-media sites are introduced to the sickening images they must scrub. That tired phrase “you can’t unsee this” gains new currency as the neophytes are bombarded with shock and gore. The film puts the number of workers in this sector at 150,000, all trying to mitigate the chaos of the “largest experiment in anarchy we’ve ever had.” For these kids it’s a first job, a foot in the door, even if they’re stepping inside a haunted house. You have to wonder, though, whether they will ultimately be impacted in a Milgramesque sense, desensitized and disheartened, whether they initially realize it or not.

We are all like the moderators to a certain degree, despite their best efforts. Pretty much everyone who’s gone online during these early decades of the Digital Age has witnessed an endless parade of upsetting images and footage that was never available during a more centralized era. Are we also children who don’t realize what we’re becoming?


You have to have a lot of faith in humanity to be an anarchist. Have you met people? They’re awful.

The collapse of Wall Street, the sway of corporations that see us as consumers rather than citizens, grave concerns about our environment and the decentralization of communication have opened a door for anarchic movements in the form of Occupy Wall Street and beyond. If only I had more faith in people, the awful, awful people.

An excerpt from an excellent interview that Gawker’s Adrian Chen conducted with anarchist, author and scholar David Graeber:


Adrian Chen:

One of the major themes of your book is that the current political structure is not at all democratic. I think among the people who would read your book, that’s kind of a given. But you go further in pointing out the anti-democratic nature of the Founding Fathers.

David Graeber:

Most people think these guys had something to do with democracy, but nobody ever reads what they actually said. What they said is very explicit: They would say things like ‘We need to do something about all this democracy.’


Adrian Chen:

So as an alternative, you promote the model of consensus that Occupy used to organize, through its General Assembly.

David Graeber:

Yeah. What we wanted to do was A) change the discourse and then B) create a culture of democracy in America, which really hasn’t had one. I mean direct democracy, hands on, let’s figure out how you make this system together. It’s ironic because if you go to someplace like Madagascar, everybody knows how to do that. They sit in a circle and they do a consensus process. There is a way that you can do these things, that millions and millions of people over human history have developed and it comes out pretty much the same wherever they are because there are certain logical constraints and people being what they are.

Consensus isn’t just about agreement. It’s about changing things around: You get a proposal, you work something out, people foresee problems, you do creative synthesis. At the end of it you come up with something that everyone thinks is okay. Most people like it, and nobody hates it.


Adrian Chen:

This is pretty much the opposite of what goes on in mainstream politics.

David Graeber:

Yeah, exactly. It’s like, ‘People can be reasonable, I didn’t think it was possible!’ And that’s something I’ve noticed, that authoritarian regimes, what they do is that they always come up with some way to teach people about political decision making that says people aren’t basically reasonable, so don’t try this at home. I always point out the difference between the Athenian Agora and the Roman Circus. When most Athenians gathered together in a big mass it was to do direct democracy. But here’s Rome, this authoritarian regime. When did most Romans get together in the same place? If they’re voting on anything it’s like thumbs-up or thumbs-down to kill some gladiator. And these things are all organized by the elite, right? So all the people who are really running things throw these games where they basically organize people into giant lynch mobs. And then they say, ‘Look, see how people behave! You don’t want to have democracy!’•


From Adrian Chen’s smart Gawker interview with technology skeptic supreme Evgeny Morozov, a passage about why “solving” crime might not be such a good idea, though you may disagree if you’ve recently been mugged:

“You can see such solutionist logic that presumes the existence of problems based solely on the availability of nice and quick digital solutions in many walks of life: We have the tools to make government officials more honest and consistent, ergo hypocrisy and inconsistency are problems worth solving. Take crime. We have the means to predict crime—with ‘big data’ and smart algorithms—and prevent it from happening, ergo eliminating crime is a problem worth solving.

But is eliminating crime really a project worth pursuing? Don’t we need to be able to break laws in order to revise them? Once crimes are committed, cases reach the courts, generate debate in the media, and so forth—the very fact that crimes are allowed to happen allows us to revise the norms in question. So the inefficiency of the system—the fact that the crime rate is not zero—is what saves us from the tyranny of conservatism and complacency that might be the outcome if we delegate crime prevention to algorithms and databases.”
