Platform Moderation and Its Discontents

By Robert Gorwa
August 10, 2018


Custodians of the Internet by Tarleton Gillespie

IT WAS THE SUMMER of 2005, and enthusiasm about the transformative energy of the collaborative “Web 2.0” was approaching its zenith. In only four years since its founding, Wikipedia was coordinating hundreds of thousands of volunteers to produce articles that a study in Nature deemed comparable in quality to those found in the hallowed Encyclopedia Britannica. Millions more were posting videos onto the newly founded YouTube, blogging with tools like Blogger, interacting via Myspace, nurturing their digital avatars on Second Life, and creating a tsunami of user-generated content that Time magazine, reflecting upon its decision to name “You” as its 2006 person of the year, described as “community and collaboration on a scale never seen before.”

On June 17, 2005, the Los Angeles Times ran a little experiment. The paper published an open “wikitorial” about the Iraq War that anyone could edit on the newspaper’s website. Presumably, the editors hoped that the collaborative process would allow readers to openly discuss the highly controversial issue, reflect upon how their opinions differed from those of the editorial staff, and perhaps even reach a moderate, bipartisan consensus. Instead, the article was immediately consumed in an edit war, defaced repeatedly, and, by the end of the weekend, plastered with a particularly infamous image of a man’s anus. It was taken down.

Reflecting upon the incident a few years later, the internet law professor James Grimmelmann explained what the Times had failed to understand. Sure, Wikipedia could theoretically be edited by anyone, but it thrived because it had robust frameworks for “moderation,” which he described as “governance mechanisms” designed to “facilitate cooperation and prevent abuse.” Grimmelmann espoused the “virtues” of content moderation — the architecture, rules, and social norms which constrain the activity of members in a community, and the mechanisms through which they are enforced — arguing convincingly that moderation practices could determine whether or not an online community became truly successful.

If moderation is governance — if it sets the rules of the road, and shapes the boundaries of socially acceptable action — then it is clearly vital to our future successes as nations and peoples. It is especially vital now when so many of us are rightly worried about the political influence of giant social media platforms, and about the health of our communities both online and offline.

That’s the key claim of Tarleton Gillespie’s excellent and timely new book, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions that Shape Social Media. The author, a social media scholar based at Microsoft Research in Boston, has in the past decade established himself as a foremost chronicler and critic of the contemporary technology platform. In a 2010 article so widely read that citing it has become an inside joke at some of the academic conferences I have attended, he outlined how technology companies like Facebook, Twitter, and Google strategically branded themselves as “platforms” to frame their role as neutral providers of services with limited liability and control over the ways in which those services were used. In a series of subsequent articles, he argued that this language has had political consequences, obfuscating the fact that platforms are not neutral, and that they “intervene” constantly in online life by shaping the online experience and algorithmically determining what information to make visible or invisible to users.

Custodians follows this work by accessibly demonstrating how moderation on giant platforms — what UCLA professor Sarah Roberts helpfully terms “commercial content moderation,” as distinct from the traditional, community-based moderation described by Grimmelmann — functions, and how it is politically fraught.

Much has changed in the past decade: Facebook and YouTube have grown to reach much of the world’s connected population, becoming dominant online services with two billion and 1.8 billion users, respectively. Yes, platforms have become a home for friendly cat videos and fun family photos, but they have also become a place where people can publish hate speech, pornography, calls to violence, death threats, rampant misogyny, and terrorist propaganda — content that violates most notions of social acceptability as well as the law in the countries where platform users reside.

This has necessitated a sprawling, global infrastructure for the evaluation and removal of problematic content. It’s no easy task, partly because of the sheer scale involved: YouTube, for instance, stated in 2015 that more than 400 hours of video were being uploaded every minute. Since prior review of all this content would be unfeasible, if not impossible, the detection of offending material has largely been outsourced to users, who “flag” content so that it can then be reviewed (and possibly taken down) by a vast network of international contractors.
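For readers who want the mechanics made concrete, here is a minimal sketch of that flag-and-review loop: users flag posts, flagged items join a queue, and a reviewer later decides whether they come down. Everything in it (the class names, fields, and decision rule) is my own illustration, not any platform’s actual system.

```python
from collections import deque
from dataclasses import dataclass, field


@dataclass
class Post:
    post_id: int
    text: str
    flags: list = field(default_factory=list)  # reasons submitted by users
    removed: bool = False


class ReviewQueue:
    """Illustrative flag-and-review loop: users flag, reviewers decide."""

    def __init__(self):
        self.pending = deque()

    def flag(self, post: Post, reason: str):
        post.flags.append(reason)
        if len(post.flags) == 1:  # the first flag puts the post in the review queue
            self.pending.append(post)

    def review(self, decide):
        """Drain the queue, applying a human (or automated) decision function."""
        while self.pending:
            post = self.pending.popleft()
            post.removed = decide(post)


# A toy run: one flagged post, one reviewer decision.
queue = ReviewQueue()
post = Post(post_id=1, text="some borderline content")
queue.flag(post, reason="graphic violence")
queue.review(decide=lambda p: "violence" in " ".join(p.flags))
print(post.removed)  # True
```

The point of the toy is simply that detection and judgment are separate steps performed by different people, which is exactly where the critiques below take aim.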

The first basket of critiques relates to how this process functions, even in the most clear-cut cases of egregious content. A human, somewhere (often in the Global South), must look at the flagged content to ascertain that it is indeed graphic violence, gore, or something even worse. The working conditions and psychological challenges inherent in this “dirty work” of moderation have been the subject of critiques by journalists and others, who call attention to the exploitative global supply chain that enables today’s platform capitalism.

But many cases are not so clear-cut, and someone needs to define the rules underpinning moderators’ actions. These rules are deeply political, and have significant ramifications many miles from Silicon Valley. Should Kashmiri separatists in India or Kurdish groups in Turkey be treated as terrorist organizations? (The Indian and Turkish governments may think so, but the international community may disagree.) Should photos of breastfeeding mothers or topless feminist activists be considered nudity?

As it stands, the content policy team at a company like Facebook needs not just to set the rules of the road, but also to decide which forms of contestation are legitimate and deserve to be honored (as Gillespie writes, breastfeeding mothers eventually succeeded in having Facebook’s policies tweaked). This means it must effectively determine which forms are not legitimate, and, even more crucially, when the sphere of public acceptability has shifted far enough that the rules should change with the times. Images of female nipples, for example, previously banned outright and currently permitted only in certain political, birth, and health-related contexts, may well be permitted soon, given the massive, concerted #freethenipple campaign on Instagram and other social networks. Now repeat this for the whole world, trying to set global rules that affect hundreds, if not thousands, of peoples and cultures.

Perhaps most incredibly, this system of private law happens in secret. Or, at least, it used to: the book is subtitled “the Hidden Decisions that Shape Social Media,” reflecting the fact that commercial content moderation has traditionally been so opaque as to outrage civil society groups and academics alike. While the “Community Standards” of a company like Facebook have always stated that sexual content or the promotion of terrorism is not permitted, users have had little notion of what precisely fell into those categories, since the detailed rules about exactly what type of content was banned from the platform were not public. The little slivers of content policy that became publicly known were largely the result of investigative reporting and leaked training documents for moderators published in outlets like the Guardian.

This changed in the few months leading up to the book’s publication, when moderation practices were finally opened up to closer scrutiny. Facebook took a significant step this April by releasing a far more comprehensive version of its Community Standards, now a 30-page document with explicit details about the rules and their many exceptions. (For example, we now know exactly what Facebook considers sexual activity, including the “stimulation of naked human nipples,” the presence of erections, and the “use of sex toys, even if above or under clothing.”) These efforts seem to be leading the way, with Google also having recently published a major transparency report that features new data about community guidelines enforcement on YouTube.

Facebook has begun holding informal roundtables for academics and journalists in North America and Europe as part of its engagement with the public around the newly published Community Standards. In addition, a handful of major academic conferences have discussed moderation in depth, bringing together civil society, journalists, academics, and, crucially, platform employees working on content policy.

Because of these efforts, we certainly know much more than we did last year. At a workshop I attended in Menlo Park, California, in January, representatives of Facebook’s content policy team answered questions about the new rules and provided information about aspects of their moderation practices, explaining how they perform robustness checks (a form of what academics would call “intercoder reliability,” having multiple moderators make decisions on the same piece of content); how they use automated classifiers and machine learning (one of the most important developing trends); and how Facebook’s new appeals process would be enhanced in the long run by giving the moderators making appeal decisions more context.
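To give a flavour of what such a robustness check involves, the short sketch below computes raw agreement and Cohen’s kappa (agreement corrected for chance) for two hypothetical moderators labelling the same ten posts. The data and function names are invented for illustration; this is the standard academic measure of intercoder reliability, not Facebook’s internal tooling.

```python
from collections import Counter


def percent_agreement(labels_a, labels_b):
    """Share of items on which two moderators made the same call."""
    matches = sum(a == b for a, b in zip(labels_a, labels_b))
    return matches / len(labels_a)


def cohens_kappa(labels_a, labels_b):
    """Agreement corrected for the level expected by chance alone."""
    n = len(labels_a)
    observed = percent_agreement(labels_a, labels_b)
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(
        (freq_a[label] / n) * (freq_b[label] / n)
        for label in set(labels_a) | set(labels_b)
    )
    return (observed - expected) / (1 - expected)


# Hypothetical keep/remove decisions by two moderators on the same ten posts.
mod_1 = ["remove", "keep", "keep", "remove", "keep", "remove", "keep", "keep", "remove", "keep"]
mod_2 = ["remove", "keep", "remove", "remove", "keep", "keep", "keep", "keep", "remove", "keep"]

print(f"Raw agreement: {percent_agreement(mod_1, mod_2):.0%}")  # 80%
print(f"Cohen's kappa: {cohens_kappa(mod_1, mod_2):.2f}")       # 0.58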

This newfound transparency is obviously valuable and should be applauded. That said, a fundamental issue explored in Gillespie’s book remains under-addressed: accountability.

Users may now know more about the rules of the road, but still have little ability as individuals to affect those rules or participate meaningfully in their creation. It’s this fundamental, neoliberal lack of accountability to the public that Rebecca MacKinnon referred to when she quipped that “Facebookistan” functioned like a dictatorship in her 2012 book, Consent of the Networked. “No moderation without representation,” one could shout from the rooftops.

Gillespie discusses a number of ideas to make platforms more accountable, such as increasing their legal obligations by changing intermediary liability laws and promoting internal mechanisms for democratic engagement. He reflects upon Facebook’s brief experiment with democracy, in which users could vote on proposed policy changes, with the results binding only if more than 30 percent of users participated. What if that experiment — and its inability to achieve large-scale participation — were not viewed as a failure, but rather as the first step in a longer journey? Rome wasn’t built in a day, and Mark Zuckerberg is surely no Augustus. Democratic cultures don’t just spontaneously emerge overnight.

We often talk about the sins of moderation, but what about its potential virtues? It’s noteworthy that although Gillespie’s book is about moderation, he does not discuss moderation itself as a possible future accountability mechanism.

Although the subtitle of the book includes the phrase “content moderation,” Custodians is more precisely about commercial content moderation on platforms: it spends far less time engaging with the more traditional processes of community-based content moderation, which remain the standard on Wikipedia, Reddit, and countless other forums and online communities. That type of moderation is the focus of a thriving community of researchers in fields like Human-Computer Interaction, whose work provides important insights into the practice of community self-governance.

If done well, this type of moderation can provide some of the accountability functions that Gillespie seems to desire: it allows norms to emerge more organically from the ground up, ameliorating many of the issues inherent in the external imposition of platform rules. Since moderators are typically drawn from the population they oversee, becoming one often requires status, deep familiarity, and a meaningful commitment to that particular community.

Different online spaces have developed different systems: in some, moderators are democratically elected; in others, they are bound to transparency, unable to remove offending content outright and obliged instead to strike it through alongside a public explanation of their rationale.
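As a toy illustration of that “strike through, don’t delete” convention, a moderated comment might simply carry its rationale alongside the preserved text, so that readers can see both the offending post and the reason it was sanctioned. The structure below is hypothetical, not any particular forum’s implementation.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Comment:
    author: str
    text: str
    struck: bool = False
    moderator: Optional[str] = None
    rationale: Optional[str] = None

    def strike(self, moderator: str, rationale: str):
        """Mark the comment as moderated without deleting it."""
        self.struck = True
        self.moderator = moderator
        self.rationale = rationale

    def render(self) -> str:
        if not self.struck:
            return f"{self.author}: {self.text}"
        # The original text stays readable, struck through, with a public reason attached.
        crossed = "".join(ch + "\u0336" for ch in self.text)
        return f"{self.author}: {crossed}\n  [struck by {self.moderator}: {self.rationale}]"


c = Comment(author="user42", text="an off-topic personal attack")
c.strike(moderator="mod_7", rationale="violates rule 2: no personal attacks")
print(c.render())
```

The design choice matters: because nothing disappears, the community can audit its moderators, which is precisely the kind of accountability the big platforms’ delete-first model forecloses.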

Of course, these systems are not perfect, and are characterized by their own political issues and problematic power dynamics. But it’s fascinating that, despite their constant invocation of the language of “community,” most platforms never bothered to experiment with meaningful mechanisms for community-based moderation. As Kate Klonick and others have documented, moderation has always been an afterthought: it took more than five years after Facebook was founded for the company to set up its first permanent team of content moderators.

What if platforms had from the outset thought of moderation as their key commodity (as Gillespie argues) and as the key to enabling successful long-term communities (as Grimmelmann and others argue)? Might 2018 look very different? It’s apparent that while platforms have become tremendously successful businesses, they have been far less successful as functional communities: as functional social networks. Platforms scaled so rapidly — they moved so fast and broke so many things — that they forgot to build the norms and governance structures that could help make their services truly transformative and beneficial; they forgot, in other words, to make them into actual communities in any meaningful sense of the word.

The question for all of us is: How do we recapture a bit of the enthusiasm of 2005 to build a better future? As Custodians thoughtfully asks, how can platforms be reimagined so that they better serve user interests and prioritize civic values, rather than corporate ones?

In the conclusion, Gillespie reveals the clever double meaning of the book’s title. Yes, moderators can in one sense be conceived as the “custodial” staff invisibly performing distasteful (yet crucial) work behind the scenes; but in another, even more pressing sense, platforms themselves are now the “custodians” entrusted with contemporary political and social life by a significant proportion of the world’s population. This guardianship brings with it enormous opportunities and responsibilities. How will we ensure that platforms live up to the challenge?

¤


Robert Gorwa researches platforms as a doctoral student in the University of Oxford’s Department of Politics and International Relations. His writing on technology and politics has appeared in Foreign Affairs, Wired UK, Quartz, the Washington Post, and other outlets.
