How Social Networks Set the Limits of What We Can Say Online

Content moderation is hard. This should be obvious, but it’s easily forgotten. It is resource-intensive and relentless; it requires making difficult and often untenable distinctions; it is wholly unclear what the standards should be, especially on a global scale; and one failure can incur enough public outrage to overshadow a million quiet successes. We as a society are partly to blame for having put platforms in this impossible position. We sometimes decry the intrusions of moderators, and sometimes decry their absence.

Even so, we have handed to private companies the power to set and enforce the boundaries of appropriate public speech. That is an enormous cultural power to be held by so few, and it is largely wielded behind closed doors, making it difficult for outsiders to inspect or challenge. Platforms frequently, and conspicuously, fail to live up to our expectations—in fact, given the enormity of the undertaking, most platforms’ own definition of success includes failing users on a regular basis.

The social media companies that have profited most have done so by selling back to us the promises of the web and participatory culture. But those promises have begun to sour. While we cannot hold platforms responsible for the fact that some people want to post pornography, or mislead, or be hateful to others, we are now painfully aware of the ways in which platforms invite, facilitate, amplify, and exacerbate those tendencies.

For more than a decade, social media platforms have portrayed themselves as mere conduits, obscuring and disavowing their active role in content moderation. But the platforms are now in a new position of responsibility—not only to individual users, but to the public more broadly. As their impact on public life has become more obvious and more complicated, these companies are grappling with how best to be stewards of public culture, a responsibility that was not evident to them—or us—at the start.

For all of these reasons, we need to rethink how content moderation is done, and what we expect of it. And this begins by reforming Section 230 of the Communications Decency Act—a law that gave Silicon Valley an enormous gift, but asked for nothing in return.

The Offer of Safe Harbor

The logic of content moderation, and the robust protections offered to intermediaries by US law, made sense in the context of the early ideals of the open web, fueled by naïve optimism, a pervasive faith in technology, and entrepreneurial zeal. Ironically, these protections were wrapped up in the first wave of public concern over what the web had to offer.

The CDA, approved in 1996, was Congress’s first response to online pornography. Much of the law would be deemed unconstitutional by the Supreme Court the following year. But one amendment survived. Designed to shield internet service providers from liability for defamation by their users, Section 230 carved out a safe harbor for ISPs, search engines, and “interactive computer service providers”: so long as they only provided access to the internet or conveyed information, they could not be held liable for the content of that speech.

The safe harbor offered by Section 230 has two parts. The first shields intermediaries from liability for anything their users say; intermediaries that merely provide access to the internet or other network services are not considered “publishers” of their users’ content in the legal sense. Like the telephone company, intermediaries do not need to police what their users say and do. The second, less familiar part adds a twist. If an intermediary does police what its users say or do, it does not lose its safe harbor protection. In other words, choosing to delete some content does not suddenly turn the intermediary into a “publisher.” Intermediaries that choose to moderate in good faith are no more liable for moderating content than if they had simply turned a blind eye to it. These competing impulses—allowing intermediaries to stay out of the way and encouraging them to intervene—continue to shape the way we think about the role and responsibility of all internet intermediaries, including how we regulate social media.

From a policy standpoint, broad and unconditional safe harbors are advantageous for internet intermediaries. Section 230 provided ISPs and search engines with the framework on which they have depended for the past two decades: intervening on the terms they choose, while proclaiming their neutrality to avoid obligations they prefer not to meet.

It is worth noting that Section 230 was not designed with social media platforms in mind, though platforms claim its protections. When Section 230 was being crafted, few such platforms existed. US lawmakers were regulating a web largely populated by ISPs and amateur web “publishers”—personal pages, companies with stand-alone websites, and online discussion communities. ISPs provided access to the network; the only content intermediaries at the time were “portals” like AOL and Prodigy, the earliest search engines like AltaVista and Yahoo, and operators of BBSes, chatrooms, and newsgroups. Blogging was in its infancy, well before the invention of large-scale blog-hosting services like Blogspot and WordPress. Craigslist, eBay, and Match.com were less than a year old. The ability to comment on a web page had not yet been simplified as a plug-in. The law predates not just Facebook but also MySpace, Friendster, and LiveJournal. It even predates Google.

Section 230 does shield what it then awkwardly called “access software providers,” early sites that hosted content provided by users. But contemporary social media platforms profoundly exceed that description. While it might capture YouTube’s ability to host, sort, and queue up user-submitted videos, it is an ill fit for YouTube’s Content ID techniques for identifying and monetizing copyrighted material. While it may approximate some of Facebook’s more basic features, it certainly didn’t anticipate the intricacy of the News Feed algorithm.

The World Has Turned

Social media platforms are eager to retain the safe harbor protections enshrined in Section 230. But a slow reconsideration of platform responsibility is underway. Public and policy concerns around illicit content, initially focused on sexually explicit and graphically violent images, have expanded to include hate speech, self-harm, propaganda, and extremism; platforms have to deal with the enormous problem of users targeting other users, including misogynistic, racist, and homophobic attacks, trolling, harassment, and threats of violence.

In the US, growing concerns about extremist content, harassment and cyberbullying, and the distribution of nonconsensual pornography (commonly known as “revenge porn”) have tested this commitment to Section 230. Many users, particularly women and racial minorities, are so fed up with the toxic culture of harassment and abuse that they believe platforms should be obligated to intervene. In early 2016, the Obama administration urged US tech companies to develop new strategies for identifying extremist content, either to remove it or to report it to national security authorities. The controversial Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA), signed into law in April 2018, penalizes sites that allow advertising that facilitates sex trafficking cloaked as escort services. These calls to hold platforms liable for specific kinds of abhorrent content or behavior are undercutting the once-sturdy safe harbor principle of Section 230.

These hesitations are growing in every corner of the world, particularly around terrorism and hate speech. As ISIS and other extremist groups turn to social media to spread fear with shocking images of violence, Western governments have pressured social media companies to crack down on terrorist organizations. In 2016, European lawmakers persuaded four of the largest tech companies (Facebook, Microsoft, Twitter, and YouTube) to commit to a “code of conduct” regarding hate speech, promising to develop more rigorous review and to respond to takedown requests within 24 hours. Most recently, the European Commission delivered expanded (non-binding) guidelines calling on social media platforms to be prepared to remove terrorist and illegal content within one hour of notification.

Neither Conduit nor Content

Even in the face of longstanding and growing recognition of such problems, the logic underlying Section 230 persists. The promise made by social media platforms—of openness, neutrality, meritocracy, and community—remains powerful and seductive, resonating deeply with the ideals of network culture and a truly democratic information society. But as social media platforms multiply in form and purpose, become more central to how and where users encounter one another online, and involve themselves in the circulation not just of words and images but of goods, money, services, and labor, the safe harbor afforded them seems more and more problematic.

Social media platforms are intermediaries, in the sense that they mediate between users who speak and users who might want to hear them. This makes them similar not only to search engines and ISPs but also to traditional media and telecommunications companies. Media of all kinds face some sort of regulatory framework to oversee how they mediate between producers and audiences, speakers and listeners, the individual and the collective.

Social media platforms violate the century-old distinction, embedded in how we think about media and communication, between the conduits that merely carry our messages and the media that produce content. Platforms promise to connect users person to person, acting as “conduits” entrusted with messages to be delivered to a select audience (one person, or a friend list, or all users who might want to find it). But as a part of their service, these platforms not only host that content, they organize it, make it searchable, and often algorithmically select some of it to deliver as front-page offerings, newsfeeds, trends, subscribed channels, or personalized recommendations. In a way, those choices are the product, meant to draw in users and keep them on the platform, paid for with attention to advertising and ever more personal data.

The moment that social media platforms added ways to tag or sort or search or categorize what users posted, personalized content, or indicated what was trending or popular or featured—the moment they did anything other than list users’ contributions in reverse chronological order—they moved from delivering content for the person posting it to packaging it for the person accessing it. This makes them neither conduit nor content, neither only network nor only media, but a hybrid not anticipated by current law.

It is not surprising that users mistakenly expect them to be one or the other, and are taken aback when they find they are something altogether different. Social media platforms have been complicit in this confusion, as they often present themselves as trusted information conduits, and have been oblique about the way they shape our contributions into their offerings. And as law scholar Frank Pasquale has noted, “policymakers could refuse to allow intermediaries to have it both ways, forcing them to assume the rights and responsibilities of content or conduit. Such a development would be fairer than current trends, which allow many intermediaries to enjoy the rights of each and responsibilities of neither.”

Reforming Section 230

There are many who, even now, strongly defend Section 230. The “permissionless innovation” it provides arguably made the development of the web, and contemporary Silicon Valley, possible; some see it as essential for that to continue. As legal scholar David Post remarked, “No other sentence in the US Code… has been responsible for the creation of more value than that one.” But among defenders of Section 230, there is a tendency to paint even the smallest reconsideration as if it would lead to the shuttering of the internet, the end of digital culture, and the collapse of the sharing economy. Without Section 230 in place, some say, the risk of liability will drive platforms either to remove everything that seems the slightest bit risky, or to turn a blind eye. Entrepreneurs will shy away from investing in new platform services because the legal risk would appear too costly.

I am sympathetic to this argument. But the typical defense of Section 230, even in the face of compelling concerns like harassment and terrorism, tends toward all-or-nothing rhetoric. It is absurd to suggest that there is no room between the complete legal immunity offered by a robust Section 230 without exception, and the total liability platforms would face if Section 230 crumbled away entirely.

It’s time to address a missed opportunity from when Section 230 was drafted. Safe harbor, including the right to moderate in good faith and the freedom not to moderate at all, was an enormous gift to the young internet industry. Historically, gifts of this magnitude have come with a matching obligation to serve the public in some way: the monopoly granted to the telephone company came with the obligation to serve all users; broadcasting licenses have at times been fitted with obligations to provide news, weather alerts, and educational programming.

The gift of safe harbor could finally be fitted with public obligations—not external standards for what to remove, but parameters for how moderation should be conducted fairly, publicly, and humanely. Such matching obligations might include:

  • Transparency obligations: Platforms could be required to report data on the process of moderation to the public or to a regulatory agency. Several major platforms voluntarily report takedown requests, but these reports typically focus on government requests. Until recently, none systematically reported data on flagging, policy changes, or removals made of their own accord. Facebook and YouTube began to do so in 2018, and should be encouraged to continue.

  • Minimum standards for moderation: Without requiring that moderation be handled in a particular way, minimum standards for the worst content, minimum response times, or obligatory mechanisms for redress or appeal could help establish a base level of responsibility and parity across platforms.

  • Shared best practices: A regulatory agency could provide a means for platforms to share best practices in content moderation, without raising antitrust concerns. Outside experts could be enlisted to develop best practices in consultation with industry representatives.

  • Public ombudsman: Most major platforms address the public through their corporate blogs, when announcing policy changes or responding to public controversies. But this is on their own initiative and offers little room for public response. Each platform could be required to have a public ombudsman who both responds to public concerns and translates those concerns to policy managers internally; or a single “social media council” could field public complaints and demand accountability from the platforms.

  • Financial support for organizations and digital literacy programs: Major platforms like Twitter have leaned on non-profit organizations to advise on and even handle some moderation, as well as to mitigate the socio-emotional costs of the harms some users encounter. Digital-literacy programs help users navigate online harassment, hate speech, and misinformation. Enjoying the safe harbor protections of Section 230 might require that platforms help fund these non-profit efforts.

  • An expert advisory panel: Short of regulatory oversight by a government body, a blue-ribbon panel of regulators, experts, academics, and activists could be given access to platforms and their data to oversee content moderation, without revealing platforms’ inner workings to the public.

  • Advisory oversight from regulators: A government regulatory agency could consult on and review the content moderation procedures at major platforms. By focusing on procedures, such oversight could avoid the appearance of imposing a political viewpoint and instead address the more systemic problems of content moderation.

  • Labor protections for moderators: Content moderation at large platforms depends on crowdworkers, either internal to the company or contracted through third-party temporary services. Guidelines could guarantee these workers basic labor protections like health insurance, safeguards against employer exploitation, and greater care for the psychological harm the work can involve.

  • Obligation to share moderation data with qualified researchers: The safe harbor privilege could come with an obligation to set up reasonable mechanisms for qualified academics to access platform moderation data, so they can investigate questions the platform itself might not think to ask, or might not want answered. The new partnership between Facebook and the Social Science Research Council has yet to work out details, but some version of this model could be extended to all platforms.

  • Data portability: Social media platforms have resisted making users’ profiles and preferences interoperable across platforms. But moderation data like blocked users and flagged content could be made portable so it could be applied across multiple platforms.

  • Audits: Without requiring complete transparency in the moderation process, platforms could build in mechanisms for researchers, journalists, and even users to conduct their own audits of the moderation process, to understand better the rules in practice.

  • Regular legislative review: The Digital Millennium Copyright Act stipulates that the Librarian of Congress revisit the law’s anticircumvention exemptions every three years, to account for changing technologies and emergent needs. Section 230, and whatever matching obligations might be fitted to it, could similarly be reexamined to account for the changing workings of social media platforms and the even more rapidly changing nature of harassment, hate, misinformation, and other harms.

We desperately need a thorough, public discussion about the social responsibility of platforms. This conversation has begun, but too often it stalls in a standoff between the defenders of Section 230 and those concerned about the harms it may shield. Until the law is rethought, social media platforms will continue to enjoy the right, but not the responsibility, to police their sites as they see fit.

This essay is excerpted from Custodians of the Internet by Tarleton Gillespie, published by Yale University Press and used by permission. Copyright © 2018 by Tarleton Gillespie.

