What proportion of social media posts get moderated, and why?

Nicolas Suzor
4 min read · May 8, 2018

We have just released the Santa Clara Principles, calling on platforms to provide better information about how they moderate content online.

The principles articulate a minimum set of standards for what information platforms should provide to users, what due process users should be able to expect when their posts are taken down or their accounts are suspended, and what data will be required to help ensure that the enforcement of company content guidelines is fair and unbiased. The three principles urge companies to:

  • publish the numbers of posts removed and accounts permanently or temporarily suspended due to violations of their content guidelines;
  • provide clear notice to all users about what types of content are prohibited, and clear notice to each affected user about the reason for the removal of their content or the suspension of their account; and
  • enable users to engage in a meaningful and timely appeals process for any content removals or account suspensions.

These principles were developed in collaboration with digital rights organizations and civil society groups after the Content Moderation and Removal at Scale conference at Santa Clara University in February 2018. They incorporate research we’ve been working on as part of a grant from the Internet Policy Observatory, and my ongoing work funded by the Australian Research Council.

What proportion of content is removed?

As researchers, we need better information in order to study how well content moderation systems are working. As part of our research, we’ve been tracking how the content moderation processes of major platforms actually work in practice. We’re using this information to evaluate these systems for bias, in a way that lets us monitor improvements over time. We’ve created some very simple dashboards to help people explore this data; they are linked under each graph below.

This data gives us a rare overview of the scale of content moderation on major platforms. We can see, for example, that somewhere around 7–9% of tweets are no longer available two weeks after they have been posted. We can also see trends in content censored in certain countries (Turkey and Germany are the biggest censors of tweets):

Twitter removals dashboard
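To make that measurement concrete, here is a minimal sketch of one way to check a batch of sampled tweet IDs against the (v1.1-era) statuses/lookup endpoint. This is an illustration rather than the code behind the dashboard: it assumes you already have an app-only bearer token, and the three category labels are simplifications of what the API actually distinguishes.

```python
import requests

LOOKUP_URL = "https://api.twitter.com/1.1/statuses/lookup.json"

def check_tweets(tweet_ids, bearer_token):
    """Classify up to 100 sampled tweets as available, withheld, or gone."""
    resp = requests.get(
        LOOKUP_URL,
        params={"id": ",".join(tweet_ids), "map": "true"},
        headers={"Authorization": f"Bearer {bearer_token}"},
        timeout=30,
    )
    resp.raise_for_status()
    results = {}
    # With map=true, the API returns an entry for every requested ID;
    # tweets that have been deleted, suspended, or made private come back as null.
    for tweet_id, status in resp.json()["id"].items():
        if status is None:
            results[tweet_id] = "unavailable"
        elif status.get("withheld_in_countries"):
            # Country-withheld tweets (e.g. under local law in Turkey or Germany)
            # remain visible elsewhere but are blocked in the listed countries.
            results[tweet_id] = "withheld:" + ",".join(status["withheld_in_countries"])
        else:
            results[tweet_id] = "available"
    return results
```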

We can compare this to YouTube, where approximately 20–25% of videos are not available two weeks after they were posted. Generally, around 7–10% of videos are removed as a result of copyright claims and breaches of YouTube’s terms of service. YouTube provides much better information than other platforms, and reports exactly why a video is no longer available:

YouTube removals dashboard
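To give a sense of what those removal notices make possible, here is a simplified sketch of how a removal reason might be read off a YouTube watch page. It is not our research code, and the exact wording YouTube uses changes over time, so the patterns below are illustrative only.

```python
import re
import requests

# Illustrative patterns only: the exact wording of YouTube's removal
# notices changes over time and varies by removal reason.
REMOVAL_PATTERNS = {
    "terms_of_service": re.compile(r"removed for violating YouTube's Terms of Service", re.I),
    "copyright": re.compile(r"no longer available due to a copyright claim", re.I),
    "user_deleted": re.compile(r"removed by the uploader", re.I),
    "private": re.compile(r"video is private", re.I),
}

def classify_video(video_id: str) -> str:
    """Fetch a watch page and guess why (or whether) the video is unavailable."""
    page = requests.get(
        f"https://www.youtube.com/watch?v={video_id}", timeout=30
    ).text
    for reason, pattern in REMOVAL_PATTERNS.items():
        if pattern.search(page):
            return reason
    return "available_or_unknown"
```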

Other platforms, on the other hand, provide very limited information. We can see that somewhere between 10% and 20% of Instagram posts are removed from that platform, but we have no way of telling which were removed by Instagram itself and which were removed by the user. (In fact, it’s hard to even get a random sample from Instagram, so we can only make some basic approximations here.)

Instagram removals dashboard

This infrastructure works by creating a random sample of posts and testing whether each one is still available two weeks later. The platforms that provide meaningful notifications (digital ‘tombstones’, as Medium’s Alex Feerst calls them) help us to understand not just how much content has been removed, but why. Most importantly, this infrastructure then allows us to do more in-depth research: we can study content that has been removed in order to evaluate bias, monitor performance, and track improvements over time. The big differences in the level of information that different platforms provide also show more clearly where greater transparency is needed, and should challenge us to think about what we’ll do with it.
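As a rough illustration of that workflow, here is a simplified sketch of the core sample-and-recheck loop. It is not the actual research pipeline, and a plain HTTP status check is a much cruder availability test than the per-platform checks described above: the point is simply to record sampled post URLs, revisit each one after two weeks, and store the outcome.

```python
import sqlite3
import time
import requests

RECHECK_DELAY = 14 * 24 * 60 * 60  # two weeks, in seconds

def init_db(path="sample.db"):
    db = sqlite3.connect(path)
    db.execute(
        """CREATE TABLE IF NOT EXISTS posts (
               url TEXT PRIMARY KEY,
               sampled_at REAL,
               checked_at REAL,
               status TEXT
           )"""
    )
    return db

def record_sample(db, urls):
    """Store a batch of freshly sampled post URLs."""
    now = time.time()
    db.executemany(
        "INSERT OR IGNORE INTO posts (url, sampled_at) VALUES (?, ?)",
        [(u, now) for u in urls],
    )
    db.commit()

def recheck_due_posts(db):
    """Revisit posts sampled at least two weeks ago and record whether they still resolve."""
    cutoff = time.time() - RECHECK_DELAY
    due = db.execute(
        "SELECT url FROM posts WHERE checked_at IS NULL AND sampled_at <= ?",
        (cutoff,),
    ).fetchall()
    for (url,) in due:
        try:
            resp = requests.get(url, timeout=30, allow_redirects=True)
            status = "available" if resp.status_code == 200 else f"http_{resp.status_code}"
        except requests.RequestException:
            status = "error"
        db.execute(
            "UPDATE posts SET checked_at = ?, status = ? WHERE url = ?",
            (time.time(), status, url),
        )
    db.commit()
```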

Making transparency meaningful

Over the last few months, major tech companies have been making significant improvements in the information they release about their content moderation practices. Social media platforms have faced a lot of criticism in recent years about potential bias in their policies and procedures, but until recently, few companies were willing to talk about their moderation processes.

In two recent announcements, Google and Facebook have set new standards for transparency in content moderation. Google has started to provide new information in its transparency reports about terms of service enforcement on YouTube, and Facebook has published detailed material that tries to explain how it actually enforces its community standards.

These are promising first steps, but there is still more work to do. We see the Santa Clara Principles as another step in the continuous improvement of the social and technical systems that govern our lives. They provide a minimum standard, not just for disclosure, but also for appeals processes.

As more companies make this information available, there will be more work for the rest of us to do. Disclosing numbers about complaints and removals is important — without it, it’s hard to tell what platforms are actually doing. Ultimately, the goal is to help make sure that the policies of platforms are created and enforced in a way that promotes human rights. The next step will require us to actually make sense of this information in a way that helps users understand content moderation systems and helps platforms improve their processes.
