Facebook’s Oversight Board and the challenges of making policy we can trust

Nicolas Suzor
May 27, 2019

Facebook has announced plans for an independent oversight board to help make decisions about what should be allowed on the site. In a Draft Charter, Facebook explains that the Board will offer a new way for users to appeal content decisions, and will have some role in shaping global policy for the site’s two billion users.

Image: Alex Haney

After years of criticism about Facebook’s content policies, the company hopes that the new Oversight Board will provide an independent guide to help it draw the line between acceptable and prohibited content. Facebook’s Founder and Chief Executive Mark Zuckerberg revealed his vision for the Oversight Board last year in a podcast:

“You can imagine some sort of structure, almost like a Supreme Court, that is made up of independent folks who don’t work for Facebook, who ultimately make the final judgment call on what should be acceptable speech in a community that reflects the social norms and values of people all around the world.”

Facebook’s proposed Oversight Board is designed to enhance the trust people have in the platform’s rules. At a time of intense scrutiny of Facebook’s content moderation processes, the Oversight Board is a promising step in the right direction. There are, however, important challenges that Facebook will have to overcome to make this move a success.

Facebook’s legitimacy problem

The challenge, for Facebook and other social media platforms, is finding a way to make their rules legitimate. Users are losing confidence in the platform’s content moderation system: for too long, the rules have been hidden, difficult to understand, and riddled with contradictions and errors. When these rules are enforced in ways that are not understandable, without a good system of due process and appeal, they often look biased. Surveys of users who have had their content removed show that, in the absence of good explanations, they are quick to jump to conspiracy theories: that the platforms themselves are biased, or that they are acting in the interests of other major corporations and governments.

The end goal, if Facebook wants its users to trust its processes, has to be to create a system of governance that is seen to be legitimate. This means that Facebook has to find a way to address substantive disagreements about what its rules should be.

Part of the answer here is to work more closely with democratic governments around the world to set the standards that should apply in their countries. Recently, Zuckerberg praised efforts by France to work with Facebook to implement stronger standards for hate speech: “We can make progress on enforcing the rules, but at some level the question of what speech should be acceptable and what is harmful needs to be defined by regulation, by thoughtful governments that have a robust democratic process”.

By outsourcing the obligation to decide what content should be prohibited, platforms can gain some legitimacy and avoid the need to make and justify the rules themselves.

“in order for people to trust the internet overall and over time, there needs to be the right regulation put in place” — Mark Zuckerberg

But there’s a limit to how much legitimacy can come from the actual laws of different countries. The problem is that governments can only really set standards for illegal content: threats of violence, hate speech, child exploitation, copyright infringement, bullying, and so on. These are categories of speech that nations can prohibit, and they can rely on tech companies to enforce the rules on the content that users post.

The rules for unlawful content are a low bar — a minimum threshold. They’re the lowest common denominator that a society agrees is unacceptable.

The hardest parts of the controversies over content moderation are not about content that is illegal. They’re disagreements about what sort of content should be allowed on major platforms, where it can be made available and promoted to large audiences.

Back in 2013, for example, advocacy groups were concerned that Facebook wasn’t removing jokes and memes about rape. WAM! and a group of civil society organisations and activists launched a campaign targeting advertisers that ultimately convinced Facebook to change its rules. This type of change would be very difficult to achieve through legislation; it’s best addressed by the platforms themselves.

Image: Battered Women’s Support Services.

Facebook and the other major platforms are not governments. They are not required by international human rights law to provide a home for all opinions. Content moderation is a large part of the value that social media platforms provide: they promise a way for their users to engage with content that is meaningful to them. A Facebook that treats all content equally would not be very useful — it would quickly be overrun by spam, trolling, abuse, and hate, and useful updates would be lost in the flood.

Platforms have to set rules for content that their users are allowed to post. These rules are in addition to the minimum standards set by laws of the various countries where Facebook operates. Because Facebook and other social media platforms can’t rely on governments to set all the rules for them, they need another system to create rules that are accepted as legitimate by their users and their critics. The Oversight Board is part of Facebook’s answer to this problem — Facebook hopes that an independent body will help it make better decisions that are easier to justify.

Dealing with predictable disagreements on hard cases

One challenge Facebook will have to deal with is working out how the Oversight Board will reach agreement on difficult questions. Zuckerberg envisions that the Oversight Board will comprise a community “that reflects the social norms and values of people all around the world.” This will be an impossible task unless Facebook is able to set out an explicit set of principles that show the community what it stands for.

There’s no universally correct answer to many of the hard cases that social media platforms are increasingly expected to resolve. Making decisions about content often requires choosing between conflicting interests. These are value judgments, and people legitimately disagree about whose rights should prevail.

Take, for example, the work of Australian editorial cartoonist Bill Leak. In a 2016 cartoon published by The Australian newspaper, Leak depicted an Indigenous man who could not remember his son’s name. The cartoon generated a national controversy over racism in Australian mainstream media. Indigenous groups complained that the cartoon reflected the ongoing and deeply entrenched racism against Australia’s Indigenous population, and that it misrepresented and insulted caring Indigenous fathers. Others defended the free speech rights of the cartoonist to make political commentary, even racist commentary.

Bill Leak’s 2016 cartoon depicts an Indigenous man, holding a beer, who cannot remember his own son’s name. You can view the cartoon and read more about the controversy here.

When the cartoon is shared on Facebook, how should an independent body decide between these two positions? Presumably, some members of Facebook’s Oversight Board will be strong advocates of freedom of expression, and others will be more sensitive to how racist speech works to silence marginalized groups and perpetuates a harmful culture of discrimination.

The Oversight Board has to have some mechanism to resolve this dispute — does it take a vote? Will the answer come down to who happens to be selected for a panel? So far, content policies at major US-based platforms have been heavily influenced by First Amendment principles, in part because early policy teams happened to be staffed by US lawyers. The increased attention from other countries, with very different values and priorities, means that this approach won’t work well in the future.
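To see why this question matters, here is a minimal, purely hypothetical sketch in Python. Nothing in it reflects the Board’s actual design: the board size (40), panel size (5), majority-vote rule, and 60/40 split of member views are all invented assumptions, used only to illustrate how much the outcome of a contested case could depend on which members happen to be drawn for a panel.

```python
import random

# Hypothetical illustration only -- the numbers and the process below are
# invented and do not describe how Facebook's Oversight Board actually works.

random.seed(1)

BOARD_SIZE = 40   # assumed size of the full board
PANEL_SIZE = 5    # assumed size of a randomly drawn panel
TRIALS = 10_000   # number of simulated panel draws

# Suppose 24 of the 40 members would vote to remove this particular post
# and the rest would vote to keep it (True = remove, False = keep).
members = [True] * 24 + [False] * (BOARD_SIZE - 24)

removed = 0
for _ in range(TRIALS):
    panel = random.sample(members, PANEL_SIZE)   # draw a random panel
    if sum(panel) > PANEL_SIZE // 2:             # simple majority to remove
        removed += 1

print(f"Post removed in {removed / TRIALS:.0%} of simulated panels")
# With this invented 60/40 split, roughly a third of random five-member
# panels reach the opposite result from the other two-thirds.
```

The specific numbers are made up; the point is that majority votes on small panels make outcomes in genuinely contested cases partly a matter of chance, which is exactly why the Board needs shared principles to reason from rather than a head-count alone.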

Facebook’s Draft Charter notes that it will include a set of values to guide the Oversight Board’s decision-making processes. Included in this list are “concepts like voice, safety, equity, dignity, equality and privacy.” But this list is pretty broad — and, as evelyn douek writes, a “list that includes everything prioritizes nothing”.

Many major social media platforms have trouble with these types of value judgments: they try to prioritize neutrality, and avoid making decisions about sensitive political topics. This apparent neutrality, though, is causing problems. None of these social media platforms operate in a neutral environment — and neutral tools that do not actively take inequality into account will almost inevitably contribute to the amplification of inequality.

Zuckerberg has explained that Facebook does not want to be an arbiter of truth, and that it wants to “give every person a voice.” At the same time, Facebook wants to balance its principles of free expression against its commitment to safety for everyone. This balance causes real conflicts that continue to play out every day. The problems that tech companies are facing now, including racism, misogyny, bigotry, anti-vaccination content, misinformation, self-harm, and climate change denial, all require difficult judgments about when one person’s speech is harmful to others.

These aren’t value judgments that can easily be resolved solely by a vote between members of an external board. Facebook needs a more useful set of principles that can guide the Board’s decisions.

If the Oversight Board is a type of Supreme Court, it needs a Bill of Rights.

Ultimately, Facebook needs to decide how far it will go to limit free speech in order to prevent harm. It will need a yardstick to help the Oversight Board determine when, exactly, it considers one person’s exercise of speech to stifle the speech rights of marginalized communities.

Developing a set of meaningful, operationalizable principles will require Facebook to take a stronger public position on acceptable speech than it has in the past. What’s clear is that these issues will not go away if platforms simply keep trying to remain ‘neutral’. Like it or not, platforms play a role in amplifying or fighting inequality and potentially harmful speech, and if they are going to navigate these debates, they will need to articulate a more detailed vision of what they stand for.

Understanding local cultural contexts

Whatever form Facebook’s Oversight Board takes, it will need to include people with deep subject matter expertise on a wide range of topics and many different cultural contexts.

As Thomas Kadri and Kate Klonick have pointed out, “The difference between a racist slur and a rap lyric, for example, might turn on the speaker’s identity, her motivations, her audience. These challenges become even more complex in a global context in which moderators must account for different languages and slang; for different historical, cultural and political divides; and for different power structures — all of which might color the social meaning of the speech.”

Many Rohingya fled to the Kutupalong Refugee Camp in Bangladesh, near the border with Myanmar. Image by John Owens for Voice of America.

A lack of local knowledge is something that Facebook has been struggling with, and working to improve, for a while now. The most notorious example is Facebook’s role in helping to circulate hate speech that contributed to the government-sanctioned murder of thousands of Rohingya people in Myanmar. At the same time, Rohingya activists complained that their content, including news about military atrocities, was being repeatedly censored by Facebook.

Facebook expanded into Myanmar and did not understand, until it was far too late, the propaganda and hate speech that fueled religious and ethnic tensions there. Facebook’s rules don’t directly discriminate against the Rohingya, but, in practice, the moderation system reflects and reinforces established patterns of discrimination. In an environment where minority voices are already marginalized, and are likely being flagged for review at a greater rate than hate speech against them, Facebook should have expected that counter-speech might be disproportionately silenced and that extremist content might flourish.

Facebook has worked hard to make improvements in relation to Myanmar, but the problem of understanding local contexts for a platform of two billion people will continue to be an important challenge in the future.

These problems of localization are going to be incredibly tough for an Oversight Board to deal with. Facebook acknowledges that a relatively small group of people will not be sufficient to comprehensively address the range of local cultural problems that Facebook faces every day. Fortunately, Facebook’s Oversight Board doesn’t have to do all of the work of understanding local context by itself. Facebook itself has been hiring more diverse moderation and policy teams, and it also works with a range of civil society organizations and academics on difficult policy issues. If it wants to ensure that it is better able to understand local issues, these efforts should be seriously expanded, and Facebook should work hard to make sure that the Oversight Board and its policy teams are well supported by people with substantial expertise on difficult issues as they arise.

Making real changes to policy

How much power will Facebook’s Oversight Board have? When it disagrees with a decision that is technically correct, will it be able to change Facebook policy? Facebook does not have a clear answer to this question yet: its Draft Charter explains that the Board’s decisions will be binding only on the particular content it considers, and that Facebook might seek the Board’s advice on policy matters.

The Oversight Board can only really handle a small number of cases a year. Its value is not in fixing the inevitable and routine mistakes that Facebook’s moderators make. The day-to-day work of resolving disputes in a way that gives the public confidence that decisions are validly made has to be done by a comprehensive appeals system that is large enough to handle the volume. More work needs to be done, but Facebook, like other social media platforms, has made real improvements to its appeals systems in recent years, and now provides a level of due process for people who think a decision has been wrongly made.

The Oversight Board might be able to shed some light on systematic errors that it sees, but it’s far too small to be able to undertake real review work. A board can resolve mistakes in individual cases when existing appeals processes have failed, but it can’t do this at scale. The real value of the Oversight Board is in its ability to identify persistent, systemic problems and to recommend changes to policy.

This is where things get tricky for Facebook. A real Supreme Court has the power to decide what categories of things a government cannot do without violating its constitution. Supreme Courts are often frustrating for governments; they are designed to get in the way of policy decisions that would interfere with people’s fundamental rights. It’s not clear that Facebook is ready to accept an Oversight Board with that much power.

A few weeks ago, Facebook removed an advertising campaign run in support of a breast cancer advocacy organization. The images themselves don’t breach Facebook’s content policies, which were changed in 2013 to explicitly allow post-mastectomy pictures after a petition from survivors. But they do breach Facebook’s stricter advertising policies, which prohibit “Excessive visible skin or cleavage, even if not explicitly sexual in nature”.

This is an example of a controversial decision that, according to Facebook’s current rules, was correctly made. If the Oversight Board were to review the decision and came to the conclusion that Facebook’s advertising rules were too strict, would it be empowered to change them? This is an extremely sensitive topic for Facebook, which makes the bulk of its money from advertising.

Facebook has said that the Oversight Board will have some role in influencing policy, but how this plays out will have a major impact on whether people come to trust the Board as an independent authority with real power. Because the Board is unlikely to have direct control over policy-making, the process that Facebook puts in place to accept or reject its recommendations will have to be transparent and justifiable in order to generate trust among users.

Building a new constitution

Facebook is in uncharted waters here. It seems to be taking seriously its role in setting and enforcing the rules for how billions of people communicate. It’s trying to create a system that is worthy of the trust of its users, and this is going to take some time to evolve and get right.

This is a ‘constitutional moment’ for Facebook and the other major platforms: we are at a point where users are demanding real legitimacy from the technology companies that shape our social environments. This means real and justifiable limits on how these companies set policies and make decisions, as well as working mechanisms of due process and accountability.

There are no great examples for how a massive company like Facebook can reinvent a legitimate system of governance. The Oversight Board can be part of the answer, but it can’t be the whole answer. Creating a constitutional system that people can trust will require Facebook to integrate the Oversight Board with other checks and balances that limit its power in order to protect its users.

Facebook needs not just a supreme court, but an entire court system, as well as a clearer set of constitutional values that are specific enough to guide future policy, a better way to get input from its users about its rules, and a healthy working fourth estate: independent and informed voices in the media, academia, NGOs, and government that can hold it to account. We don’t yet know what these checks and balances might look like, and we’re only at the start of this journey, but there’s no doubt that now is the time for bold new ideas and experimentation to help make massive social platforms worthy of our trust.

[ Written by Nicolas Suzor and Rosalie Gillett. Nic’s new book, Lawless: The Secret Rules that Govern our Digital Lives, is out with Cambridge University Press in July. Read a full draft PDF for free. ]
