While it remains an experiment, Meta’s Oversight Board provides an interesting case study in third-party regulation of social platforms, and in how official rules and regulations could help to ensure more uniformity and fairness in platform rulings.
Founded back in 2019, the Oversight Board is an independent group of experts to whom Meta and its users can refer appeals over platform and content decisions, providing another avenue for more complex concerns. The Board can then rule on each case and make recommendations as to how Meta might update its policies in-step, though Meta isn’t obligated to implement them. Even so, it provides at least some form of double-checking measure, even if it is essentially funded by Meta itself.
Which will continue to be the case, with Meta today announcing that it will contribute another $150 million to the Oversight Board Trust, enabling the Board to keep hearing cases and shaping Meta’s policy approach.
“Under the terms of the Trust, the funds contributed by the company are irrevocable and can only be used to fulfil the Trust’s purpose of funding, managing, and overseeing the operation of the Oversight Board. This $150 million contribution to the Trust is in addition to the company’s prior contribution of $130 million announced in 2019 when the Trust was first established.”
As noted, the idea of the Oversight Board was to take some of the more difficult decisions out of Meta’s hands, and to serve as an example of how a government-assigned body might regulate platform decisions, as opposed to each individual company making up policy stances on the fly.
Meta has long called for more official regulation around difficult freedom of speech decisions. The most high-profile case in this respect was Meta’s decision to ban former President Donald Trump from its platforms over Trump’s incendiary remarks around the results of the 2020 Election.
Meta referred the case to the Oversight Board, in the hope that it could wash its hands of responsibility for the Trump ban, but the Board ultimately put the onus back on Zuck and Co. to make the call, while also criticizing Meta for its unclear approach to such penalties.
“In applying a vague, standardless penalty and then referring this case to the Board to resolve, Facebook seeks to avoid its responsibilities. The Board declines Facebook’s request and insists that Facebook applies and justifies a defined penalty.”
That ruling is in line with US law, under which private companies set and enforce their own rules about what is and is not allowed on their platforms – which, in some ways, highlights the limitations of the Board, and of the example that Meta is trying to present.
Ideally, Meta doesn’t want to be the bad guy in these cases, and outsourcing such decisions to a panel of lawyers and academics reduces the onus on its teams to take tough stances. But the Board is also beholden to existing regulations. What Meta would really like is for governments around the world to see this limitation and take on a more official, rule-setting role around such speech, which would then apply to all digital platforms across the board, taking such calls out of its hands entirely.
That’s the ultimate hope for the Oversight Board: that it demonstrates why this is a necessary development. In the meantime, the Board can also provide policy guidance and a secondary avenue of appeal for users, which helps to alleviate at least some of the pressure on Meta in making such calls.
The new funding will see the Board continue this work, and with 118 policy recommendations already submitted to Meta as a result of the cases it has heard, it is playing a role in improving Meta’s policies, while also providing an illustrative example of the need for broader regulation.