Meta Asks Oversight Board to Rule on Its Approach to COVID-19 Misinformation

Meta may soon change its approach to COVID-19 misinformation, with the platform calling on its Oversight Board to rule on how it should police COVID-related posts moving forward.

As explained by Meta:

“Misinformation related to COVID-19 has presented unique risks to public health and safety over the last two years and more. To keep our users safe, while still allowing them to discuss and express themselves on this important topic, we broadened our harmful misinformation policy in the early days of the outbreak in January 2020.”

Meta says that this expansion, which saw it broaden its policies to remove false claims about masking, social distancing, and the transmissibility of the virus, has led to the removal of more than 25 million pieces of content since the start of the pandemic.

But now, with the COVID threat receding – or at least becoming less of a focus as a result of the worldwide vaccine rollout – Meta says that it may need to step back from removing all content that falls under its current enforcement banner.

“Meta is fundamentally committed to free expression and we believe our apps are an important way for people to make their voices heard. But some misinformation can lead to an imminent risk of physical harm, and we have a responsibility not to let this content proliferate. But resolving the inherent tensions between free expression and safety isn’t easy, especially when confronted with unprecedented and fast-moving challenges, as we have been in the pandemic. That’s why we are seeking the advice of the Oversight Board in this case. Its guidance will also help us respond to future public health emergencies.”

In essence, Meta’s asking the Board to rule on whether it should continue removing such content outright, or whether it should now scale back to other options, ‘like labeling or demoting it either directly or through our third-party fact-checking program’.

Which, in some ways, seems a little strange, given Meta’s acknowledgment that such misinformation can cause harm, and that its massive scale and reach can further amplify these claims.

Shouldn’t Meta simply keep blocking that content in its apps indefinitely? If the science is settled, as Meta has established by putting the current blocks in place, then nothing should change – unless, of course, the scale of work required to police such content is too much to handle on an ongoing basis.

Which is a concern in itself. If Meta’s not in a position to stop the spread of misinformation, that seems problematic, and something that should be addressed in another way. Part of the problem with the rise of climate change skepticism, for example, is that the mainstream media has allowed counter-scientific arguments to be shared via its platforms and publications, under the premise of providing ‘alternative’ viewpoints.

But there can’t be alternative perspectives on scientific fact. It’s unlikely that you’d see a mainstream publication sharing a report about how gravity doesn’t exist, or how the weather is controlled by human emotions. So why is climate change, which is accepted by the vast, vast majority of the global scientific community, still viewed by many as ‘non-definitive’?

The capacity for people to share and engage with such arguments, at Facebook’s scale, is likely a key reason for this. With that in mind, Meta should be heeding its own statement here, and its stated responsibility not to let misinformation that can lead to an imminent risk of physical harm proliferate – not reviewing the current standards to see whether it can ease off now that things feel more settled.

Because the COVID crisis is still ongoing – 38,000 Americans are still being hospitalized with the virus every week, and 198,000 people have died from COVID in 2022 alone.

That doesn’t seem like the ideal time to be winding back such policies.

Source: www.socialmediatoday.com, originally published July 26, 2022