YouTube Outlines its Evolving Efforts to Combat the Spread of Harmful Misinformation

YouTube has provided a new overview of its evolving efforts to combat the spread of misinformation on its platform, which sheds some light on the various challenges it faces, and the options it’s weighing to manage these concerns.

It’s a critical issue: YouTube, along with Facebook, is regularly identified as a key source of misleading and potentially harmful content, with viewers sometimes taken down ever-deeper rabbit holes of misinformation via YouTube’s recommendations.

YouTube says that it is working to address this, and is focused on three key elements in this push.

The first element is catching misinformation before it gains traction, which YouTube explains is particularly challenging with newer conspiracy theories and misinformation pushes, because it can’t update its automated detection systems without a significant amount of example content to train them on.

Automated detection processes are built on examples. For older conspiracy theories, this works very well, because YouTube has enough data to train its classifiers on what to detect and limit. Newer narratives, however, present a different challenge, because that training data doesn’t yet exist.
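To illustrate why that dependence on examples matters, here’s a minimal sketch of example-driven text classification of the kind described above, using scikit-learn. The data, labels, and model choice are purely illustrative assumptions, not YouTube’s actual detection pipeline.

```python
# Minimal sketch: training a text classifier on labeled examples of
# known narratives. Illustrative only - not YouTube's actual system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = known misinformation narrative, 0 = benign.
examples = [
    ("the moon landing was staged in a studio", 1),
    ("flat earth proof they don't want you to see", 1),
    ("how to bake sourdough bread at home", 0),
    ("beginner guitar lesson: open chords", 0),
]
texts, labels = zip(*examples)

# TF-IDF features plus logistic regression: effective once an established
# narrative has plenty of examples to learn from.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

# A brand-new narrative shares little vocabulary with past examples, so the
# classifier has no basis to flag it - the cold-start problem described above.
print(classifier.predict_proba(["novel claim about a breaking news event"]))
```

The point of the sketch is the last line: without prior examples of a narrative, a model like this can only guess, which is why emerging misinformation is the hard case.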

YouTube says that it’s considering various ways to update its processes on this front, and limit the spread of evolving harmful content, particularly around developing news stories.  

“For major news events, like a natural disaster, we surface developing news panels to point viewers to text articles for major news events. For niche topics that media outlets might not cover, we provide viewers with fact check boxes. But fact checking also takes time, and not every emerging topic will be covered. In these cases, we’ve been exploring additional types of labels to add to a video or atop search results, like a disclaimer warning viewers there’s a lack of high quality information.”

That, ideally, will expand its capacity to detect and limit emerging narratives, though this will always remain a challenge in many respects.

The second element of focus is cross-platform sharing, and the amplification of YouTube content outside of YouTube itself.

YouTube says that it can implement all the changes it wants within its own app, but if people re-share videos on other platforms, or embed YouTube content on other websites, the company has far less control over how that content spreads, which makes mitigation more difficult.

“One possible way to address this is to disable the share button or break the link on videos that we’re already limiting in recommendations. That effectively means you couldn’t embed or link to a borderline video on another site. But we grapple with whether preventing shares may go too far in restricting a viewer’s freedoms. Our systems reduce borderline content in recommendations, but sharing a link is an active choice a person can make, distinct from a more passive action like watching a recommended video.”

This is a key point: while YouTube wants to restrict content that could promote harmful misinformation, if that content doesn’t technically break the platform’s rules, how far can YouTube go in limiting its reach without overstepping the line?

If YouTube can’t limit the spread of such content through sharing, that remains a significant vector for harm, so it needs to do something, but the trade-offs here are considerable.

“Another approach could be to surface an interstitial that appears before a viewer can watch a borderline embedded or linked video, letting them know the content may contain misinformation. Interstitials are like a speed bump – the extra step makes the viewer pause before they watch or share content. In fact, we already use interstitials for age-restricted content and violent or graphic videos, and consider them an important tool for giving viewers a choice in what they’re about to watch.”
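To make the “speed bump” idea more concrete, here’s a minimal sketch of how an interstitial gate might work in principle. The statuses, function names, and flow are hypothetical assumptions for illustration, not YouTube’s actual implementation.

```python
# Minimal sketch of an interstitial "speed bump" before playback.
# Statuses, names, and flow are hypothetical, for illustration only.
from enum import Enum

class VideoStatus(Enum):
    OK = "ok"
    BORDERLINE = "borderline"          # limited in recommendations
    AGE_RESTRICTED = "age_restricted"  # already gated today

def serve_watch_request(status: VideoStatus, viewer_acknowledged: bool) -> str:
    """Decide whether to play the video or show a warning page first."""
    if status is VideoStatus.OK:
        return "play_video"
    if viewer_acknowledged:
        # The viewer has clicked through the warning, so playback proceeds.
        return "play_video"
    # The extra step: a warning the viewer must acknowledge before
    # watching or sharing the content.
    return "show_interstitial_warning"

# Example: a borderline video embedded on a third-party site.
print(serve_watch_request(VideoStatus.BORDERLINE, viewer_acknowledged=False))
# -> show_interstitial_warning
```

The design point is that the gate doesn’t block playback outright; it just adds a deliberate pause, which is the distinction YouTube draws between passive recommendations and a viewer’s active choice.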

Each of these proposals would be seen by some as overstepping, but they could also limit the spread of harmful content. At what point, then, does YouTube become a publisher, which could bring it under existing editorial rules and processes?

There are no easy answers in any of these categories, but it’s interesting to consider the various elements at play.

Lastly, YouTube says that it’s expanding its misinformation efforts globally, due to varying attitudes and approaches towards information sources.

“Cultures have different attitudes towards what makes a source trustworthy. In some countries, public broadcasters like the BBC in the U.K. are widely seen as delivering authoritative news. Meanwhile in others, state broadcasters can veer closer to propaganda. Countries also show a range of content within their news and information ecosystem, from outlets that demand strict fact-checking standards to those with little oversight or verification. And political environments, historical contexts, and breaking news events can lead to hyperlocal misinformation narratives that don’t appear anywhere else in the world. For example, during the Zika outbreak in Brazil, some blamed the disease on international conspiracies. Or recently in Japan, false rumors spread online that an earthquake was caused by human intervention.”

The only way to combat this is to hire more staff in each region, and build more localized content moderation teams and processes that can factor in regional nuance. Even then, there are questions around how restrictions apply across borders: should a warning shown on content in one region also appear in others?

Again, there are no definitive answers, and it’s interesting to consider the varying challenges YouTube faces here, as it works to evolve its processes.

You can read YouTube’s full overview of its evolving misinformation mitigation efforts here.
