TikTok Moves to Further Limit Potential Exposure to Harmful Content Through Automated Removals

TikTok is expanding the role of its automated detection tools for policy violations, with a new process that will remove content flagged as violative at the point of upload, before anyone can see it.

As TikTok explains, every video currently passes through its automated scanning system as part of the upload process, which works to identify potential policy violations for review by a Safety team member, who then lets the user know if a violation has been detected. But at TikTok’s scale, that leaves some room for error, and for exposure, before a review is complete.

Now, TikTok’s working to improve this, or at least to ensure that potentially violative material never reaches any viewers.

As explained by TikTok:

“Over the next few weeks, we’ll begin using technology to automatically remove some types of violative content identified upon upload, in addition to removals confirmed by our Safety team. Automation will be reserved for content categories where our technology has the highest degree of accuracy, starting with violations of our policies on minor safety, adult nudity and sexual activities, violent and graphic content, and illegal activities and regulated goods.”

So rather than letting potential violations move through, TikTok’s system will now block them from upload, which could help to limit harmful exposure in the app.
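To make the change concrete, here’s a minimal sketch of how an upload pipeline might route videos under the new approach. Every name, score, and threshold here is a hypothetical illustration, not TikTok’s actual system.

```python
from dataclasses import dataclass

# Hypothetical sketch of a flag-then-review upload pipeline with an added
# auto-removal path. All names, scores, and thresholds are illustrative
# assumptions, not TikTok's actual implementation.

@dataclass
class ScanResult:
    category: str      # e.g. "minor_safety", "adult_nudity"
    confidence: float  # classifier confidence, 0.0-1.0

def handle_upload(video_id: str, scan: ScanResult) -> str:
    """Route an upload based on the automated scan."""
    AUTO_REMOVE_THRESHOLD = 0.98  # act automatically only when highly confident
    REVIEW_THRESHOLD = 0.60       # otherwise queue for a human moderator

    if scan.confidence >= AUTO_REMOVE_THRESHOLD:
        # New path: high-confidence violations never reach viewers.
        return f"{video_id}: removed on upload ({scan.category})"
    if scan.confidence >= REVIEW_THRESHOLD:
        # Existing path: flagged for the Safety team to confirm.
        return f"{video_id}: queued for Safety team review ({scan.category})"
    return f"{video_id}: published"
```

The key design point is that the automatic path is reserved for the highest-confidence categories, which is exactly how TikTok frames the rollout.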

That, of course, will mean some false positives, and some creator angst – but TikTok notes that its detection systems have proven highly accurate.

“We’ve found that the false positive rate for automated removals is 5% and requests to appeal a video’s removal have remained consistent. We hope to continue improving our accuracy over time.”

Granted, a 5% false positive rate on automated removals, at TikTok’s scale, may still add up to a significant number of videos in raw terms. But the risks of exposure are also significant, and it makes sense for TikTok to lean further into automated detection at that error rate.
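For a sense of the raw numbers involved, here’s a quick back-of-the-envelope calculation. The daily removal volume used is a purely illustrative assumption, not a figure TikTok has published.

```python
# Back-of-the-envelope: raw false positives at a 5% rate.
# Note the rate applies to automated removals, not to all uploads.
FALSE_POSITIVE_RATE = 0.05  # 5%, per TikTok's announcement

illustrative_daily_removals = 1_000_000  # hypothetical assumption, not a TikTok figure

false_positives_per_day = illustrative_daily_removals * FALSE_POSITIVE_RATE
print(f"~{false_positives_per_day:,.0f} videos wrongly removed per day")
# At 1M automated removals/day, a 5% rate would mean ~50,000 wrongful
# removals daily - small as a percentage, large in absolute terms.
```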

And there’s also another important benefit:

“In addition to improving the overall experience on TikTok, we hope this update also supports resiliency within our Safety team by reducing the volume of distressing videos moderators view and enabling them to spend more time in highly contextual and nuanced areas, such as bullying and harassment, misinformation, and hateful behavior.”

The toll content moderation can take on staff is significant, as has been documented in several investigations, and any steps that can be taken to reduce that burden are likely worth it.

In addition to this, TikTok’s also rolling out a new display for account violations and reports, to improve transparency – and, ideally, to discourage users from pushing the limits.

The new system will display violations accrued by each user, with new warnings also shown in different areas of the app as reminders of the same.

Penalties escalate from these initial warnings to full bans for repeated issues, while for more serious violations, like child sexual abuse material, TikTok will remove accounts automatically, and can also block a device outright to prevent new accounts from being created.
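As a rough illustration of that escalation logic, here’s a hypothetical strike-based sketch. The thresholds, category names, and actions are assumptions for illustration, not TikTok’s published rules.

```python
from enum import Enum

# Hypothetical sketch of a strike-based enforcement ladder, loosely modeled
# on the escalation described above. Thresholds and categories are
# illustrative assumptions, not TikTok's actual policy values.

class Action(Enum):
    WARNING = "warning"
    TEMPORARY_SUSPENSION = "temporary_suspension"
    PERMANENT_BAN = "permanent_ban"
    DEVICE_BLOCK = "device_block"

SEVERE_CATEGORIES = {"child_sexual_abuse_material"}  # zero-tolerance violations

def enforcement_action(category: str, prior_violations: int) -> Action:
    """Map a violation to an action based on severity and account history."""
    if category in SEVERE_CATEGORIES:
        # Severe violations skip the ladder: account removal plus a
        # device-level block to prevent re-registration.
        return Action.DEVICE_BLOCK
    if prior_violations == 0:
        return Action.WARNING
    if prior_violations < 3:  # threshold is an illustrative assumption
        return Action.TEMPORARY_SUSPENSION
    return Action.PERMANENT_BAN
```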

These are important measures, especially given TikTok’s young user base. Internal data published by The New York Times last year showed that around a third of TikTok’s user base is 14 years old or under, which means that there’s a significant risk of exposure for youngsters – either as creators or viewers – within the app.

TikTok has already faced various investigations on this front, including temporary bans in some regions due to its content. Earlier this year, TikTok came under scrutiny in Italy after a ten-year-old girl died while trying to replicate a viral trend from the app.

Cases like this underline the need for TikTok, specifically, to implement more measures to protect users from dangerous exposure, and these new tools should help to combat violations and stop violative content from ever being seen.

TikTok also notes that 60% of people who have received a first warning for violating its guidelines have not gone on to have a second violation, which is another vote of confidence in the process.

And while there will be some false positives, the risks of harmful exposure far outweigh the potential inconvenience in this respect.

You can read more about TikTok’s new safety updates here.

Source: www.socialmediatoday.com, originally published on 2021-07-09 16:06:47