Pinterest has published its latest Transparency Report, which outlines all of the content it removed or otherwise took action on based on rule violations over the first half of 2021.
Which seems like a while ago, right? The end of the first half of 2021 is now more than four months in the past – but reporting delays aside, the report provides some interesting perspective on how Pinterest is being used, and the issues it's seeing as it works to keep its community safe.
In terms of increases in content violations, Pinterest has reported a significant jump in Pins deactivated for conspiracy theories in Q2 2021:
“In Q1 2021, we deactivated 24,134 distinct images, which comprised 166,189 Pins, for violating this policy. In Q2 2021, we deactivated 16,204 distinct images, which comprised 1,148,947 Pins, for violating this policy. Of these Pins, 95% were never seen by users in this reporting period.”
So the amount of unique conspiracy content actually fell, but its prevalence skyrocketed, which is likely related to the broader COVID vaccine roll-out and the various concerns around the pandemic.
Pinterest also saw an increase in Pin removals related to adult sexual services, with more than double the amount of content actioned in Q2 versus Q1, while it also saw a lot more deactivations for harassment and criticism – though many of these were in error:
“In Q1 2021, we deactivated 5,540 distinct images, which comprised 124,713 Pins, for violating this policy. In Q2 2021, we deactivated 7,238 distinct images, which comprised 1,238,782 Pins, for violating this policy. We determined that a small handful of these distinct images, and their more than 990,000 machine-identified matching Pins, were incorrectly deactivated, and we reinstated that content after spotting the error.”
Which points to the ongoing challenges of relying on automated content ID systems for this purpose – without the nuance and judgment of a human reviewer, mistakes will happen. Then again, relying on human moderators exposes those workers to the psychological impacts of viewing such material, so there's no ideal solution on this front.
Pinterest also saw an increase in removals related to its dangerous goods and activities policy, though many of these were due to a broader clean-up across the platform.
Going the other way, Pinterest saw big reductions in deactivations from Q1 to Q2 for Pins that shared graphic violence and threats, as well as self-injury and harmful behavior. Spam prevalence was also down, based on the data.
There are some interesting trends of note here, which largely reflect broader social media patterns – though it is interesting to view them from a Pin-specific angle, as the platform is generally not considered to be as impacted by these behaviors as other apps.
But Pinterest now has 444 million users, and with that, significant influence in its own right – which, of course, attracts more ill-intentioned actors seeking to use that reach to amplify their messaging. That's really why we need more uniform, industry-wide approaches to dealing with these issues and concerns, to make it clearer what's acceptable from a broader perspective, and what should be enforced by the platforms themselves.
Based on these numbers, Pinterest is doing a good job of addressing these concerns and stopping much of this content from ever reaching users, but more universal agreement, and understanding, would enhance these efforts.