Can Effective Regulation Reduce the Impact of Divisive Content on Social Networks?

Amid a new storm of controversy sparked by The Facebook Files, an exposé of various internal research projects which, in some ways, suggest that Facebook isn’t doing enough to protect users from harm, the core question that needs to be addressed is often distorted by inherent bias, and by the specific targeting of Facebook, the company, as opposed to social media, and algorithmic content amplification, as broader concepts.

That is, what do we do to fix it? What can be done, realistically, that will actually make a difference? What changes to regulation or policy could feasibly be implemented to reduce the amplification of harmful, divisive posts, which are fueling more angst within society as the influence of social media apps grows?

It’s important to consider social media more broadly here, because every social platform uses algorithms to define content distribution and reach. Facebook is by far the biggest, and has more influence on key elements, like news content – and of course, the research insights themselves, in this case, came from Facebook.

The focus on Facebook, specifically, makes sense, but Twitter also amplifies content that sparks more engagement, LinkedIn sorts its feed based on what it determines will be most engaging, and TikTok’s algorithm is highly attuned to your interests.

The problem, as highlighted by Facebook whistleblower Frances Haugen, is algorithmic distribution, not Facebook itself – so what ideas do we have that could realistically improve that element?

And the further question then is, will social platforms be willing to make such changes, especially if they present a risk to their engagement and user activity levels?

Haugen, who’s an expert in algorithmic content matching, has proposed that social networks should be forced to stop using engagement-based algorithms altogether, via reforms to Section 230 laws, which currently protect social media companies from legal liability for what users share in their apps.

As explained by Haugen:

“If we had appropriate oversight, or if we reformed [Section] 230 to make Facebook responsible for the consequences of their intentional ranking decisions, I think they would get rid of engagement-based ranking.”

The concept here is that Facebook – and by extension, all social platforms – would be held accountable for the ways in which they amplify certain content. So if more people end up seeing, say, COVID misinformation because of algorithmic intervention, Facebook could be held legally liable for any impacts.

That would add significant risk to any decision-making around the construction of such algorithms, and as Haugen notes, that may then see the platforms forced to take a step back from measures which boost the reach of posts based on how users interact with such content.

Essentially, that would likely see social platforms forced to return to pre-algorithm days, when Facebook and other apps would simply show you a listing of the content from the Pages and people you follow, in reverse chronological order. That, in turn, would reduce the motivation for people and brands to share more controversial, engagement-baiting content in order to play into the algorithm’s whims.
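To make the distinction concrete, here’s a minimal sketch, in Python, of the two feed-ordering approaches. The post fields and weights are hypothetical, for illustration only – nothing here reflects any platform’s actual formula:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    created_at: float  # Unix timestamp of publication
    likes: int
    comments: int
    shares: int

def chronological_feed(posts):
    """Pre-algorithm behavior: newest posts from followed accounts first."""
    return sorted(posts, key=lambda p: p.created_at, reverse=True)

def engagement_ranked_feed(posts, w_like=1.0, w_comment=2.0, w_share=3.0):
    """Engagement-based ranking: posts that provoke the most interaction
    rise to the top, regardless of when they were published."""
    def score(p):
        return w_like * p.likes + w_comment * p.comments + w_share * p.shares
    return sorted(posts, key=score, reverse=True)
```

Under the second function, a days-old post that keeps provoking comments can outrank everything fresh in your network – which is exactly the incentive structure Haugen is targeting.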

The idea has some merit – as various studies have shown, sparking emotional response with your social posts is key to maximizing engagement, and thus reach, via algorithmic amplification, and the most effective emotions in this respect are humor and anger. Jokes and funny videos still do well on all platforms, fueled by algorithmic reach, but so too do anger-inducing hot takes, which partisan news outlets and personalities have run with – and that could well be a key source of the division and angst we now see online.

To be clear, Facebook cannot solely be held responsible for this. Partisan publishers and controversial figures have long played a role in broader discourse, and they were sparking attention and engagement with their divisive opinions long before Facebook arrived. The difference now is that social networks facilitate much broader reach, while they also, through Likes and other forms of engagement, provide a direct incentive to provoke, with individual users getting a dopamine hit from triggering responses, and publishers driving more referral traffic, and gaining more exposure, through provocation.

Really, a key issue to consider here is that everyone now has a voice, and when everyone has a platform to share their thoughts and opinions, we’re all far more exposed to them, and far more aware of them. In the past, you likely had no idea about your uncle’s political persuasions, but now you know, because social media reminds you every day, and that type of peer sharing is also playing a role in broader division.

Haugen’s argument, however, is that Facebook incentivizes this – for example, one of the reports Haugen leaked to the Wall Street Journal outlines how Facebook updated its News Feed algorithm in 2018 to put more emphasis on engagement between users, and reduce political discussion, which had become an increasingly divisive element in the app. Facebook did this by changing its weighting for different types of engagement with posts.

The idea was that this would incentivize more discussion, by weighting replies more heavily – but, as you can imagine, putting more value on comments in order to drive reach also prompted more publishers and Pages to share increasingly divisive, emotionally-charged posts, in order to incite more reactions and earn higher ranking scores as a result. With this update, Likes were no longer the key driver of reach, as they had been, with Facebook making comments and Reactions (including ‘Angry’) increasingly important. As such, posts sparking discussion around political topics actually became more prominent, exposing more users to such content in their feeds.
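As a rough sketch of how such a reweighting changes the math – using made-up point values, not Facebook’s actual ones – consider how the same two posts score before and after comments and Reactions are weighted above Likes:

```python
# Hypothetical weights for illustration only - not Facebook's actual values.
PRE_2018_WEIGHTS  = {"like": 1.0, "reaction": 1.0, "comment": 1.0, "share": 1.0}
POST_2018_WEIGHTS = {"like": 1.0, "reaction": 5.0, "comment": 15.0, "share": 30.0}

def interaction_score(counts, weights):
    """Sum each engagement type multiplied by its weight."""
    return sum(weights[kind] * n for kind, n in counts.items())

# A pleasant post that people Like but don't argue about...
calm_post  = {"like": 500, "reaction": 10,  "comment": 20,  "share": 5}
# ...versus a provocative post drawing 'Angry' Reactions and heated comments.
angry_post = {"like": 100, "reaction": 200, "comment": 150, "share": 40}

print(interaction_score(calm_post, PRE_2018_WEIGHTS),    # 535 - calm post wins
      interaction_score(angry_post, PRE_2018_WEIGHTS))   # 490
print(interaction_score(calm_post, POST_2018_WEIGHTS),   # 1000
      interaction_score(angry_post, POST_2018_WEIGHTS))  # 4550 - angry post wins
```

The same two posts flip positions purely because the scoring function changed – no shift in user behavior is needed for divisive content to start traveling further.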

The suggestion, then, based on this internal data, is that Facebook knew this – it knew that this change had ramped up divisive content. But it opted not to revert, or implement another update, because engagement, a key measure of its business success, had indeed increased as a result.

In this sense, removing the algorithmic motivation would make sense – or maybe you could remove algorithmic incentives for certain post types, like political discussion, while still maximizing the reach of more engaging posts from friends, catering to both engagement goals and concerns about division.

That’s what Facebook’s Dave Gillis, who works on the platform’s product safety team, has pointed to in a tweet thread, in response to the revelations.

As per Gillis:

At the end of the WSJ piece about algorithmic feed ranking, it’s mentioned – almost in passing – that we switched away from engagement-based ranking for civic and health content in News Feed. But hang-on – that’s kind of a big deal, no? It’s probably reasonable to rank, say, cat videos and baby photos by likes etc. but handle other kinds of content with greater care. And that is, in fact, what our teams advocated to do: use different ranking signals for health and civic content, prioritizing quality + trustworthiness over engagement. We worked hard to understand the impact, get leadership on board – yep, Mark too – and it’s an important change.

This could be a way forward: using different ranking signals for different types of content may enable optimal amplification of content, boosting beneficial user engagement, while also lessening the motivation for certain actors to post divisive material in order to feed into algorithmic reach.
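A minimal sketch of what Gillis describes might look like the following – the content types, signal names, and weights are all assumptions for illustration, not Facebook’s implementation:

```python
# Hypothetical content types that get quality/trust-based ranking
# instead of engagement-based ranking.
SENSITIVE_TYPES = {"civic", "health"}

def rank_score(post):
    """Per-type ranking: sensitive content is scored on information quality
    and source trustworthiness; everything else keeps engagement scoring."""
    if post["type"] in SENSITIVE_TYPES:
        return 0.6 * post["source_trust"] + 0.4 * post["info_quality"]
    # Cat videos, baby photos, etc.: rank by engagement as usual.
    return post["engagement_score"]

posts = [
    {"type": "entertainment", "engagement_score": 0.9},
    {"type": "health", "source_trust": 0.95, "info_quality": 0.8,
     "engagement_score": 0.2},
    {"type": "civic", "source_trust": 0.3, "info_quality": 0.4,
     "engagement_score": 0.99},  # viral but low-trust - gets demoted
]

feed = sorted(posts, key=rank_score, reverse=True)
```

The trade-off in this design is that “quality” and “trustworthiness” now have to be measured somehow – which, as Haugen notes below, is far harder to do reliably across languages and regions than simply counting clicks.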

Would that work? Again, it’s hard to say, because people would still be able to share posts, comment, and re-distribute material online – there are still many ways that amplification can happen outside of the algorithm itself.

In essence, there are merits to both suggestions: that social platforms could treat different types of content differently, or that engagement-based algorithms could be eliminated to reduce the amplification of such material.

And as Haugen notes, focusing on the systems themselves is important, because content-based solutions open up various complexities when the material is posted in other languages and regions.

Source: www.socialmediatoday.com, originally published on 2021-10-09 23:00:00