Facebook Shuts Down Project Analyzing the Impact of Misinformation in Political Ads on the Platform

This is not a great look for Facebook.

Earlier this week, Facebook announced that it had been forced to cut off a group of NYU researchers from accessing its internal usage data, because the NYU team had failed to adhere to the platform’s more stringent research usage conditions, implemented in the wake of the Cambridge Analytica scandal a few years back.

As explained by Facebook:

“For months, we’ve attempted to work with New York University to provide three of their researchers the precise access they’ve asked for in a privacy-protected way. Today, we disabled the accounts, apps, Pages and platform access associated with NYU’s Ad Observatory Project and its operators after our repeated attempts to bring their research into compliance with our Terms.”

Facebook further noted that the NYU team, which had been researching the spread of misinformation via political ads on the platform specifically, had been using “unauthorized means” to access and collect data from Facebook users, which is in violation of its Terms of Service.

“We took these actions to stop unauthorized scraping and protect people’s privacy in line with our privacy program under the FTC Order.”

Which seems to make sense. No one wants another Cambridge Analytica debacle, and given the stricter conditions the FTC imposed as part of its punishment of Facebook over the CA data leak, Facebook is, of course, keen to stay within the rules, and to ensure that no potential misuse is allowed to occur.

The problem is, the FTC never imposed any such conditions.

As the FTC explained today, the agreement it established with the company “does not bar Facebook from creating exceptions for good-faith research in the public interest”.

As explained by Samuel Levine, the Acting Director of the FTC Bureau of Consumer Protection, via an open letter to Facebook CEO Mark Zuckerberg:

“I write concerning Facebook’s recent insinuation that its actions against an academic research project conducted by NYU’s Ad Observatory were required by the company’s consent decree with the Federal Trade Commission. As the company has since acknowledged, this is inaccurate. The FTC is committed to protecting the privacy of people, and efforts to shield targeted advertising practices from scrutiny run counter to that mission.”

So if it wasn’t because of the FTC order, maybe Facebook was just being extra cautious – or maybe it simply misinterpreted the order’s terms, and will now re-enable the NYU team’s access.

Or, as some have suggested, maybe the NYU team was getting a little too close to uncovering potentially damaging findings about the impact that Facebook ads can have in spreading political misinformation.

As noted, the NYU team was specifically focused on measuring the impacts of political ads: the messaging they present, how Facebook users respond to them, and, ultimately, their potential influence on voting outcomes.

Following the Trump campaign, which weaponized Facebook ads through divisive, emotion-charged messaging, the concern is that Facebook’s advanced ad tools can, in the wrong hands, provide a significant advantage to those willing to bend the truth in their favor, targeting people’s key concerns and pain points with manipulative, if not outright false, messaging that can then be amplified at huge scale.

As a reminder, while Facebook does fact-check regular posts on its platform, it does not fact-check political ads, a potentially glaring omission in its process.

To measure the potential impacts of this, the NYU Ad Observatory project built a browser extension which, once installed, collects data about the ads each user is shown on Facebook, including specifics on how those ads have been targeted. That process, somewhat similar to how Cambridge Analytica gathered data on Facebook usage, spooked Facebook, which sent a cease and desist letter to the NYU team in October last year, calling on them to shut the tool down. The NYU team refused, and while Facebook did allow them to keep using the extension until now, The Social Network has since reassessed, leading to this latest action to stop them from collecting data.
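For a rough sense of the mechanics, the general approach could look something like the content-script sketch below. To be clear, this is a hypothetical illustration only: the CSS selectors, the collection endpoint, and the data shape are all made-up placeholders, not the Ad Observer’s actual code, and the real extension also takes steps to avoid collecting volunteers’ personal information.

```typescript
// Hypothetical sketch of an ad-collecting browser extension content script.
// Selectors, endpoint, and data shape are illustrative placeholders only;
// this is NOT the actual Ad Observer implementation.

// Placeholder endpoint where a volunteer's collected ads would be submitted.
const COLLECTOR_URL = "https://example.org/submit-ad";

interface CollectedAd {
  adHtml: string;        // rendered markup of the sponsored post
  targetingText: string; // text of the "Why am I seeing this ad?" disclosure
  seenAt: string;        // ISO timestamp of when the ad was observed
}

// Scan the feed for posts marked as sponsored (selector is a placeholder).
function collectVisibleAds(): CollectedAd[] {
  const ads: CollectedAd[] = [];
  document
    .querySelectorAll<HTMLElement>("[data-sponsored='true']")
    .forEach((el) => {
      ads.push({
        adHtml: el.outerHTML,
        targetingText:
          el.querySelector(".targeting-disclosure")?.textContent ?? "",
        seenAt: new Date().toISOString(),
      });
    });
  return ads;
}

// Periodically submit whatever has been found; a real tool would also
// deduplicate ads and strip any personal data before sending anything.
setInterval(() => {
  const ads = collectVisibleAds();
  if (ads.length > 0) {
    void fetch(COLLECTOR_URL, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(ads),
    });
  }
}, 30_000); // check every 30 seconds
```

The relevant detail is that the scraping happens inside the volunteer’s own logged-in browser session, outside of any API that Facebook controls, which appears to be what Facebook’s “unauthorized scraping” complaint refers to.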

To be fair, Facebook does say that such info is already available via its Ad Library, but the NYU team says that the library is incomplete, and in some cases inaccurate, and therefore doesn’t provide a full view of the potential impacts.

But even so, Facebook, overall, seems to be within its rights, despite incorrectly pointing to the FTC order as the main cause (a claim Facebook almost immediately clarified). But again, the concern that many have highlighted is that Facebook could really be looking to suppress potentially unflattering data, which could highlight the role that it plays in the distribution of misinformation, contributing to incidents like the Capitol riot and other acts of political unrest.

So does the data available thus far show that Facebook ads are misleading the public?

There have been various analyses of the available NYU data set, some showing that Facebook is failing to label all political ads, despite its expanded efforts, and others showing that Facebook still allows some ads with discriminatory audience targeting to run, even though it supposedly removed those categories from its targeting options.

The NYU data set has also revealed more advanced insights into how politicians are looking to target specific audiences, as reported by Bloomberg:

“For instance, the [NYU dataset] revealed that Jon Ossoff, a Georgia Democrat, targeted Facebook users who were interested in topics such as former president Barack Obama, comedian Trevor Noah and Time magazine during his campaign for US Senate. His opponent, former Republican Senator David Perdue, targeted users who liked Sean Hannity’s show on Fox News.”

That additional insight could prove invaluable in understanding how political candidates focus on specific audiences, and how that targeting shapes audience response – a key element in developing ways to stop such misuse, and to curb messaging manipulation going forward.

It seems, then, like Facebook should allow the project to continue, especially given the impacts of misinformation on the current COVID vaccine rollout. But it’s decided to shut it down.

Is that helpful, overall? Probably not, but it could help Facebook protect its reputation, even with the PR hit that it’s now taking for cutting off the researchers’ access.

In the end, however, we don’t have any definitive answers. Sure, the NYU team now has a fairly sizeable dataset to analyze, which could still reveal dangerous trends to watch for, and mitigate, in future. But greater transparency is the key to stopping the spread of false narratives, and the seeding of dangerous conspiracies and other untruths among the voting public.

Facebook, ideally, should want to contribute to this, and learn from the results. But either it’s too risky, given the user data access it requires, or it’s too damaging, with Facebook potentially ending up looking a lot worse as a result.

We don’t know the definitive reason, but as noted, right now, it’s not the best look for The Social Network. 

Source: www.socialmediatoday.com, originally published August 5, 2021