Instagram has provided an update on the progress of its new Equity Team, which was formed in the wake of the #BlackLivesMatter protests in the US last year, with the stated intention of addressing systemic bias within Instagram’s internal and external processes.
Following the death of George Floyd at the hands of police, Instagram chief Adam Mosseri pledged to do more to address inequity experienced by people from marginalized backgrounds. That work, Mosseri noted, would include a review of all of Instagram’s practices, products and policies, in order to detect issues and improve its systems.
The Equity Team has since been focused on several key elements within the Instagram experience.
As explained by Instagram:
“Early work here includes extensive research with different subsets and intersections of the Black community to make sure we understand and serve its diversity. We’ve spoken with creators, activists, policy minds and everyday people to unpack the diversity of experiences people have when using the platform. We are also in the process of auditing the technology that powers our automated enforcement, recommendations and ranking to better understand the changes necessary to help ensure people do not feel marginalized on our platform.”
Algorithmic bias is a key element – any algorithm trained on user activity is likely to reflect some level of the bias present in that input. As such, Instagram has been focused on educating the staff who build its systems on how such bias could affect their work.
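To illustrate the underlying mechanism (this is a toy sketch, not Instagram's actual system): a ranker that scores content purely by historical engagement will reproduce whatever skew already exists in that engagement history, because past exposure drives past clicks.

```python
# Toy illustration: a ranker scored purely on historical engagement
# reproduces the skew baked into that history. The post IDs and click
# counts are invented for demonstration.
historical_clicks = {
    "group_a_post": 900,  # content from group A was surfaced more often
    "group_b_post": 100,  # group B content got less exposure, so fewer clicks
}

def engagement_score(post_id: str) -> int:
    # The score is just past clicks – it encodes exposure bias, not quality.
    return historical_clicks.get(post_id, 0)

ranked = sorted(historical_clicks, key=engagement_score, reverse=True)
print(ranked)  # ['group_a_post', 'group_b_post'] – the historical skew persists
```

Left unchecked, this loop compounds: higher rank yields more clicks, which yields higher rank – which is why auditing the inputs, not just the outputs, matters.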
“Over the last few months, the Equity team launched an internal program to help employees responsible for building new products and technologies factor in equity at every step of their work. The program, called the Equitable Product Program, was created to help teams consider what changes, big and small, they can make to have a positive impact on marginalized communities.”
Within this effort, Instagram has also implemented new Machine Learning Model Cards, which provide checklists designed to help ensure that new ML systems are built with equity top of mind.
“Model cards work similar to a questionnaire, and make sure teams stop to consider any ramifications their new models may have before they’re implemented, to reduce the potential for algorithmic bias. Model cards pose a series of equity-oriented questions and considerations to help reduce the potential for unintended impacts on specific communities, and they allow us to remedy any impact before we launch new technology. As an example, ahead of the US election, we put temporary measures in place to make it harder for people to come across misinformation or violent content, and our teams used model cards to ensure appropriate ML models were used to help protect the election, while also ensuring our enforcement was fair and did not have disproportionate impact on any one community.”
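In practice, a questionnaire-style model card can act as a launch gate: the model is blocked until every equity question has an answer on record. The sketch below is purely illustrative – the class name, questions, and workflow are assumptions, not Instagram's internal format.

```python
from dataclasses import dataclass, field

# Hypothetical equity questions a team answers before an ML model ships.
# These are illustrative; Instagram's actual model cards are not public.
EQUITY_QUESTIONS = [
    "Which communities could be disproportionately affected by this model?",
    "Was the training data checked for skew across demographic groups?",
    "How will enforcement rates per community be monitored after launch?",
]

@dataclass
class ModelCard:
    model_name: str
    answers: dict = field(default_factory=dict)

    def answer(self, question: str, response: str) -> None:
        self.answers[question] = response

    def ready_for_launch(self) -> bool:
        # Launch is gated on every equity question having a non-empty answer.
        return all(
            q in self.answers and self.answers[q].strip()
            for q in EQUITY_QUESTIONS
        )

card = ModelCard("content_enforcement_classifier")  # hypothetical model name
print(card.ready_for_launch())  # False – no questions answered yet
for q in EQUITY_QUESTIONS:
    card.answer(q, "Reviewed; see audit notes.")
print(card.ready_for_launch())  # True – all questions addressed
```

The design point is that the check is structural: the review cannot be skipped, because an unanswered question makes the card report the model as not ready.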
Again, this is a key element within any platform’s broader equity efforts – if the inputs for your algorithm are inherently flawed, the outcomes will be as well. That also means that social media platforms can play a key role in eliminating bias by removing it from algorithmic recommendations, where possible, and exposing users to a wider range of content.
The Equity Team has also been working to address concerns with “shadowbanning” and users feeling that their content has been restricted within the app.
Instagram says that the perception of alleged ‘shadowbans’ largely stems from a lack of understanding of why people may be getting fewer likes or comments than before, while questions have also been raised around transparency and Instagram’s related enforcement decisions.
In the future, Instagram is looking to add more explanation around these decisions, which could help people better understand if and how their content has been affected.
“This includes tools to provide more transparency around any restrictions on a person’s account or if their reach is being limited, as well as actions they can take to remediate. We also plan to build direct in-app communication to inform people when bugs and technical issues may be impacting their content. In the coming months, we’ll share more details on these new features.”
That could solve a range of problems beyond marginalized communities, with increased transparency making it clear why certain posts are getting less reach, and whether any limitations have been put into effect.
This is a key area of development for Instagram, and for Facebook more broadly, especially, as noted, in relation to machine learning and algorithmic models, which are based on current user behavior.
If the social platforms can establish key areas of bias within these systems, that could be a big step in addressing ongoing concerns, which could end up playing a key role in lessening systemic bias more broadly.
Instagram says that it will also be launching new initiatives to help amplify Black-owned businesses in the future.
Source: www.socialmediatoday.com, originally published on 2021-04-26 16:41:14