Meta has published an in-depth examination of its social media algorithms in an effort to demystify how content gets recommended to Instagram and Facebook users. In a blog post published on Thursday, Meta’s President of Global Affairs Nick Clegg stated that the release of details about the AI systems behind the company’s algorithms is part of Meta’s “wider ethos of openness, transparency, and accountability,” and outlined what Facebook and Instagram users can do to better control what content they see on the platforms.
“With rapid advances taking place with powerful technologies like generative AI, it’s understandable that people are both excited by the possibilities and concerned about the risks,” Clegg wrote in his blog. “We believe that responding to those concerns in an open manner is the best way.”
The majority of the information is contained in 22 “system cards” that cover the Feed, Stories, Reels, and the other ways Meta’s social media platforms let people discover and consume content. Each of these cards contains thorough but understandable information about how the AI systems that power these features rank and recommend content. The card for Instagram Explore, for example, describes the three-step process behind the automated AI recommendation engine:
- Gather Inventory: The system collects public Instagram content, such as photos and reels, that meets the company’s quality and integrity standards.
- Leverage Signals: The AI system then evaluates how people have interacted with similar content or interests, referred to as “input signals.”
- Rank Content: Finally, the system ranks the content from the previous step, placing content it predicts will be of more interest to the user higher in the Explore tab.
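The three steps above can be sketched as a toy pipeline. This is purely illustrative: the function names, the `Post` fields, and the topic-count scoring are assumptions for demonstration, not Meta's actual implementation, which ranks with large predictive AI models rather than simple counts.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    topic: str
    passes_integrity: bool  # whether it meets quality/integrity standards

def gather_inventory(all_posts):
    """Step 1: collect public posts that pass quality and integrity checks."""
    return [p for p in all_posts if p.passes_integrity]

def leverage_signals(interaction_history, post):
    """Step 2: score a post from 'input signals' -- here, a toy count of
    how often the user engaged with content on the same topic."""
    return sum(1 for topic in interaction_history if topic == post.topic)

def rank_content(all_posts, interaction_history):
    """Step 3: order the eligible inventory so the content with the
    highest predicted interest appears first in Explore."""
    inventory = gather_inventory(all_posts)
    return sorted(
        inventory,
        key=lambda p: leverage_signals(interaction_history, p),
        reverse=True,
    )

posts = [
    Post("a", "cooking", True),
    Post("b", "travel", True),
    Post("c", "cooking", False),  # fails integrity checks, dropped in step 1
]
history = ["cooking", "cooking", "travel"]
ranked = rank_content(posts, history)
print([p.post_id for p in ranked])  # → ['a', 'b']
```

The real system differs mainly in step 2: instead of counting topic matches, trained models predict engagement from many signals at once, but the gather-score-rank shape is the same.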
Instagram users can influence this process by saving content (signalling that the system should show them similar posts) or marking it as “not interested” to encourage the system to filter out similar content in the future, according to the card. Users can also view reels and photos that the algorithm did not expressly select for them by choosing “Not personalised” in the Explore filter. More information about Meta’s predictive AI models, the input signals that guide them, and how frequently they are used to rank content can be found in its Transparency Centre.
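Those user controls amount to adjusting the signals that feed the ranker, or bypassing them entirely. A minimal sketch, assuming hypothetical function names and arbitrary boost/suppress values chosen only for illustration:

```python
def apply_feedback(scores, feedback):
    """Adjust toy per-topic ranking scores from explicit user feedback:
    saving content boosts its topic, 'not interested' suppresses it."""
    adjusted = dict(scores)
    for topic, action in feedback.items():
        if action == "saved":
            adjusted[topic] = adjusted.get(topic, 0) + 5  # arbitrary boost
        elif action == "not_interested":
            adjusted[topic] = -1  # mark for filtering out
    return adjusted

def explore_feed(posts_by_topic, scores, personalised=True):
    """With 'Not personalised' selected, ignore signal scores entirely
    and return topics in their original order."""
    topics = list(posts_by_topic)
    if not personalised:
        return topics
    return sorted(
        (t for t in topics if scores.get(t, 0) >= 0),  # drop suppressed topics
        key=lambda t: scores.get(t, 0),
        reverse=True,
    )

scores = {"cooking": 2, "travel": 1, "diy": 1}
feedback = {"travel": "saved", "diy": "not_interested"}
adjusted = apply_feedback(scores, feedback)
print(explore_feed({"cooking": [], "travel": [], "diy": []}, adjusted))
# → ['travel', 'cooking']  (travel boosted by saving, diy suppressed)
print(explore_feed({"cooking": [], "travel": [], "diy": []}, adjusted,
                   personalised=False))
# → ['cooking', 'travel', 'diy']  (unranked)
```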
In addition to the system cards, the blog post lists a few additional Instagram and Facebook features that help explain why users are seeing specific content and how they can adjust their recommendations. In the “coming weeks,” Meta plans to expand the “Why Am I Seeing This?” feature to Facebook Reels, Instagram Reels, and Instagram’s Explore page. Users will be able to click on a specific reel to learn how their previous activity may have influenced the algorithm to show it to them. Instagram is also developing a new Reels feature that will let users mark recommended reels as “Interested” in order to see similar content in the future. The ability to mark content as “Not Interested” has been available since 2021.
Meta also said it will begin rolling out its Content Library and API, a new suite of tools for researchers, in the coming weeks, offering a wide range of public data from Instagram and Facebook. Researchers will be able to apply for access to these tools through recognised partners, beginning with the University of Michigan’s Inter-university Consortium for Political and Social Research, and the library’s data can be searched, analysed, and filtered. According to Meta, these tools will provide “the most comprehensive access to publicly-available content across Facebook and Instagram of any research tool we have built to date,” and will help the company meet its data-sharing and transparency compliance obligations.
These transparency obligations may be the biggest factor behind Meta’s decision to explain in more detail how it uses AI to shape the content we see and interact with. The rapid advancement of AI technology and its surge in popularity in recent months have drawn the attention of regulators around the world, who are concerned about how these systems collect, manage, and use our personal data. Meta’s algorithms aren’t new, but its mishandling of user data in the Cambridge Analytica scandal, as well as the backlash over TikTok’s sluggish transparency efforts, are likely motivations to overcommunicate.