February 23, 2022
AI powers back-end services like personalization, recommendation, and ranking that enable a seamless, customizable experience for people who use our products and services. But understanding how and why AI operates can be difficult for everyday users. We’re aiming to change that.
At Meta, we believe it’s important to empower people with tools and resources that help them understand how AI shapes their product experiences, which is why we’ve committed to Transparency and Control as one of our five pillars of Responsible AI. One of the ways we are exploring increased explainability is through model and system documentation. Today, we are sharing the next step in this journey by publishing a prototype AI System Card tool that is designed to provide insight into an AI system’s underlying architecture and help better explain how the AI operates.
This inaugural AI System Card outlines the AI models that make up an AI system and can help enable a better understanding of how these systems operate based on an individual’s history, preferences, settings, and more. The pilot System Card we’ve developed, and continue to test, is for Instagram feed ranking, which is the process of taking as-yet-unseen posts from accounts that a person follows and then ranking them based on how likely that person is to be interested in them.
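To make the ranking step concrete, here is a minimal sketch in Python of the idea described above: each as-yet-unseen post from a followed account gets a predicted interest score (in practice produced by ML models drawing on a person’s history, preferences, and settings), and posts are ordered by that score. All names and values are illustrative assumptions, not Meta’s actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    author: str
    predicted_interest: float  # hypothetical output of an ML model scoring likely interest

def rank_unseen_posts(unseen_posts: list[Post]) -> list[Post]:
    """Order unseen posts from followed accounts by predicted interest, highest first."""
    return sorted(unseen_posts, key=lambda p: p.predicted_interest, reverse=True)

# Example: three unseen posts, ranked by predicted interest.
feed = rank_unseen_posts([
    Post("p1", "@friend", 0.42),
    Post("p2", "@photographer", 0.87),
    Post("p3", "@news", 0.15),
])
print([p.post_id for p in feed])  # ['p2', 'p1', 'p3']
```

In a real system the scores would be recomputed continuously as new signals arrive, which is part of why a static document can only describe the principles of the ranking rather than any one person’s feed at a given moment.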
Making AI more explainable is a cross-industry, cross-disciplinary dialogue. Companies, regulators, and academics are all testing ways of better communicating how AI works through various forms of guidance and frameworks that can empower everyday people with more AI knowledge.
Because AI systems are complex, it is both important and challenging to develop documentation that consistently addresses people’s need for transparency and their desire for simplicity in such explanations. As such, data sheets, model cards, System Cards, and fact sheets have different audiences in mind. We hope that a System Card can be understood by experts and nonexperts alike, and can provide an in-depth view into the complex interface between AI systems and the people who use them, in a way that is repeatable and scalable for Meta. Providing a framework that is technically accurate, captures the nuance of how AI systems operate at Meta’s scale, and is easily digestible for everyday people using our technologies is a delicate balance, especially as we continue to push the state of the art in the field.
With this AI System Card release and continued exploration, we hope to lay a foundation for, and continually iterate on, which elements of an AI system should be discussed, at which intervention points, and with which audiences, via simple, easily digestible external tools.
Many machine learning (ML) models are typically part of a larger AI system: a group of ML models and AI and non-AI technologies that work together to achieve specific tasks. Because ML models don’t always work in isolation to produce outcomes, and models may interact differently depending on what systems they’re a part of, model cards — a broadly accepted standard for model documentation — don’t paint a comprehensive picture of what an AI system does. For example, while our image classification models are all designed to predict what’s in a given image, they may be used differently in an integrity system that flags harmful content versus a recommender system used to show people posts they might be interested in.
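As a hypothetical illustration of that point (none of the function names, labels, or thresholds below reflect Meta’s actual systems), the output of a single classifier can feed two very different downstream systems, which is exactly what a model card alone would not capture:

```python
def classify_image(image_bytes: bytes) -> dict[str, float]:
    """Stand-in for an image classification model: returns label probabilities."""
    # A real model would run inference here; this stub returns fixed scores.
    return {"cat": 0.90, "graphic_violence": 0.02}

def integrity_check(image_bytes: bytes, threshold: float = 0.8) -> bool:
    """Integrity-style use: flag an image if any harmful label exceeds the threshold."""
    scores = classify_image(image_bytes)
    harmful_labels = {"graphic_violence"}
    return any(scores.get(label, 0.0) > threshold for label in harmful_labels)

def recommend_score(image_bytes: bytes, user_interests: dict[str, float]) -> float:
    """Recommender-style use: reuse the same classifier to score topical relevance."""
    scores = classify_image(image_bytes)
    return sum(scores.get(topic, 0.0) * weight for topic, weight in user_interests.items())

image = b"..."  # placeholder image bytes
print(integrity_check(image))                # False: no harmful label above threshold
print(recommend_score(image, {"cat": 1.0}))  # 0.9: relevant to a cat-loving user
```

The same model output leads to a removal decision in one system and a ranking boost in another, so documenting the system around the model matters as much as documenting the model itself.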
After consulting with external experts both in the United States and abroad, Meta’s Responsible AI (RAI) team chose to explore System Cards as our initial approach to looking holistically across an AI system, versus one-off models. Feedback from parties the team consulted helped solidify this approach, as it emphasized the importance of understanding how the outputs of a model are used in a wider product, or downstream in other models, as well as what policy actions result from their use and what impact they have on groups of people who use a given AI-powered product or service.
The pilot System Card, launched by Instagram’s Equity team, builds on the team’s prior introduction of Model Cards. Because Instagram drove that initial workstream, its models provided a well-documented case for exemplifying how our systems actually work, bringing us to an AI System Card that shows, as well as tells, how the AI feed ranking system dynamically works to deliver a personalized experience.
While System Cards help explain how an AI system functions in a digestible format, they also have some limitations. Here we outline a few limitations we’re considering as we progress this work:
AI systems are designed to learn and change constantly, requiring continuous updates. And at Meta, many of our systems do not consist of AI exclusively; humans are in the loop as well. Even so, a single System Card may not be relevant in the same way to each person who sees it, because we continue to test new experiences for our users. Time stamps letting people know when a System Card was last updated are one possible mitigation we’ll explore.
Our models do not exist in a vacuum, and while we continue to provide more transparency into how they work, they are always evolving. Our goal here is to make it easier to understand the principles behind what our systems recognize and recommend, rather than offer a playbook on how to succeed on the platform or outline its rules. Translating highly technical information into general terms that can be widely understood, while still being accurate, is a challenge. The addition or deletion of a single word or phrase has the potential to compromise or invalidate the technical explanation. Because of Meta’s global scale, we also have to take into consideration language barriers and translation, as well as the difference in meaning that a technical term or code has once we’ve given it a simplified explanation.
The technology behind our systems is still powered by people — and we must diligently continue ensuring that everything is being done to make the systems as fair as possible. In Instagram’s introduction of Model Cards, the primary goal was to provide a specific set of checks along the way, to make sure teams were thinking about the unintended consequences of what they were launching before it impacted the community.
Revealing the exact workings of certain AI systems could compromise their security or open a model to adversarial attacks, potentially harming the people who use our products. Too much information in some of our System Cards could give malicious actors enough knowledge about a system or model to reverse-engineer it. Still, we believe it’s important to educate our users about how our AI systems work, so we seek to strike the right balance of transparency.
System Cards can serve as an important step in the journey toward helping people understand what AI transparency looks like at Meta’s scale. As the industry evolves and discussions about model documentation and transparency continue, we will continue to identify other pilots to undertake and iterate on our approach over time, so we can reflect product changes, evolving industry standards, and expectations around AI transparency.
To learn more, check out this technical paper that explains the research that led to developing System Cards.