Responsible AI
Driven by our belief that AI should benefit everyone
Our commitment to Responsible AI
Our Responsible AI efforts are propelled by our mission to help ensure that AI at Meta benefits people and society. Through regular collaboration with subject matter experts, policy stakeholders and people with lived experiences, we’re continuously building and testing approaches to help ensure our machine learning (ML) systems are designed and used responsibly.
We ground our work to ensure that AI is designed and used responsibly in a set of core values:
Protecting the privacy and security of people’s data is the responsibility of everyone at Meta.
Everyone should be treated fairly when using our products, and our products should work equally well for all people.
AI systems should meet high performance standards, and should be tested to ensure they behave safely and as intended.
People who use our products should have more transparency and control around how data about them is collected and used.
We build reliable processes to ensure accountability for our AI systems and the decisions they make.
One way we are addressing AI fairness through research is the creation and distribution of more diverse datasets. Datasets that are used to train AI models can reflect biases, which are then passed on to the system. But biases might also be due to what isn’t in the training data. A lack of diverse data — or data that represents a wide range of people and experiences — can lead to AI-powered outcomes that reflect problematic stereotypes or fail to work equally well for everyone.
Improving fairness will often require measuring the impact of AI systems on different demographic populations and mitigating unfair differences. Yet the data necessary to do so is not always available, and even when it is, collecting and storing it can raise privacy concerns. After engaging with civil rights advocates and human rights groups, who further confirmed these fairness challenges, we identified new approaches that help us access the data needed to meaningfully measure the fairness of the AI models on our platforms across racial groups.
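As a simple illustration of what this kind of measurement can involve, the sketch below computes a per-group false positive rate for a binary classifier and the gap between groups. The group names, labels, and data are hypothetical; this is an illustrative sketch, not Meta's measurement pipeline.

# Illustrative sketch only: per-group false positive rates for a binary classifier.
from collections import defaultdict

def per_group_false_positive_rate(records):
    """records: iterable of (group, true_label, predicted_label) tuples."""
    false_positives = defaultdict(int)
    negatives = defaultdict(int)
    for group, y_true, y_pred in records:
        if y_true == 0:          # only examples whose true label is negative can be false positives
            negatives[group] += 1
            if y_pred == 1:
                false_positives[group] += 1
    return {g: false_positives[g] / negatives[g] for g in negatives}

# Hypothetical evaluation data: (group, true_label, predicted_label).
records = [
    ("group_a", 0, 0), ("group_a", 0, 1), ("group_a", 1, 1),
    ("group_b", 0, 0), ("group_b", 0, 0), ("group_b", 1, 0),
]
rates = per_group_false_positive_rate(records)
print(rates)                                              # {'group_a': 0.5, 'group_b': 0.0}
print("gap:", max(rates.values()) - min(rates.values()))  # a large gap signals an unfair difference to mitigate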
A critical aspect of fairness is ensuring that people of all backgrounds have equitable access to information about important life opportunities, like jobs, credit, and housing. Our policies already prohibit advertisers from using our ad products to discriminate against individuals or groups of people. However, even with neutral targeting options and model features, factors such as people’s interests, their activity on the platform, or competition across all ad auctions for different audiences could affect how ads are distributed to different demographic groups. That’s why we’ve developed a novel use of machine learning technology to help distribute ads in a more equitable way on our apps.
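To make the idea concrete, here is a minimal, purely illustrative sketch of measuring delivery skew: it compares the demographic make-up of the people an ad was actually delivered to with the make-up of its eligible audience and flags differences beyond a tolerance. The group names, counts, and threshold are hypothetical, and this is not a description of Meta's actual ad delivery system.

def delivery_skew(eligible_counts, delivered_counts):
    """Return, per group, the delivered share minus the eligible-audience share."""
    eligible_total = sum(eligible_counts.values())
    delivered_total = sum(delivered_counts.values())
    return {
        group: delivered_counts.get(group, 0) / delivered_total
               - eligible_counts[group] / eligible_total
        for group in eligible_counts
    }

eligible = {"group_a": 5000, "group_b": 5000}   # hypothetical eligible audience
delivered = {"group_a": 700, "group_b": 300}    # hypothetical actual delivery
for group, skew in delivery_skew(eligible, delivered).items():
    if abs(skew) > 0.05:                         # hypothetical tolerance
        print(f"{group}: delivery differs from the eligible share by {skew:+.1%}; adjust delivery")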
In 2022, we assembled a cross-disciplinary team, including people from our Civil Rights, Engineering, AI Research, Policy, and Product teams, to better understand problematic content associations in several of our end-to-end systems and to implement technical mitigations that reduce the chance of such associations occurring in the AI models that power our platforms.
As part of this collaborative effort, we carefully constructed and systematically reviewed a knowledge base of interest topics for use in advanced mitigations that more precisely target problematic associations. As more research is done in this area and shared with the wider community, we expect to build on this progress and continue improving our systems.
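A heavily simplified sketch of how a curated knowledge base of topics can be used as a screen on candidate associations appears below. The topic names and pairings are hypothetical, and this only illustrates the general technique, not how Meta's mitigations are implemented.

# Hypothetical curated knowledge base of topic pairings judged problematic.
PROBLEMATIC_ASSOCIATIONS = {
    ("topic_x", "topic_y"),
    ("topic_x", "topic_z"),
}

def is_allowed(content_topic, candidate_topic):
    """Screen out candidate pairings that appear in the curated knowledge base."""
    return (content_topic, candidate_topic) not in PROBLEMATIC_ASSOCIATIONS

# Only pairings outside the knowledge base survive the screen.
candidates = ["topic_y", "topic_w"]
allowed = [t for t in candidates if is_allowed("topic_x", t)]
print(allowed)  # ['topic_w']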
AI-driven feeds and recommendations are powerful tools for helping people find the people and content they are most interested in, but we want to make sure that people can manage their experience in ways that don't necessarily rely on AI-based ranking.
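One way to offer that kind of control, sketched below with a hypothetical post list and ranking score, is to let people switch to a purely chronological ordering that bypasses model-based ranking entirely.

from datetime import datetime

def order_feed(posts, ranking_score=None, use_chronological=False):
    """Order posts newest-first when the person opts out of model-based ranking."""
    if use_chronological or ranking_score is None:
        return sorted(posts, key=lambda p: p["created_at"], reverse=True)
    return sorted(posts, key=ranking_score, reverse=True)  # hypothetical relevance score

posts = [
    {"id": 1, "created_at": datetime(2023, 5, 1)},
    {"id": 2, "created_at": datetime(2023, 5, 3)},
]
print([p["id"] for p in order_feed(posts, use_chronological=True)])  # [2, 1]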
Because AI systems are complex, it is important that we develop documentation that explains how systems work in a way that experts and nonexperts alike can understand.
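For example, plain-language documentation can be kept in a structured form so the same facts about a system can be presented to experts and nonexperts alike. The fields and wording below are a hypothetical illustration, not the schema of any actual Meta documentation.

from dataclasses import dataclass, field

@dataclass
class SystemDoc:
    name: str
    purpose: str                                   # what the system does, in plain language
    signals: list = field(default_factory=list)    # what the system takes into account
    controls: list = field(default_factory=list)   # how people can adjust the experience

    def summary(self):
        return (f"{self.name}: {self.purpose}.\n"
                f"It considers: {', '.join(self.signals)}.\n"
                f"You can adjust it by: {', '.join(self.controls)}.")

doc = SystemDoc(
    name="Example feed ranking system",
    purpose="orders posts so those you are most likely to find relevant appear first",
    signals=["who posted it", "how you interacted with similar posts"],
    controls=["switching to a chronological feed", "snoozing or hiding posts"],
)
print(doc.summary())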
Our support of Open Loop
The rapid advance of emerging technologies makes it difficult to fully understand and anticipate how they might eventually impact communities around the world. That is one reason we support Open Loop, a global program initiated by Meta that brings policymakers and technology companies together to prototype and test policy approaches for AI and other emerging technologies.