Responsible AI
FAIR progress and learnings across socially responsible AI research
November 30, 2023

Today, we’re excited to share more about the approach we’re taking in Meta’s Fundamental Artificial Intelligence Research (FAIR) team to build socially responsible AI. AI models don’t exist in isolation; they’re part of an ecosystem that people interact with, and within, every day. It has become increasingly crucial to make our models not only fair and robust but also transparent, so that together we can build AI that works well for everyone.

We’re sharing updates from three categories of socially responsible AI research at FAIR: technical understanding, contextual understanding, and evaluation and benchmarking. Increasing technical understanding helps us determine when, why, and how models raise responsibility concerns. Building inclusive and context-aware models further grounds that understanding in diverse social contexts. And building holistic benchmarks and evaluation tools allows the field to measure and track progress.

Building tools and tests for fairness

Advances in fairness research can help produce AI innovation that works well for everyone, regardless of demographic characteristics. That’s why FAIR is building tests and tools that aim to minimize potential bias and make AI more inclusive and accessible.

We’re continuing our work to create and distribute more diverse datasets that represent a wide range of people and experiences. We released the Casual Conversations v2 dataset, a consent-driven, publicly available resource that enables researchers to better evaluate the fairness and robustness of certain types of AI models. We also introduced FACET (FAirness in Computer Vision EvaluaTion), a new comprehensive benchmark for evaluating the fairness of computer vision models across classification, detection, instance segmentation, and visual grounding tasks.
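For a sense of what this kind of disaggregated evaluation looks like in practice, here is a minimal sketch that computes accuracy separately for each annotated demographic group and reports the gap between the best- and worst-served groups. The function names, labels, and groups are illustrative placeholders, not the FACET or Casual Conversations v2 tooling.

```python
# Minimal sketch of disaggregated evaluation, assuming you already have
# per-example predictions, ground-truth labels, and demographic annotations.
# These names are illustrative; they are not the FACET or
# Casual Conversations v2 APIs.
from collections import defaultdict

def accuracy_by_group(predictions, labels, groups):
    """Compute accuracy separately for each annotated demographic group."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        correct[group] += int(pred == label)
    return {g: correct[g] / total[g] for g in total}

def worst_group_gap(per_group_accuracy):
    """A simple fairness signal: the spread between best- and worst-served groups."""
    values = per_group_accuracy.values()
    return max(values) - min(values)

if __name__ == "__main__":
    preds  = ["cat", "dog", "dog", "cat", "dog"]   # hypothetical model outputs
    labels = ["cat", "dog", "cat", "cat", "dog"]   # hypothetical ground truth
    groups = ["A",   "A",   "B",   "B",   "B"]     # hypothetical group annotations
    per_group = accuracy_by_group(preds, labels, groups)
    print(per_group)
    print(worst_group_gap(per_group))
```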

In the growing subfield of large language models (LLMs), bias and toxicity metrics cover different demographic axes and text domains. Using just one metric doesn’t provide a full picture, so we developed Robust Bias Evaluation of Large Generative Language Models (ROBBIE), a tool that compares six different prompt-based bias and toxicity metrics across 12 demographic axes and five different LLMs. The combination of these metrics enables a better understanding of bias and toxicity in the models being compared. It also allows us to explore the frequency of demographic terms in the texts on which an LLM trains and provides insight into how this could affect potential model biases, as described in our paper.
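As a rough illustration of the idea behind combining metrics, the sketch below scores one model’s completions with several placeholder bias and toxicity scorers and aggregates them per demographic axis. The scorers, axes, and prompts are hypothetical stand-ins, not the actual ROBBIE metrics or code.

```python
# Minimal sketch of combining several prompt-based bias/toxicity metrics,
# in the spirit of ROBBIE. The scorers and axes below are placeholders,
# not the actual ROBBIE metrics or implementation.
from statistics import mean

def toxicity_score(completion: str) -> float:
    """Placeholder: a real system would call a toxicity classifier here."""
    return 0.0

def negative_regard_score(completion: str) -> float:
    """Placeholder: a real system would call a regard classifier here."""
    return 0.0

METRICS = {"toxicity": toxicity_score, "negative_regard": negative_regard_score}

def score_model(generate, prompts_by_axis):
    """Run one model over prompts grouped by demographic axis and aggregate
    each metric per axis, so different models can be compared side by side."""
    results = {}
    for axis, prompts in prompts_by_axis.items():
        completions = [generate(p) for p in prompts]
        results[axis] = {
            name: mean(metric(c) for c in completions)
            for name, metric in METRICS.items()
        }
    return results

if __name__ == "__main__":
    dummy_model = lambda prompt: prompt + " ..."   # stand-in for an LLM
    prompts = {"gender": ["The woman worked as", "The man worked as"],
               "age": ["The teenager was described as"]}
    print(score_model(dummy_model, prompts))
```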

Representative generative AI should produce content that is consistent, realistic, diverse, and inclusive. DIG In focuses on evaluating gaps in the quality and diversity of content generated by text-to-image models across geographic regions. After auditing five state-of-the-art text-to-image models using DIG In, our results suggest that progress in image generation quality has come at the cost of real-world geographic representation. The insights we gathered helped identify important areas for improvement, such as reducing background stereotypes and ensuring that diversity prompting doesn’t hurt image consistency.
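A minimal sketch of the kind of regional audit this enables is shown below: generated images are grouped by region, scored for quality and diversity with placeholder functions (stand-ins for, say, an embedding-based quality score and a pairwise-distance diversity measure), and the spread between regions is reported. This is not the DIG In implementation, just an illustration of the measurement.

```python
# Minimal sketch of measuring regional gaps in generated-image quality and
# diversity, in the spirit of DIG In. The scoring functions are placeholders,
# not the actual DIG In code.
from statistics import mean

def image_quality(image) -> float:
    """Placeholder: realism/consistency score for one generated image."""
    return 0.0

def set_diversity(images) -> float:
    """Placeholder: diversity score for a set of images, e.g., mean pairwise
    distance between image embeddings."""
    return 0.0

def regional_gaps(images_by_region):
    """Aggregate quality and diversity per region, then report the spread
    between the best- and worst-served regions for each measure."""
    per_region = {
        region: {"quality": mean(image_quality(im) for im in images),
                 "diversity": set_diversity(images)}
        for region, images in images_by_region.items()
    }
    gaps = {
        measure: max(r[measure] for r in per_region.values())
               - min(r[measure] for r in per_region.values())
        for measure in ("quality", "diversity")
    }
    return per_region, gaps
```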

Fairness and privacy are often considered to be in conflict, since most fairness methods need access to sensitive information. We’ve developed a new paradigm for evaluating group fairness that uses social networks to alleviate this tension. The key observation of this work is that homophily in social networks lets us define group fairness without access to any group information. This allows us to mitigate unfair machine learning outcomes by adjusting them according to the similarity between users that is induced by the network structure. Importantly, this approach works without access to group information and without ever inferring sensitive information. As such, social network information helps respect users’ privacy while enabling fair machine learning.
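To make the idea concrete, here is a small sketch that adjusts model scores toward the average score of each user’s network neighbors, so that similarly connected users receive similar outcomes, without touching any demographic attributes. It illustrates the general principle rather than the exact method in the paper.

```python
# Minimal sketch of using social-network structure instead of group labels:
# scores are smoothed toward the average score of each user's neighbors,
# so similarly connected users receive similar outcomes. This illustrates
# the general idea, not the paper's exact algorithm.
def smooth_scores(scores, edges, weight=0.5, iterations=10):
    """scores: {user: model_score}; edges: iterable of (u, v) friendships.
    Returns scores adjusted toward each user's network neighborhood,
    without ever using or inferring sensitive attributes."""
    neighbors = {u: set() for u in scores}
    for u, v in edges:
        neighbors[u].add(v)
        neighbors[v].add(u)
    current = dict(scores)
    for _ in range(iterations):
        updated = {}
        for u, s in current.items():
            if neighbors[u]:
                neighborhood = sum(current[v] for v in neighbors[u]) / len(neighbors[u])
                updated[u] = (1 - weight) * s + weight * neighborhood
            else:
                updated[u] = s
        current = updated
    return current

if __name__ == "__main__":
    raw = {"a": 0.9, "b": 0.2, "c": 0.85, "d": 0.1}          # hypothetical scores
    friendships = [("a", "c"), ("b", "d"), ("a", "b")]        # hypothetical network
    print(smooth_scores(raw, friendships))
```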

Promoting transparency, safety, and responsibility

Generative AI is empowering people to quickly create vibrant videos, images, audio, text, and more, all from a simple prompt. These new creative tools are also inspiring people to share their creations with friends, family, and followers on social media. While there is much to be excited about, it’s important that we do our part to reduce the possibility of these tools being misused.

We developed Stable Signature, a new watermarking method to distinguish when images are generated by open-source AI. While the watermark is invisible to the naked eye, it can be detected by algorithms—even if the content has been edited. We include similar watermarks in speech samples generated by SeamlessM4T v2, our foundational translation model for text and speech. The watermarking enables pinpointing AI-generated segments within a longer audio snippet. This precision is particularly important for speech, where modifying a single word may change the entire meaning of a phrase. We further detail our watermarking approach for images, speech, and text models in recent releases.
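For intuition on the detection side, the sketch below checks whether the bits recovered from an image agree with an expected key above a bit-accuracy threshold. The decoder is a placeholder (the real system relies on a learned watermark extractor), and this is not the Stable Signature API.

```python
# Minimal sketch of the detection side of an invisible watermark, in the
# spirit of Stable Signature. The extractor below is a placeholder; the real
# system uses a learned extractor network, not this function.
def extract_bits(image) -> list[int]:
    """Placeholder: recover the embedded bit string from an image."""
    return [0] * 48

def matches_key(image, key_bits, threshold=0.9) -> bool:
    """Declare an image AI-generated (and traceable to a model/key) when the
    recovered bits agree with the expected key above a set bit accuracy,
    which keeps detection robust to moderate edits such as crops or compression."""
    recovered = extract_bits(image)
    agreement = sum(int(a == b) for a, b in zip(recovered, key_bits)) / len(key_bits)
    return agreement >= threshold
```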

What’s next

Our socially responsible AI efforts are propelled by a cross-disciplinary team whose mission is to help ensure that research at FAIR benefits people and society. The key here is collaborating with the entire AI community, from corporations to academia, to consistently share and align on metrics and benchmark considerations. Even the best responsible AI research would lack impact unless it is adopted and supported by the broader AI community. That’s why we’ve partnered with MLCommons to collaborate in the AI Safety Working Group as we work jointly with industry and university leaders to develop AI safety tests and further define standard AI safety benchmarks.

We’re also working with the Partnership on AI (PAI) and support the Framework for Collective Action on Synthetic Media and Guidance for Safe Foundation Model Deployment. As the field continues to evolve, we know we can’t do this alone. Further collaboration will be essential to ensuring the safest and most responsible AI research.

