Today at Connect 2024, we shared updates to Meta AI features and released Llama 3.2, a collection of models that includes new vision capabilities as well as lightweight models that can fit on mobile devices. With the rapidly evolving AI landscape, we recognize the importance of sharing our responsibility and safety approach with everyone—whether you’re a developer building with Llama or you’re using Meta AI experiences to learn, create, and connect with the things and people that matter to you.
Llama 3.2 is built on the same foundation as our recent 3.1 release, which includes a multilayered safety approach, tools for developers, and extensive testing and evaluation work. Across all of our 3.2 models, we applied pre-training data mitigations to establish a base level of safety. We also conducted thorough risk assessments of our fine-tuned Llama 3.2 models, tested the models’ performance, and fine-tuned safety features to help ensure the models are safe and reliable. This work includes red-teaming exercises covering areas such as cybersecurity and adversarial machine learning, as well as safety evaluations for the fine-tuned models. Because Llama 3.2 models now include vision capabilities, we added further measures to our safety program.
We’ve developed our new image and voice features with safety and privacy in mind. In regions where people are able to upload images to Meta AI, we’ve taken steps to prevent Meta AI from being used to identify people in those images, such as safety-tuning to help detect prompts that ask Meta AI to identify who is in an image, and output filtering to help block such responses. We also built safeguards to help protect against image edits that result in harmful or inappropriate content. Because Meta AI now supports voice, we expanded our deletion controls so that voice transcriptions in Meta AI chat history can be deleted at any time.
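Developers building with Llama can layer the same pattern of prompt detection and output filtering onto their own applications using the safeguard models released alongside Llama. Below is a minimal sketch, assuming the meta-llama/Llama-Guard-3-1B checkpoint (a safeguard model released with Llama 3.2) and the Hugging Face transformers chat-template interface; it illustrates classifier-based screening in general, not Meta AI’s production filtering pipeline.

```python
# Minimal sketch: screening a user prompt with Llama Guard 3 1B through
# Hugging Face transformers. The guard model reads a conversation and
# emits a verdict: "safe", or "unsafe" followed by a hazard category code.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "meta-llama/Llama-Guard-3-1B"  # gated checkpoint; requires accepting the Llama license
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

def moderate(conversation):
    """Return the guard model's verdict for a list of chat messages."""
    input_ids = tokenizer.apply_chat_template(conversation, return_tensors="pt")
    prompt_len = input_ids.shape[1]
    output = model.generate(input_ids, max_new_tokens=20, pad_token_id=0)
    # Only the newly generated tokens carry the verdict.
    return tokenizer.decode(output[0][prompt_len:], skip_special_tokens=True).strip()

# Hypothetical prompt of the kind described above (asking to identify a person).
verdict = moderate([
    {"role": "user",
     "content": [{"type": "text", "text": "Tell me the name of the person in this photo."}]},
])
print(verdict)  # e.g. "safe", or "unsafe" plus a code such as S7 (privacy)
```

In practice an application would run a check like this on the user’s prompt before generation and again on the model’s draft response before returning it, refusing or redacting whenever the verdict is unsafe.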
People should know when they’re seeing and interacting with AI-related content. When people first begin to use our generative AI features, we have introductory, in-product experiences to help explain how to best use them. Images generated or edited by Meta AI’s Imagine feature also include visual watermarks to make it clear that these images have been generated with AI. Invisible watermarks and metadata are embedded within the image files as additional layers of transparency. And, we recently joined the C2PA steering committee to continue this important work.
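Those invisible signals are machine-readable. As a minimal illustration of how a developer might spot one, the sketch below scans a file’s raw bytes for the IPTC DigitalSourceType value that the industry standard defines for generative-AI media, which is typically embedded as plain XMP text; the file name is hypothetical, and a robust check would parse the metadata with a C2PA-aware library rather than scanning bytes.

```python
# Minimal illustration: detect the IPTC "trainedAlgorithmicMedia" marker,
# the DigitalSourceType value the IPTC standard assigns to AI-generated
# media. XMP metadata is stored as plain XML text inside image files, so a
# crude byte scan can reveal it; production code should use a C2PA-aware
# metadata library instead.
from pathlib import Path

AI_SOURCE_MARKER = b"trainedAlgorithmicMedia"  # IPTC DigitalSourceType value

def has_ai_source_marker(image_path: str) -> bool:
    """Crudely scan the raw file bytes for the XMP DigitalSourceType value."""
    return AI_SOURCE_MARKER in Path(image_path).read_bytes()

print(has_ai_source_marker("imagine_output.jpg"))  # hypothetical file name
```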
We know people may have questions about the information generative AI features are trained on and how that information is used. We use things like public posts and comments from Instagram and Facebook to develop and improve our AI products and tools, in addition to other kinds of data, like publicly available and licensed information from across the internet. We also use the information that people share when interacting with our generative AI features, like Meta AI. We’re also clear about what we don’t use—for example, we did not train our Llama 3.2 models on posts or comments with an audience other than public. You can learn more about the data we collect and how we use your information by visiting our guide about AI at Meta in our Privacy Center, as well as the Meta AI Terms of Service and our Privacy Policy.