We’re committed to providing developers with the best possible tools and resources to build secure AI applications. Developers can access our latest Llama Protection tools for building with Llama by visiting Meta’s Llama Protections page, Hugging Face, or GitHub.
Helping the defender community leverage AI in security operations
At Meta, we use AI to strengthen our security systems and defend against potential cyberattacks. We’ve heard from the community that they want access to AI-enabled tools that will help them do the same. That’s why we’re sharing updates to help organizations evaluate the efficacy of AI systems in security operations, and announcing the Llama Defenders Program for select partners. We believe this is an important effort to improve the robustness of software systems as more capable AI models become available.
Building new technology to enable private processing for AI requests
We’re sharing a first look at Private Processing, our new technology that will help WhatsApp users leverage AI capabilities for things like summarizing unread messages or refining them, while keeping messages private so that neither Meta nor WhatsApp can access them. More information on our security approach to building this technology, including the threat model that guides how we identify and defend against potential attack vectors, can be found on our Engineering blog. We’re working with the security community to audit and improve our architecture, and we will continue to build and strengthen Private Processing in the open, in collaboration with researchers, before we launch it in product.
Looking ahead
We hope that the set of AI updates shared here will make it even easier for developers to build with Llama, help organizations enhance their security operations, and enable stronger privacy guarantees for certain AI use cases. We look forward to continuing this work and sharing more in the future.