December 2, 2022
Many of the experiences people enjoy on Facebook and Instagram are powered by artificial intelligence (AI). A number of them, like Assistant, Avatars, and AR effects, cannot be powered by server-side AI due to latency, network bandwidth, and other constraints. Running AI on-device — that is, directly on a phone, tablet, or even a pair of smart glasses — offers huge advantages over constantly sending data back to a server. It’s faster, and it creates a privacy-enhancing experience for people who use our platforms. However, on-device AI presents new challenges: it requires coping with devices that have smaller batteries, far less powerful processors, and less memory than a server in a data center.
Today, we are sharing more information about how Meta is using PyTorch, an open source machine learning framework we developed alongside the AI community that is now part of the Linux Foundation, to bring more AI-powered experiences to personal devices.
To provide the best AI-based product experiences, models need to be optimized for the particular constraints of the devices where they will be deployed. Teams iterate on their AI models to achieve state-of-the-art battery life, power consumption, compute, size, and memory utilization.
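One common technique for meeting tight size and memory budgets of this kind is quantization. As a minimal sketch (the toy model below is illustrative, not one of Meta's production models), PyTorch's dynamic quantization API converts the weights of selected layers to 8-bit integers, shrinking the model and speeding up CPU inference:

```python
import torch
import torch.nn as nn

# A small stand-in model; production on-device models are far larger.
model = nn.Sequential(
    nn.Linear(128, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)
model.eval()

# Dynamic quantization stores Linear weights as int8 and quantizes
# activations on the fly, reducing model size and memory traffic.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 128)
with torch.no_grad():
    out = quantized(x)
```

Dynamic quantization is only one point in the design space; static quantization and pruning trade off accuracy, size, and latency differently, which is why teams iterate per device.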
This is where PyTorch can help. PyTorch has developed infrastructure that lets developers execute AI models efficiently and performantly across a wide variety of devices. The PyTorch Mobile runtime is small enough to fit on many mobile devices while still supporting the variety of operations used to author AI models, with optimizations for different compute resources. In 2021, we announced the migration of all of our AI systems to PyTorch. PyTorch now powers the machine learning (ML) stack and workflow tools used across all Meta products to scale the adoption of on-device AI, using the same solution (PyTorch Mobile) that has been available in open source since release 1.9.
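As a sketch of the open source PyTorch Mobile workflow, a model is converted to TorchScript so it can run without Python, passed through the mobile optimizer, and saved in the lite-interpreter format that the on-device runtime loads. The model and filename here are illustrative:

```python
import torch
import torch.nn as nn
from torch.utils.mobile_optimizer import optimize_for_mobile

# Illustrative model; any scriptable nn.Module works the same way.
class TinyClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(64, 4)

    def forward(self, x):
        return torch.softmax(self.fc(x), dim=-1)

model = TinyClassifier().eval()

# 1. Convert to TorchScript so the model no longer needs Python.
scripted = torch.jit.script(model)

# 2. Apply mobile-specific graph optimizations (e.g., op fusion).
mobile_module = optimize_for_mobile(scripted)

# 3. Save in the lite-interpreter format consumed by PyTorch Mobile.
mobile_module._save_for_lite_interpreter("tiny_classifier.ptl")
```

The resulting `.ptl` file is what ships inside a mobile app and is loaded by the PyTorch Mobile runtime on Android or iOS.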
We have seen exponential growth in these on-device production use cases. On Meta’s family of mobile applications, PyTorch on-device powers:
70 billion daily inferences
50 on-device AI models across different mobile applications
All on-device ML in Instagram, Facebook, Messenger, and Meta Spark
Our on-device AI also powers several critical use cases across Meta’s mobile applications. Here are a few examples:
Real-time video calling: Our video background selection models let people select a blurred or unique AR background for their video calls, so they have a more private communication experience in their own space.
Privacy-preserving ML: Our models are used on-device to rank a user’s feed and friends list in a more private way on products like Messenger, keeping the data needed for these predictions on the person’s device.
Business integrity: Our text and image models detect and shut down cloaking feed ads, one way that malicious parties try to reach people on our platforms.
Smart Target Quick Promotion: Quick Promotion is a platform that enables Facebook to communicate with people in a timely, well-targeted way through products like Feed, notifications, and Messenger. This includes product information and public announcements. With PyTorch, we shipped a smarter targeting algorithm with on-device AI for quick promotions.
PyTorch also powers more engaging AI experiences on Instagram Reels, where people can “try on” AR effects and use them to create content, such as changing their background or adding fun effects to their selfies. On the recently launched Meta Quest Pro, Reality Labs leverages PyTorch to enable capabilities such as hand and eye tracking, Natural Facial Expressions, and tracked keyboard.
We believe AI is driving the creation of new products and experiences, and can further improve the capabilities of existing ones. By bringing AI to run directly on devices, PyTorch opens the door to further innovation, with increased reliability, interactivity, and privacy.
Based on our experience supporting applications and users across a wide variety of devices, we believe we can push the state of the art of on-device AI even further. Next year, we will be releasing the next-generation PyTorch framework for on-device AI, which will deliver a step-function improvement in efficiency, performance, and portability across more devices (including microcontrollers), all while maintaining consistency with the rest of PyTorch for ease of deployment.