Our latest AI advancements
Introducing Muse Spark
Safety and Preparedness Report
Segment Anything 3

With SAM 3, you can use text and visual prompts to precisely detect, segment and track any object in an image or video.
DINOv3

DINOv3 scales self-supervised learning to train a more powerful, versatile model.
More from Meta's FAIR Team
Meta Motivo
Movie Gen
Seamless Communication
AI Chemistry
Alignment and preparedness
As we chart the path towards superintelligence, AI tools can deeply understand your world and help you get things done faster. As a result, reliability, security and user protections are more important than ever.
What is the Advanced AI Scaling Framework?
What risks does the framework focus on?
How do you test your models for safety?
How has your safety approach evolved?
What is a Preparedness Report?
Try experimental demos
How Meta is applying cutting-edge AI research to real-world interactions.


Create video cutouts and effects with a few clicks
For researchers and developers
Meta FAIR is advancing research and delivering breakthroughs in a variety of areas.
01.
Perception
The north star goal of our Perception research teams is to enable general AI systems to perceive the visual world to inform action, communication and generation. To achieve this goal, we're developing next-generation perception models capable of understanding images and videos not as pixels, but as a capture of visual entities like people, objects, activities and their spatial and temporal relationships.
02.
Communication & Language
We advance AI capabilities in expressive communication, social interaction and use of language. Through foundational research in natural language processing and multimodal AI, we develop systems that enable more natural, meaningful interactions between humans and machines.
03.
Embodiment & Actions
We advance the fundamental capabilities needed for AI to understand and act within the physical and digital world. From robots that can move around, interact with objects and help accomplish household tasks, to wearable glasses that understand the real and digital world, we hope to unlock a wide variety of future agents that help humans do more throughout all aspects of their lives.
04.
Alignment
Our research focuses on aligning models and decisions with human intent and societal interests through deeper fundamental understanding and enhanced steerability and efficiency of AI models. This pillar is at the forefront of research on AI for science and AI for society.
05.
Core Learning & Reasoning
We conduct fundamental research in pre-training methods and new architectural paradigms that enable foundational models to learn and reason with agility and efficiency across novel downstream challenges. Our work expands the frontier of approaches such as world models, non-autoregressive architectures and memory-augmented models to unlock new capabilities in adaptive intelligence.