Research
Celebrating 10 years of FAIR: A decade of advancing the state-of-the-art through open research
November 30, 2023

Today we’re celebrating the 10-year anniversary of Meta’s Fundamental AI Research (FAIR) team—a decade of advancing the state of the art in AI through open research. During the past 10 years, the field of AI has undergone a profound transformation, and through it all, FAIR has been a source of many AI research breakthroughs, as well as a beacon for doing research in an open and responsible way.

It’s FAIR’s dedication to responsibility, openness, and excellence that first drew me here six years ago. Like many others, I was won over by the promise of working with the best researchers in the world amid a culture of respect and integrity and with an ambition to do AI research that would transform the world for the better. I never looked back. Of course, I wasn’t the first one here—quite a few minds at Meta preceded me.

The past decade

The launch of FAIR dates back to late 2013. In those days, as today, the competition for AI talent was fierce, and Mark Zuckerberg himself made the trip to the NeurIPS conference to convince researchers to join this new research organization. Partnering with VP and Chief AI Scientist Yann LeCun, he assembled a team of some of the most talented researchers in the nascent field of deep learning. Over the years, hundreds of brilliant minds, conducting bleeding-edge research with far-reaching impact, have joined the effort and enabled us to make progress on many of the hardest problems in AI.

It’s fascinating to see what a decade of progress looks like. Consider, for example, what’s happened in the world of object detection. It was only a little over 10 years ago that neural networks were able to recognize thousands of objects in images for the first time with AlexNet. Faster R-CNN brought us real-time object detection in 2015, followed by object instance segmentation with Mask R-CNN in 2017 and a unified architecture for instance and semantic segmentation with Panoptic Feature Pyramid Networks (FPN) in 2019. In the span of just seven years, FAIR contributed to tremendous progress on one of the most fundamental problems in AI. And in 2023, we can literally Segment Anything. Each of these moments directly resulted in a step change across several downstream applications and products created by our colleagues at Meta, as well as by people around the world.
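For a concrete sense of how far this line of work has come, the Segment Anything Model is available as an open-source Python package. The short sketch below shows one way it can be used to generate masks for an arbitrary image; the checkpoint filename and image path are illustrative placeholders for files you would download or supply yourself.

# A minimal sketch of automatic mask generation with the open-source
# segment-anything package (pip install segment-anything opencv-python).
# The checkpoint and image paths are illustrative placeholders.
import cv2
from segment_anything import SamAutomaticMaskGenerator, sam_model_registry

# Load a pretrained SAM backbone (ViT-H variant) from a downloaded checkpoint.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")

# The automatic generator segments everything in the image, no prompts needed.
mask_generator = SamAutomaticMaskGenerator(sam)

# SAM expects an HxWx3 RGB image as a NumPy array.
image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(image)  # list of dicts: 'segmentation', 'area', 'bbox', ...

print(f"Generated {len(masks)} instance masks")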

We have seen similar trajectories across many other problems in AI. Another great example is the last five years of our work on machine translation, where we were among the first to pioneer techniques for unsupervised machine translation, which allowed us to introduce a model for translation across 100 languages without relying on English. This led directly to our No Language Left Behind breakthrough and, most recently, to expanding text-to-speech and speech-to-text technology to more than 1,000 languages. To achieve these results, we foster a constant flow of ideas between our own research teams, the broader research community (to share datasets, tasks, and competitions), the product teams across Meta who deploy the technology to serve billions of people around the world, and external partners such as Wikipedia, which can use this technology to enhance their own services.

It’s easy in retrospect to identify the contributions that have passed the test of time. But earlier in the journey, there’s always much more uncertainty. For every breakthrough, there are hundreds of ideas that were explored but didn’t pan out. The timelines I described above are stripped down to just a few snapshots, but the reality is that the progression of research is far denser and messier. Successful research requires embracing that uncertainty, taking on calculated risk, and using our experience and intuition to pursue the most promising hypotheses. This requires vision, intuition, rigor, patience, resources, and solid teamwork!

The present

This has been a phenomenal year for FAIR in terms of research impact. We opened the year with the release of Llama, an open pre-trained large language model. This was followed by several other releases, pushing the state of the art beyond what we could imagine. Our work and researchers won best paper awards at several conferences, including ACL, ICRA, ICML, and ICCV, covering most subareas of AI research. Our work was featured in news outlets across the world and shared millions of times on social media platforms. All of Meta leaned into our open source strategy for the launch of Llama 2. And at Connect, we unveiled new AI products and experiences that are now in the hands of millions of people, the culmination of early research work that was then magnified by Meta’s Generative AI and product teams.

The momentum shows no signs of slowing. Today, we announced new models, datasets, and updates spanning audio generation, translation, and multimodal perception. The successor to Voicebox, Audiobox is advancing generative AI for audio by unifying generation and editing capabilities for speech, sound effects, and soundscapes with a variety of input mechanisms, including natural language prompts. Building on our work with SeamlessM4T, Seamless introduces a suite of AI language translation models that preserve expression and improve streaming. And Ego-Exo4D extends our work on egocentric perception with a foundational dataset and benchmark suite containing both ego- and exocentric views. While the egocentric perspective shows the participant’s point of view, the exocentric views reveal the surrounding scene and context. Together, these two perspectives give AI models a new window into complex human skill.

Meta is uniquely poised to solve AI’s biggest problems: not many companies have the resources or capacity to make the investments we have in software, hardware, and infrastructure to weave learnings from our research into products that billions of people can benefit from. FAIR is a critical piece of Meta’s success, and one of the only groups in the world with all the prerequisites for delivering true breakthroughs: some of the brightest minds in the industry, a culture of openness, and, most importantly, the freedom to conduct exploratory research. This freedom has helped us stay agile and contribute to building the future of social connection.

The future

While much of the progress in AI over the last decade was achieved by a divide-and-conquer approach, breaking up the problem into separate, well-defined tasks, in the next decade we are increasingly looking at ways of putting the puzzle pieces back together to advance AI. The rise of foundation models is just the beginning of this: large models with increasingly general abilities, which we can flexibly adapt to our specific needs and values. World models, which can be used to reason and plan, will become increasingly common, allowing us to overcome limitations of current AI models. Rather than a single AGI, we expect the future to feature numerous and diverse populations of AIs deployed across platforms, which will transform how we work, how we play, how we connect, how we create, and how we live.

Pursuing this path requires that we also have a deep understanding of how to build AI models responsibly, from beginning to end. We remain committed to doing that work safely and responsibly. Our commitment to open science is a key part of this and will continue to be part of FAIR’s DNA. Sharing our work openly, whether it be our papers, code, models, demos, or responsible use guides, helps us set the highest standards of quality and responsibility, which is the best way for us to help the community build better AI solutions. It also directly helps Meta build AI solutions that are safer, more robust, equitable, and transparent, and that can benefit the many different people who use our products around the world.

As I look forward to the next decade for FAIR, I am inspired by our vision and ambition to solve the hardest, most fundamental problems in AI. I am thankful for the many teams and people across Meta who are contributing to our success. And I look forward to seeing what the future holds if we continue to push towards solving AI, while staying true to our culture of responsibility, excellence, and openness!



Written by:
Joelle Pineau
VP, AI Research

