May 19, 2021
The speed at which AI has evolved over the last decade means it’s easy to overlook the significance of individual developments along the way. Things have changed so fast that what seemed like a milestone just a couple of years ago is already outdated. But to understand the progress, it’s important to note those milestones. And as Facebook releases new data today showing AI’s increasing role in enforcing our Community Standards, I wanted to talk about one of those developments.
Last year, our AI team rolled out a new system for automatically predicting whether content on our platforms violates our Community Standards. It was capable of doing something that no automated system could do before: It looked at content more holistically, including images, video, text, and comments in multiple languages, and evaluated it against multiple types of policy violations. It also analyzed the content over time, based on how people interacted with it.
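Here is a minimal sketch, in Python, of what that kind of holistic system looks like from the outside. The names, the policy list, and the stub model are all hypothetical, meant only to show the shape of the interface rather than our actual code.

```python
# Sketch of the holistic interface: one model takes every modality of a
# post together and returns a score for several policy areas at once.
# The names, the policy list, and the stub model are hypothetical.

from dataclasses import dataclass, field

POLICIES = ("hate_speech", "violence", "bullying")

@dataclass
class PostSnapshot:
    text: str = ""
    comments: list[str] = field(default_factory=list)   # in any language
    images: list[bytes] = field(default_factory=list)

def holistic_scores(post: PostSnapshot) -> dict[str, float]:
    # Stand-in for one multimodal, multilingual, multi-policy model.
    # Because text, images, and comments are evaluated jointly, it can
    # catch violations that only emerge from their combination.
    return {policy: 0.0 for policy in POLICIES}   # placeholder output
```

The important change is the signature: every signal about a post goes into one call, and scores for multiple violation types come out together.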
Previous systems for detecting harmful content could each do one pretty specific thing — one could analyze English-language text posts and predict the likelihood they contain hate speech, while another could look at photos and predict the probability they depict violent acts. Each could work on one narrow problem area.
To help figure out whether an individual piece of content violated our rules, our software stitched together the output of all these different AI systems — over the years, we’ve built thousands of them — and made a recommendation. As you can imagine, that’s a lot of stitching, and it has fundamental limitations.
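To make the contrast concrete, here is that stitching pattern reduced to a toy. The two stand-in classifiers and the thresholds are invented for this sketch; the real pipeline combined the outputs of thousands of models.

```python
# The older "stitching" pattern, reduced to a toy: narrow, single-purpose
# models each score one signal, and hand-written glue merges the outputs.
# All names and thresholds here are hypothetical.

def english_hate_speech_score(text: str) -> float:
    # Placeholder standing in for a trained English-only text classifier.
    return 0.05

def violent_image_score(image: bytes) -> float:
    # Placeholder standing in for a trained vision classifier.
    return 0.05

def stitched_recommendation(text: str | None, image: bytes | None) -> str:
    # Glue code: each model sees only its own slice of the post, so the
    # combination step has no shared understanding of the content.
    scores = []
    if text is not None:
        scores.append(english_hate_speech_score(text))
    if image is not None:
        scores.append(violent_image_score(image))
    if any(s > 0.8 for s in scores):
        return "remove"
    if any(s > 0.5 for s in scores):
        return "send_to_human_review"
    return "keep"
```

Every new violation type or language meant another model and more glue, and nothing in the glue could reason about how the signals relate to one another.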
With this new system able to analyze a post’s text, images, and interactions together rather than in isolation, our ability to automatically detect and remove violating content before anyone sees or reports it has increased significantly.
It has also made the teams of people who manually review content far more effective. With an improved ability to evaluate content, our automated tools now do a much better job of identifying priority cases to send for human review. In 2020, tests of this new approach produced significant gains in the efficiency of our prioritization system, with our AI tools focusing moderators’ time on the highest-impact decisions.
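As a rough illustration of that prioritization idea, imagine ranking the review queue by a combined estimate of violation probability and likely reach, instead of first-in-first-out. The weighting below is purely illustrative, not a production formula.

```python
# Toy severity-weighted review queue: items are ranked by an estimate of
# violation probability times expected reach, so clear, viral cases get
# to human reviewers before borderline posts that few people will see.

import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class ReviewItem:
    neg_priority: float              # negated so the min-heap pops highest first
    post_id: str = field(compare=False)

_queue: list[ReviewItem] = []

def enqueue(post_id: str, violation_prob: float, predicted_views: int) -> None:
    priority = violation_prob * predicted_views   # illustrative weighting
    heapq.heappush(_queue, ReviewItem(-priority, post_id))

def next_for_review() -> str:
    return heapq.heappop(_queue).post_id
```

With these toy definitions, a post scored 0.9 with 100,000 predicted views would reach a reviewer ahead of one scored 0.95 with only 40.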
That alone is a significant advance for us, but it reflects a much bigger technological shift. This progress isn’t unique to AI for content moderation, and it isn’t unique to Facebook. You hear similar stories from people across the world who are building cutting-edge AI: Its capabilities are evolving in similarly holistic ways.
I recently heard Andrej Karpathy, the head of AI at Tesla, describe the evolution of the company’s self-driving systems. Previously, he said, images from each of the car’s multiple cameras and sensors would be analyzed individually, with different AI models identifying features like stop signs and lane markings. Then the output of all those systems would be stitched together by software designed to build up an overall model of what’s happening.
Today, he said, there’s a much more holistic approach. The car’s AI system ingests input from all those cameras and sensors, and outputs a model of the surrounding environment — the nearby cars and pedestrians, the lane markings, and the traffic lights. And then the software on top applies the rules, like braking for a red light. Over time, the AI has taken on more and more of the work, and produced a deeper, more complete understanding of the environment.
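The pattern he described can be caricatured in a few lines. This sketch illustrates the architecture only; it is not Tesla’s software, and every name in it is invented.

```python
# Illustration of the pattern only (not Tesla's software): one fused
# perception step turns all sensor input into a single environment
# model, and a thin rule layer acts on that model.

from dataclasses import dataclass

@dataclass
class Environment:
    traffic_light: str = "unknown"   # "red", "green", or "unknown"
    pedestrian_ahead: bool = False

def perceive(camera_frames: list[bytes]) -> Environment:
    # Stand-in for the holistic model: every camera's frames go in
    # together, and one structured view of the world comes out.
    return Environment()

def decide(env: Environment) -> str:
    # The rule layer stays simple because perception did the hard work.
    if env.traffic_light == "red" or env.pedestrian_ahead:
        return "brake"
    return "proceed"
```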
We’ve seen similar dynamics in the AI systems in use at Facebook, including the ones deployed to keep our platforms safe. For example, our AI systems can now build a more holistic view of a group or a page by combining an assessment of multiple posts, comments, images, and videos over time. This allows for a much more sophisticated approach than what was possible even a year ago, when AI was limited to evaluating individual pieces of content on a standalone basis.
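In toy form, that entity-level view might look like the sketch below; the sliding window and the threshold are invented for illustration.

```python
# Toy entity-level assessment: per-item violation scores for a group are
# aggregated over a sliding window, so a sustained pattern of borderline
# content can be flagged even when no single post would be on its own.
# Window size and threshold are invented for illustration.

from collections import deque
from statistics import mean

class GroupAssessment:
    def __init__(self, window: int = 100):
        self.recent_scores = deque(maxlen=window)

    def observe(self, item_violation_prob: float) -> None:
        self.recent_scores.append(item_violation_prob)

    def needs_review(self) -> bool:
        # Trigger on a sustained pattern, not a single spike.
        return len(self.recent_scores) >= 10 and mean(self.recent_scores) > 0.4
```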
This evolution of AI isn’t just helping Facebook enforce our Community Standards; it’s also driving progress across many of the hardest challenges in AI. Our computer vision tools are developing a much deeper understanding of images and video, and our translation systems are making leaps in their ability to comprehend multiple languages at once.
Most important, this increasing sophistication shows no sign of slowing down — in fact, research breakthroughs made over the last year suggest that an extraordinary period of progress in AI is still ahead of us.
Chief Technology Officer