October 5, 2022
AI is the driving force behind a newly launched tool on Facebook that helps people directly change the content they see in their Feed and influence the types of content that are recommended to them. When people select Show More/Show Less on certain posts, we use AI to interpret the intent behind those actions in a nuanced way. The model takes relatively few data points and generalizes from what we do know to decide whether to prioritize or deprioritize a piece of content for a particular person.
Clicking Show More on a post increases the ranking score for it and for similar content, while Show Less decreases it. Because our model learns to generalize well, we can use these aggregate signals to help improve Feeds for everyone, even people who haven’t used Show More/Show Less. Building an AI model that could interpret these signals and generalize well for everyone was an interesting challenge because we needed to deduce intent and generalize from sparse data. Someone might click Show More/Show Less once a month, but they may like or comment on posts every day. Even with far fewer data points, we still needed to train our model to predict preferences and to integrate it with our existing recommendation system.
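To make the mechanics concrete, here is a minimal, purely illustrative sketch of how an explicit Show More or Show Less action could translate into a boost or demotion of a ranking score. The boost and penalty factors, the similarity input, and the function itself are assumptions for the example, not our production logic.

```python
# Illustrative sketch only: how an explicit Show More / Show Less action could
# scale a post's ranking score. All constants and names are hypothetical.
from typing import Optional

SHOW_MORE_BOOST = 1.5    # hypothetical maximum boost for very similar content
SHOW_LESS_PENALTY = 0.5  # hypothetical maximum demotion for very similar content

def adjusted_score(base_score: float, feedback: Optional[str], similarity: float) -> float:
    """Scale a post's base ranking score by explicit feedback.

    `similarity` (0..1) is how close this post is to the one the person rated,
    so similar content is boosted or demoted proportionally.
    """
    if feedback == "show_more":
        return base_score * (1.0 + (SHOW_MORE_BOOST - 1.0) * similarity)
    if feedback == "show_less":
        return base_score * (1.0 - (1.0 - SHOW_LESS_PENALTY) * similarity)
    return base_score

# A post very similar to one the person asked to see less of gets demoted:
print(adjusted_score(10.0, "show_less", similarity=0.9))  # -> 5.5
```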
Our recommendation algorithms already use thousands of signals to surface content that people are most likely to want to see. Show More/Show Less is uniquely helpful because it allows people to directly tell us what they want in their Feeds.
Building machine learning systems that personalize content for the people who use Meta’s products is a complex engineering task. We’ve previously shared more details about how our News Feed algorithm works. Likes and interactions are an important part of that, since they generate billions of examples and indirectly tell us what people like to see when they log on to Facebook. Show More/Show Less offers direct and more nuanced feedback. For example, there are cases where users like a post and then tell us they want to see less of it, which could mean they’ve seen enough content like that.
With Show More/Show Less, we have a few orders of magnitude fewer data points than we do for likes and other interactions. Adding to the challenge, Show More/Show Less feedback is intermittent and not always present for each person, which means we don’t get the same depth of past history to help us predict future behavior. So we have to be more efficient with data and build a model that can generalize well from what we do know. For example, clicking Show Less on a cousin’s post about their new convertible might mean “show me fewer posts by that person” or “show me fewer posts about cars.” An effective system must be able to understand the intent behind the click.
To do this, we use deep learning models to generate user and post-level embeddings (sets of numbers), which help predict the types of content a person wants to see more of or less of in their Feed. A user embedding captures a person’s tastes, while the content embedding captures the essence of what a post is about. We use a neural network architecture to train these embeddings.
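As a rough illustration of this idea, the following is a simplified two-tower sketch in PyTorch. The architecture, layer sizes, feature dimensions, and dot-product scoring are assumptions for the example rather than a description of the production model.

```python
# Simplified two-tower sketch (illustrative assumptions, not the production model).
import torch
import torch.nn as nn

class TwoTowerModel(nn.Module):
    def __init__(self, user_feat_dim: int, post_feat_dim: int, embed_dim: int = 64):
        super().__init__()
        # One tower turns a person's interaction features into a user embedding...
        self.user_tower = nn.Sequential(
            nn.Linear(user_feat_dim, 128), nn.ReLU(), nn.Linear(128, embed_dim)
        )
        # ...and the other turns a post's features into a content embedding.
        self.post_tower = nn.Sequential(
            nn.Linear(post_feat_dim, 128), nn.ReLU(), nn.Linear(128, embed_dim)
        )

    def forward(self, user_feats: torch.Tensor, post_feats: torch.Tensor) -> torch.Tensor:
        user_emb = self.user_tower(user_feats)    # captures the person's tastes
        post_emb = self.post_tower(post_feats)    # captures what the post is about
        return (user_emb * post_emb).sum(dim=-1)  # higher score = "show more"
```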
The challenge with training these embeddings is that deep learning models typically require a large amount of data to be trained effectively, while we want to be able to cater to a user’s tastes without them having to provide a lot of feedback. To solve this, we first pretrain the model on other related tasks, such as predicting what users would like or share. We then fine-tune the embeddings on Show More/Show Less data. This allows the model to transfer information learned on the “Like” data set and apply it to the Show More/Show Less task. Through this process, we are able to take advantage of the much larger data volume from likes, while still optimizing for the more precise Show More/Show Less signal.
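Continuing the sketch above, the pretrain-then-fine-tune recipe could look roughly like this, reusing the TwoTowerModel defined earlier. The synthetic loaders, epoch counts, and learning rates are placeholders, not the actual training configuration.

```python
# Sketch of the transfer-learning recipe: pretrain on abundant Like/share data,
# then fine-tune on the much smaller Show More / Show Less data.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def synthetic_loader(n: int, user_dim: int = 256, post_dim: int = 256) -> DataLoader:
    # Random stand-in data so the example runs end to end.
    return DataLoader(
        TensorDataset(
            torch.randn(n, user_dim),
            torch.randn(n, post_dim),
            torch.randint(0, 2, (n,)).float(),
        ),
        batch_size=32,
    )

def train(model: nn.Module, loader: DataLoader, epochs: int, lr: float) -> None:
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for user_feats, post_feats, labels in loader:
            opt.zero_grad()
            loss = loss_fn(model(user_feats, post_feats), labels)
            loss.backward()
            opt.step()

model = TwoTowerModel(user_feat_dim=256, post_feat_dim=256)

# Stage 1: pretrain on the plentiful Like/share signal (did the person like it?).
train(model, synthetic_loader(10_000), epochs=1, lr=1e-3)

# Stage 2: fine-tune on the much sparser Show More / Show Less signal, with a
# lower learning rate so the pretrained embeddings are largely preserved.
train(model, synthetic_loader(500), epochs=1, lr=1e-4)
```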
For people who have provided Show More/Show Less feedback, their embedding captures that preference and takes it into account when predicting whether they would want to see more or less of a new post. For people who have not provided any feedback, we can still use this technique to generate embeddings from their other interactions on Facebook and make better predictions about what they would like to see. This approach addresses both scenarios: it gives people more control over their own Feed when they provide feedback, and it improves the experience for everyone on Facebook by generalizing from users who provided feedback to other users with similar interaction histories.
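In terms of the sketch above, that means the same scoring path is used whether or not a person has ever clicked Show More or Show Less; the user tower still produces an embedding from their other interaction features.

```python
# Usage sketch, reusing `model` from above. The random tensors stand in for a
# person's interaction-history features and a candidate post's features.
with torch.no_grad():
    user_feats = torch.randn(1, 256)  # interaction history, even with no explicit feedback
    post_feats = torch.randn(1, 256)  # a candidate post
    score = model(user_feats, post_feats)
print(float(score))  # higher means the model predicts the person would pick Show More
```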
Show More/Show Less also presents another training challenge. Generally, we train models where user interaction is the positive label and lack of interaction is the negative label. For Show More/Show Less, since many people don’t often give feedback, using the same training strategy would yield a model that makes very low predictions for many users and thus would not meaningfully affect the ranking of their content. We instead trained the model with Show More as a positive and Show Less as a negative. The advantage is that now the model will make a call for every person (and every piece of content) about whether the person is more likely to choose Show More or Show Less if they were to rate that content.
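A simplified illustration of that labeling choice, with hypothetical event fields and helper names: only explicit Show More and Show Less events become training examples, rather than treating every impression without feedback as a negative.

```python
# Hypothetical labeling sketch: Show More -> positive, Show Less -> negative,
# and impressions with no explicit feedback are simply not labeled.
def to_training_example(event: dict):
    if event.get("feedback") == "show_more":
        return (event["user_features"], event["post_features"], 1.0)
    if event.get("feedback") == "show_less":
        return (event["user_features"], event["post_features"], 0.0)
    return None  # no feedback: excluded, rather than treated as a negative

events = [
    {"feedback": "show_more", "user_features": [0.2], "post_features": [0.7]},
    {"feedback": None,        "user_features": [0.1], "post_features": [0.4]},
    {"feedback": "show_less", "user_features": [0.9], "post_features": [0.3]},
]
examples = [ex for ev in events if (ex := to_training_example(ev)) is not None]
print(len(examples))  # -> 2: one positive, one negative
```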
These two signals are another way we can give people more control over the experience they have on Facebook. We are currently experimenting with targeted visibility when showing posts that ask for Show More/Show Less feedback. In the future, we want to present that option when it’s most useful for the person scrolling through their Feed.
We’re continuing to test new ways to customize how much content people see in Feed from the friends and family, groups, Pages, and public figures they’re connected to. For now, people can find these ranking levers — as well as others, such as Snooze, Unfollow, and Reconnect — in their Feed Preferences. We also plan to test this capability with Reels in the coming weeks. As with every product change we make, we’ll continue to use direct feedback and refine our approach to ensure that we’re offering the best possible experience.