Inspired by FAIR
Roboflow estimates a 74-year time savings across its community from using Meta’s Segment Anything
October 24, 2024
4 minute read

Roboflow CEO Joseph Nelson remembers how time-consuming segmentation was before Meta’s Segment Anything Model (SAM): people had to carefully click dot after dot around any object they wanted to segment. Segmenting objects—identifying the pixels in an image that correspond to an object of interest—is a necessary step in creating training data for some models. Until the release of Meta’s first SAM model in 2023, segmentation was a tedious task—and not always precise. SAM enabled both interactive and automatic image segmentation with unparalleled flexibility. Building on that success, Meta released SAM 2 in July 2024, which allowed for real-time, promptable object segmentation in both images and videos.
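To see why a single click replaces all that dotting, here is a minimal sketch of point-prompted segmentation with Meta's open source segment_anything package. The checkpoint name matches the public release; the image file and click coordinates are illustrative, not from Roboflow's pipeline:

```python
import cv2
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# Load a pretrained SAM checkpoint (the public ViT-H release).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)  # the image embedding is computed once

# One positive click stands in for dozens of hand-placed polygon points.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[500, 375]]),  # (x, y) pixel on the object
    point_labels=np.array([1]),           # 1 = foreground, 0 = background
    multimask_output=True,                # return several candidate masks
)
best_mask = masks[np.argmax(scores)]      # keep the highest-scoring mask
```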

“SAM 2 already knows where the segments of most objects are, so users are creating custom datasets in a fraction of the time,” Nelson says. “We’ve seen SAM 1 and SAM 2 used on over 60 million polygons, and the time savings that we’ve seen from doing so ends up being about 74 years.”
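Taken together, those two figures imply roughly 39 seconds saved per polygon. A quick back-of-the-envelope check (our arithmetic, not a Roboflow statistic):

```python
# Rough check: 74 years of savings spread across 60 million polygons.
seconds_saved = 74 * 365 * 24 * 3600       # about 2.33 billion seconds
print(round(seconds_saved / 60_000_000))   # ~39 seconds per polygon
```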

A SAM 2 single-click mask being generated for data labeling using Roboflow Annotate.

Working with an open source model has allowed for wider exploration of its capabilities and collaboration on its development, with continuous improvements and new use cases emerging as more people engage with it. The SAM team’s engagement with researchers and users across the AI and broader tech communities led to significant improvements between SAM 1 and SAM 2, demonstrating the value of open source access in developing and sharing technology. This openness fosters transparency, community-driven solutions, and a vibrant ecosystem of new tools and applications.

As a company with a mission to help make the world more programmable, Roboflow uses SAM to help build systems that give its customers visual understanding of just about anything. Roboflow’s systems are being used to aid recovery efforts in the wake of natural disasters, power instant replays at live sporting events, and streamline claims processing for insurance companies that leverage aerial imagery to evaluate damage. No matter their level of experience, people can create and deploy computer vision applications tailored to their needs, whether for commercial or social impact. You can interact with SAM 2 using Roboflow’s tools.

SAM 2 video inference running with Roboflow Workflows.
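For video, SAM 2 tracks a prompted object across frames. A minimal sketch with Meta's open source sam2 package is below; the config and checkpoint names follow the public repository at the time of writing, while the frame directory and click location are placeholders:

```python
import numpy as np
import torch
from sam2.build_sam import build_sam2_video_predictor

predictor = build_sam2_video_predictor(
    "configs/sam2.1/sam2.1_hiera_l.yaml", "sam2.1_hiera_large.pt"
)

with torch.inference_mode():
    # video_dir holds the video as a sequence of JPEG frames.
    state = predictor.init_state(video_path="video_dir")

    # One click on the first frame identifies the object to track.
    predictor.add_new_points_or_box(
        state, frame_idx=0, obj_id=1,
        points=np.array([[210, 350]], dtype=np.float32),
        labels=np.array([1], dtype=np.int32),  # 1 = foreground
    )

    # SAM 2's memory propagates the mask through the rest of the video.
    for frame_idx, obj_ids, mask_logits in predictor.propagate_in_video(state):
        masks = (mask_logits > 0.0).cpu().numpy()  # boolean masks per object
```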

Researchers, companies, and individuals building with Roboflow create their own models focused on the data most relevant to their field of work. Whether the task is classification, object detection, or image segmentation, people can quickly train models of their own. With SAM, they can automatically label, prepare, and curate visual datasets with far less manual effort, as in the sketch below.
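The segment_anything package's SamAutomaticMaskGenerator proposes masks for a whole image without any clicks, which is the kind of pre-labeling the paragraph above describes. The checkpoint and file names here are illustrative, and converting the proposals into a dataset format is left to the annotation tool:

```python
import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

image = cv2.cvtColor(cv2.imread("frame_0001.jpg"), cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(image)  # one dict per proposed segment

# Each proposal includes a boolean mask, an XYWH bounding box, and
# quality scores an annotation tool can use to filter candidates.
for m in sorted(masks, key=lambda m: m["area"], reverse=True)[:5]:
    print(m["bbox"], m["predicted_iou"], m["stability_score"])
```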

With over 500,000 publicly available datasets comprising approximately 350 million user-labeled images on Roboflow Universe, the stage is set for tools like SAM to make an impact across a multitude of industries. Roboflow is already seeing unexpected uses of SAM that showcase the technology’s versatility and broad relevance.

“Our customers produce electric vehicles, move critical goods across the US, produce ice cream—and many of them use SAM in manufacturing and logistics processes, ensuring their products meet their high quality standards before reaching consumers,” Nelson says. “So there’s all of these ways that we just couldn’t even anticipate the model being used and the impact it has made.”


SAM 2 positive and negative prompts for data labeling using Roboflow Annotate.

By enabling machines to understand visual data, our SAM series of models is opening up new possibilities for innovation and exploration. For example, at the Exploratorium museum in San Francisco, visitors can view tiny organisms under a microscope and explore open-ended questions raised by their behavior. Elsewhere, SAM is being used to monitor fish populations and more accurately assess the effectiveness of coral reef restoration efforts.

“SAM has changed the rate at which people can create high-quality models for all sorts of so-called downstream tasks,” Nelson says. “SAM is aiding the Roboflow community in adding a sense of sight where it doesn’t already exist.”


