
EvalGIM: A Library for Evaluating Generative Image Models

December 12, 2024

Abstract

As the use of text-to-image generative models increases, so does the adoption of automatic benchmarking methods used in their evaluation. However, while metrics and datasets abound, there are few unified benchmarking libraries that provide a framework for performing evaluations across many datasets and metrics. Furthermore, the rapid introduction of increasingly robust benchmarking methods requires that evaluation libraries remain flexible to new datasets and metrics. Finally, there remains a gap in synthesizing evaluations in order to deliver actionable takeaways about model performance. To enable unified, flexible, and actionable evaluations, we introduce EvalGIM (pronounced “EvalGym”), a library for evaluating generative image models. EvalGIM contains broad support for datasets and metrics used to measure quality, diversity, and consistency of text-to-image generative models. In addition, EvalGIM is designed with flexibility for user customization as a top priority and contains a structure that allows plug-and-play additions of new datasets and metrics. To enable actionable evaluation insights, we introduce “Evaluation Exercises” that highlight takeaways for specific evaluation questions. The Evaluation Exercises contain easy-to-use and reproducible implementations of two state-of-the-art evaluation methods of text-to-image generative models: consistency-diversity-realism Pareto Fronts and disaggregated measurements of performance disparities across groups. EvalGIM also contains Evaluation Exercises that introduce two new analysis methods for text-to-image generative models: robustness analyses of model rankings and balanced evaluations across different prompt styles. In this paper, we outline the EvalGIM library and provide guidance for how others can add new datasets, metrics, and visualizations to customize the library for their own use cases. We also demonstrate the utility of EvalGIM by using its Evaluation Exercises to explore several research questions about text-to-image generative models, such as the role of re-captioning training data or the relationship between quality and diversity in early training stages. We encourage text-to-image model exploration with EvalGIM and invite contributions at https://github.com/facebookresearch/EvalGIM/.
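The abstract's central design claim is a plug-and-play structure for datasets and metrics. As a rough sketch of that pattern only (the class names, method signatures, and the model.generate call below are hypothetical illustrations, not EvalGIM's actual API; see the GitHub repository for the real interfaces), a unified dataset-by-metric evaluation loop could look like:

```python
# Illustrative sketch only: the class names, method signatures, and the
# model.generate(...) call are hypothetical and do NOT reflect EvalGIM's
# actual API. See https://github.com/facebookresearch/EvalGIM/ for the
# real interfaces.
from abc import ABC, abstractmethod
from typing import Iterable, Iterator


class Metric(ABC):
    """A pluggable metric scoring generated images for a prompt set."""

    name: str = "metric"

    @abstractmethod
    def compute(self, images: list, prompts: list[str]) -> float:
        """Return a scalar score (e.g., a quality, diversity, or consistency measure)."""


class PromptDataset(ABC):
    """A pluggable dataset yielding text prompts."""

    @abstractmethod
    def __iter__(self) -> Iterator[str]:
        """Yield prompts one at a time."""


def evaluate(model, datasets: Iterable[PromptDataset], metrics: Iterable[Metric]) -> dict:
    """Run every metric on every dataset for one model.

    Because datasets and metrics share small interfaces, new ones plug in
    without changing this loop -- the "plug-and-play" idea from the abstract.
    """
    results = {}
    for dataset in datasets:
        prompts = list(dataset)
        # Hypothetical generation call; a real harness would batch and cache.
        images = [model.generate(p) for p in prompts]
        for metric in metrics:
            results[(type(dataset).__name__, metric.name)] = metric.compute(images, prompts)
    return results
```

Under this reading, the Evaluation Exercises described in the abstract (consistency-diversity-realism Pareto Fronts, disaggregated group measurements, ranking robustness, prompt-style balancing) would be analyses and visualizations layered on top of a per-dataset, per-metric results table like the one returned above, for example by comparing scores across models or checkpoints.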


AUTHORS

Melissa Hall, Oscar Mañas, Reyhane Askari, Mark Ibrahim, Candace Ross, Pietro Astolfi, Tariq Berrada Ifriqi, Marton Havasi, Yohann Benchetrit, Karen Ullrich, Carolina Braga, Abhishek Charnalia, Maeve Ryan, Mike Rabbat, Michal Drozdzal, Jakob Verbeek, Adriana Romero Soriano

Publisher

arXiv

Research Topics

Computer Vision

Related Publications

December 11, 2024

COMPUTER VISION

Video Seal: Open and Efficient Video Watermarking

Pierre Fernandez, Hady Elsahar, Zeki Yalniz, Alexandre Mourachko

December 11, 2024

NLP

COMPUTER VISION

Meta CLIP 1.2

Hu Xu, Bernie Huang, Ellen Tan, Ching-Feng Yeh, Jacob Kahn, Christine Jou, Gargi Ghosh, Omer Levy, Luke Zettlemoyer, Scott Yih, Philippe Brunet, Kim Hazelwood, Ramya Raghavendra, Daniel Li (FAIR), Saining Xie, Christoph Feichtenhofer

December 11, 2024

COMPUTER VISION

Measuring Deja Vu Memorization Efficiently

Narine Kokhlikyan, Bargav Jayaraman, Florian Bordes, Chuan Guo, Kamalika Chaudhuri

November 20, 2024

CONVERSATIONAL AI

COMPUTER VISION

Llama Guard 3 Vision: Safeguarding Human-AI Image Understanding Conversations

Jianfeng Chi, Ujjwal Karn, Hongyuan Zhan, Eric Smith, Javier Rando, Yiming Zhang, Kate Plawiak, Zacharie Delpierre Coudert, Kartikeya Upasani, Mahesh Pasupuleti
