Make-An-Animation: Large-Scale Text-conditional 3D Human Motion Generation

August 10, 2023

Abstract

Text-guided human motion generation has drawn significant interest because of its impactful applications spanning animation and robotics. Recently, the application of diffusion models to motion generation has improved the quality of generated motions. However, existing approaches are limited by their reliance on relatively small-scale motion capture data, leading to poor performance on more diverse, in-the-wild prompts. In this paper, we introduce Make-An-Animation, a text-conditioned human motion generation model that learns more diverse poses and prompts from large-scale image-text datasets, enabling a significant improvement in performance over prior work. Make-An-Animation is trained in two stages. First, we train on a curated large-scale dataset of (text, static pseudo-pose) pairs extracted from image-text datasets. Second, we fine-tune on motion capture data, adding additional layers to model the temporal dimension. Unlike prior diffusion models for motion generation, Make-An-Animation uses a U-Net architecture similar to recent text-to-video generation models. Human evaluation of motion realism and alignment with input text shows that our model reaches state-of-the-art performance on text-to-motion generation.
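
To make the two-stage recipe concrete, the sketch below shows one way the temporal fine-tuning stage could be wired up in PyTorch: a block pretrained on static (text, pseudo-pose) pairs is applied per frame, then wrapped with a newly added, zero-initialized temporal layer so that fine-tuning on motion capture data starts from the stage-one model's behavior. This is a minimal illustration under our own assumptions; the class and parameter names are hypothetical, and the paper's exact U-Net layer design may differ.

    # Hypothetical sketch of stage-two fine-tuning: wrapping a pretrained
    # per-frame (spatial) block with a new temporal layer. Not the paper's
    # actual implementation; names and shapes are illustrative assumptions.
    import torch
    import torch.nn as nn


    class TemporalConv(nn.Module):
        """1D convolution over the time axis, zero-initialized so the new
        layer starts as an identity and does not disturb pretrained weights."""

        def __init__(self, channels: int, kernel_size: int = 3):
            super().__init__()
            self.conv = nn.Conv1d(channels, channels, kernel_size,
                                  padding=kernel_size // 2)
            nn.init.zeros_(self.conv.weight)
            nn.init.zeros_(self.conv.bias)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, channels, time)
            return x + self.conv(x)  # residual: exact identity at init


    class SpatioTemporalBlock(nn.Module):
        """Applies a stage-one (pose-pretrained) block frame by frame, then
        mixes information across frames with the added temporal layer."""

        def __init__(self, spatial_block: nn.Module, channels: int):
            super().__init__()
            self.spatial = spatial_block            # pretrained in stage one
            self.temporal = TemporalConv(channels)  # added for stage two

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, time, channels) per-frame pose features
            b, t, c = x.shape
            x = self.spatial(x.reshape(b * t, c)).reshape(b, t, c)
            x = self.temporal(x.transpose(1, 2)).transpose(1, 2)
            return x


    # Usage: a stand-in spatial block; in practice this would be a block of
    # the diffusion U-Net trained on (text, static pseudo-pose) pairs.
    block = SpatioTemporalBlock(nn.Linear(64, 64), channels=64)
    motion_features = torch.randn(2, 30, 64)  # batch of 2, 30 frames
    out = block(motion_features)              # same shape: (2, 30, 64)

Zero-initializing the temporal convolution makes the wrapped block an exact identity over time at the start of stage two, a common trick when inflating image models to video that avoids disturbing the pretrained spatial weights.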

AUTHORS

Samaneh Azadi

Akbar Shah

Devi Parikh

Sonal Gupta

Thomas Hayes

Publisher

ICCV

Research Topics

Graphics

Computer Vision

Related Publications

November 11, 2025

COMPUTER VISION

SYSTEMS RESEARCH

CATransformers: Carbon Aware Transformers Through Joint Model-Hardware Optimization

Irene Wang, Mostafa Elhoushi, Ekin Sumbul, Samuel Hsia, Daniel Jiang, Newsha Ardalani, Divya Mahajan, Carole-Jean Wu, Bilge Acun

October 19, 2025

COMPUTER VISION

Enrich and Detect: Video Temporal Grounding with Multimodal LLMs

Shraman Pramanick, Effrosyni Mavroudi, Yale Song, Rama Chellappa, Lorenzo Torresani, Triantafyllos Afouras

October 19, 2025

RESEARCH

NLP

Controlling Multimodal LLMs via Reward-guided Decoding

Oscar Mañas, Pierluca D'Oro, Koustuv Sinha, Adriana Romero Soriano, Michal Drozdzal, Aishwarya Agrawal

September 23, 2025

RESEARCH

NLP

MetaEmbed: Scaling Multimodal Retrieval at Test-Time with Flexible Late Interactions

Zilin Xiao, Qi Ma, Mengting Gu, Jason Chen, Xintao Chen, Vicente Ordonez, Vijai Mohan
