June 27, 2025
Human communication involves a complex interplay of verbal and nonverbal signals, essential for conveying meaning and achieving interpersonal goals. To build socially intelligent AI technologies, we need models that can both comprehend and generate dyadic behavioral dynamics. To this end, we introduce the Seamless Interaction Dataset, a large-scale collection of over 4,000 hours of face-to-face interaction footage from more than 4,000 participants in diverse contexts. This dataset enables the development of AI technologies that understand dyadic embodied dynamics, unlocking breakthroughs in virtual agents, telepresence experiences, and multimodal content analysis tools. We also develop a suite of models that use the dataset to generate dyadic motion gestures and facial expressions aligned with human speech. These models can take as input both the speech and the visual behavior of their interlocutors. We present a variant driven by speech from an LLM, along with integrations with 2D and 3D rendering methods, bringing us closer to interactive virtual agents. Additionally, we describe controllable variants of our motion models that can adapt emotional responses and expressivity levels, as well as generate more semantically relevant gestures. Finally, we discuss methods for assessing the quality of these dyadic motion models, demonstrating the potential for more intuitive and responsive human-AI interactions.
Written by
Vasu Agrawal
Akinniyi Akinyemi
Kathryn Alvero
Morteza Behrooz
Julia Buffalini
Fabio Maria Carlucci
Joy Chen
Junming Chen
Zhang Chen
Shiyang Cheng
Praveen Chowdary
Joe Chuang
Antony D'Avirro
Jon Daly
Ning Dong
Mark Duppenthaler
Cynthia Gao
Jeff Girard
Martin Gleize
Sahir Gomez
Srivathsan Govindarajan
Brandon Han
Sen He
Denise Hernandez
Yordan Hristov
Rongjie Huang
Hirofumi Inaguma
Somya Jain
Raj Janardhan
Qingyao Jia
Christopher Klaiber
Dejan Kovachev
Moneish Kumar
Hang Li
Yilei Li
Pavel Litvin
Wei Liu
Guangyao Ma
Jing Ma
Martin Ma
Xutai Ma
Lucas Mantovani
Sagar Miglani
Sreyas Mohan
Louis-Philippe Morency
Evonne Ng
Kam-Woh Ng
Tu Anh Nguyen
Amia Oberai
Benjamin Peloquin
Jovan Popovic
Omid Poursaeed
Fabian Prada
Alice Rakotoarison
Alexander Richard
Christophe Ropers
Safiyyah Saleem
Vasu Sharma
Alex Shcherbyna
Jie Shen
Anastasis Stathopoulos
Anna Sun
Paden Tomasello
Tuan Tran
Arina Turkatenko
Bo Wan
Chao Wang
Jeff Wang
Mary Williamson
Carleigh Wood
Tao Xiang
Yilin Yang
Zhiyuan Yao
Chen Zhang
Jiemin Zhang
Xinyue Zhang
Jason Zheng
Pavlo Zhyzheria
Jan Zikes
Michael Zollhoefer
Publisher
arXiv