HUMAN & MACHINE INTELLIGENCE

CONVERSATIONAL AI

Seamless Interaction: Dyadic Audiovisual Motion Modeling and Large-Scale Dataset

June 27, 2025

Abstract

Human communication involves a complex interplay of verbal and nonverbal signals, essential for conveying meaning and achieving interpersonal goals. Building socially intelligent AI technologies requires models that can both comprehend and generate dyadic behavioral dynamics. To this end, we introduce the Seamless Interaction Dataset, a large-scale collection of over 4,000 hours of face-to-face interaction footage from more than 4,000 participants in diverse contexts. This dataset enables the development of AI technologies that understand dyadic embodied dynamics, unlocking breakthroughs in virtual agents, telepresence experiences, and multimodal content analysis tools. We also develop a suite of models that use the dataset to generate dyadic motion gestures and facial expressions aligned with human speech. These models can take as input both the speech and the visual behavior of their interlocutors. We present a variant driven by speech from an LLM, along with integrations with 2D and 3D rendering methods, bringing us closer to interactive virtual agents. Additionally, we describe controllable variants of our motion models that can adapt emotional responses and expressivity levels, as well as generate more semantically relevant gestures. Finally, we discuss methods for assessing the quality of these dyadic motion models, demonstrating the potential for more intuitive and responsive human-AI interactions.

AUTHORS

Vasu Agrawal

Akinniyi Akinyemi

Kathryn Alvero

Morteza Behrooz

Julia Buffalini

Fabio Maria Carlucci

Joy Chen

Junming Chen

Zhang Chen

Shiyang Cheng

Praveen Chowdary

Joe Chuang

Antony D'Avirro

Jon Daly

Ning Dong

Mark Duppenthaler

Cynthia Gao

Jeff Girard

Martin Gleize

Sahir Gomez

Hongyu Gong

Srivathsan Govindarajan

Brandon Han

Sen He

Denise Hernandez

Yordan Hristov

Rongjie Huang

Hirofumi Inaguma

Somya Jain

Raj Janardhan

Qingyao Jia

Christopher Klaiber

Dejan Kovachev

Moneish Kumar

Hang Li

Yilei Li

Pavel Litvin

Wei Liu

Guangyao Ma

Jing Ma

Martin Ma

Xutai Ma

Lucas Mantovani

Sagar Miglani

Sreyas Mohan

Louis-Philippe Morency

Evonne Ng

Kam-Woh Ng

Tu Anh Nguyen

Amia Oberai

Benjamin Peloquin

Juan Pino

Jovan Popovic

Omid Poursaeed

Fabian Prada

Alice Rakotoarison

Alexander Richard

Christophe Ropers

Safiyyah Saleem

Vasu Sharma

Alex Shcherbyna

Jie Shen

Anastasis Stathopoulos

Anna Sun

Paden Tomasello

Tuan Tran

Arina Turkatenko

Bo Wan

Chao Wang

Jeff Wang

Mary Williamson

Carleigh Wood

Tao Xiang

Yilin Yang

Zhiyuan Yao

Chen Zhang

Jiemin Zhang

Xinyue Zhang

Jason Zheng

Pavlo Zhyzheria

Jan Zikes

Michael Zollhoefer

Publisher

arXiv

Related Publications

May 14, 2025

HUMAN & MACHINE INTELLIGENCE

SPEECH & AUDIO

Emergence of Language in the Developing Brain

Linnea Evanson, Christine Bulteau, Mathilde Chipaux, Georg Dorfmüller, Sarah Ferrand-Sorbets, Emmanuel Raffo, Sarah Rosenberg, Pierre Bourdillon, Jean Remi King

May 13, 2025

HUMAN & MACHINE INTELLIGENCE

RESEARCH

Dynadiff: Single-stage Decoding of Images from Continuously Evolving fMRI

Marlène Careil, Yohann Benchetrit, Jean-Rémi King

April 17, 2025

HUMAN & MACHINE INTELLIGENCE

CONVERSATIONAL AI

Collaborative Reasoner: Self-improving Social Agents with Synthetic Conversations

Ansong Ni, Ruta Desai, Yang Li, Xinjie Lei, Dong Wang, Ramya Raghavendra, Gargi Ghosh, Daniel Li (FAIR), Asli Celikyilmaz

December 12, 2024

HUMAN & MACHINE INTELLIGENCE

NLP

Explore Theory-of-Mind: Program-Guided Adversarial Data Generation for Theory of Mind Reasoning

Melanie Sclar, Jane Yu, Maryam Fazel-Zarandi, Yulia Tsvetkov, Yonatan Bisk, Yejin Choi, Asli Celikyilmaz
