May 31, 2021
Aerial vehicles are revolutionizing the way filmmakers can capture shots of actors by composing novel aerial and dynamic viewpoints. However, despite great advancements in autonomous flight technology, generating expressive camera behaviors remains a challenge, and existing systems require non-technical users to edit a large number of unintuitive control parameters. In this work, we develop a data-driven framework that enables editing of these complex camera positioning parameters in a semantic space (e.g., calm, enjoyable, establishing). First, we generate a database of video clips with a diverse range of shots in a photo-realistic simulator, and use hundreds of participants in a crowd-sourcing framework to obtain scores for a set of semantic descriptors for each clip. Next, we analyze correlations between descriptors and build a semantic control space based on cinematography guidelines and human perception studies. Finally, we learn a generative model that can map a set of desired semantic video descriptors into low-level camera trajectory parameters. We evaluate our system by demonstrating that our model successfully generates shots that participants rate as having the expected degree of expression for each descriptor. We also show that our model generalizes to different scenes in both simulation and real-world experiments. Data and videos are available at: https://sites.google.com/view/robotcam.
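To make the final step concrete, here is a minimal sketch of a model that maps semantic descriptor scores to low-level camera trajectory parameters. This is not the authors' implementation: the descriptor names, the trajectory parameter names, and the plain MLP generator are all hypothetical stand-ins chosen only to illustrate the input/output interface of the learned generative model described above.

```python
# Minimal sketch (illustrative only, not the paper's architecture):
# map a vector of semantic descriptor scores to camera trajectory parameters.
import torch
import torch.nn as nn

DESCRIPTORS = ["calm", "enjoyable", "establishing"]          # semantic inputs (from the paper's examples)
TRAJECTORY_PARAMS = ["distance", "height", "tilt", "speed"]  # hypothetical low-level outputs

class SemanticToTrajectory(nn.Module):
    """Toy generator: semantic scores in, camera positioning parameters out."""

    def __init__(self, n_descriptors: int, n_params: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_descriptors, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_params),
        )

    def forward(self, scores: torch.Tensor) -> torch.Tensor:
        return self.net(scores)

model = SemanticToTrajectory(len(DESCRIPTORS), len(TRAJECTORY_PARAMS))

# Desired shot: very calm, moderately enjoyable, strongly establishing.
desired = torch.tensor([[0.9, 0.5, 0.8]])
params = model(desired)
print(dict(zip(TRAJECTORY_PARAMS, params.squeeze().tolist())))
```

In the paper, such a model would be trained on the crowd-sourced descriptor ratings; the network above is untrained and serves only to show how a set of desired semantic scores could be turned into trajectory parameters in a single forward pass.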
Written by
Rogerio Bonatti
Arthur Bucker
Sebastian Scherer
Mustafa Mukadam
Jessica Hodgins
Publisher
ICRA 2021