April 14, 2020
We present an end-to-end system for real-time environment capture, 3D reconstruction, and stereoscopic view synthesis on a mobile VR headset. Our solution allows the user to use the cameras on their VR headset as their eyes to see and interact with the real world while still wearing their headset, a feature often referred to as Passthrough. The central challenge when building such a system is the choice and implementation of algorithms under the strict compute, power, and performance constraints imposed by the target user experience and mobile platform. A key contribution of this paper is a complete description of a system that performs temporally stable passthrough rendering at 72 Hz with only 200 mW power consumption on a mobile Snapdragon 835 platform. Our algorithmic contributions for enabling this performance include the computation of a coarse 3D scene proxy on the embedded video encoding hardware, followed by a depth densification and filtering step, and finally stereoscopic texturing and spatio-temporal up-sampling. We provide a detailed discussion and evaluation of the challenges we encountered, as well as the algorithm and performance trade-offs in terms of compute and resulting passthrough quality. The described system is available to users as the Passthrough+ feature on Oculus Quest. We believe that by publishing the underlying system and methods, we provide valuable insights to the community on how to design and implement real-time environment sensing on heavily resource-constrained hardware.
Written by
Gaurav Chaurasia
Arthur Nieuwoudt
Alexandru-Eugen Ichim
Richard Szeliski
Alexander Sorkine-Hornung
Publisher
Association for Computing Machinery (ACM)