The Meta Synthetic Environments (MSE) Lidar Dataset is a first-of-its-kind, large-scale single-photon lidar dataset, built on top of Aria Synthetic Environments (ASE) and intended to unlock new machine learning capabilities for single-photon lidars.

Single-photon lidars – composed of a pulsed laser and a single-photon avalanche diode (SPAD) sensor – emit light pulses into the scene and measure the time the light takes to return to the sensor. At every pixel, the sensor captures a histogram of light intensity over time, called a transient. While existing lidar datasets typically provide only sparse point clouds, our dataset includes the full lidar transients, which carry information about dense depth, occluded geometry, and material properties of the scene. For more information, please refer to our paper.
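To make the transient concept concrete, here is a minimal sketch of how a coarse depth map can be recovered from a stack of transients by locating the peak (first-bounce return) in each per-pixel histogram. The array shape and the 100 ps bin width are assumptions for illustration, not the dataset's actual format.

```python
import numpy as np

C = 299_792_458.0        # speed of light in m/s
BIN_WIDTH_S = 100e-12    # assumed histogram bin width (100 ps)

def depth_from_transients(transients: np.ndarray) -> np.ndarray:
    """transients: (H, W, T) array of photon counts per time bin."""
    peak_bin = transients.argmax(axis=-1)    # time bin of the strongest return
    time_of_flight = peak_bin * BIN_WIDTH_S  # round-trip travel time in seconds
    return time_of_flight * C / 2.0          # halve for one-way distance

# Tiny synthetic check: one pixel whose return peaks in time bin 200
t = np.zeros((1, 1, 512))
t[0, 0, 200] = 50.0
print(depth_from_transients(t)[0, 0])  # ~3.0 m for 100 ps bins
```

A real pipeline would account for the laser pulse shape and sensor jitter rather than taking a raw argmax, but the peak-to-depth conversion above is the core idea behind depth from time of flight.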
The Meta Synthetic Environments Lidar Dataset contains close to 100,000 renderings of synthetic indoor scenes from ASE. Data from each scene is packed into a separate tarball.

* The tarball filename is the ASE scene ID, and the sample IDs in ids.txt indicate which samples (viewpoints) from that scene the renders correspond to, allowing all corresponding data from ASE to be found.

Each rendered viewpoint corresponds to a view from the ASE dataset, meaning all assets from ASE can be used in conjunction with the Meta Synthetic Environments Lidar Dataset. More information on using the dataset, along with code for parsing it, is provided in the Shoot-Bounce-3D code repository.
If you use this dataset, please cite the following paper:
@inproceedings{ShootBounce3D,
  author    = {Klinghoffer, Tzofi and
               Somasundaram, Siddharth and
               Xiang, Xiaoyu and
               Fan, Yuchen and
               Richardt, Christian and
               Dave, Akshat and
               Raskar, Ramesh and
               Ranjan, Rakesh},
  title     = {{Shoot-Bounce-3D}: Single-Shot Occlusion-Aware
               {3D} from Lidar by Decomposing Two-Bounce Light},
  booktitle = {SIGGRAPH Asia},
  year      = {2025},
  url       = {https://shoot-bounce-3d.github.io},
}