JAN 01, 2025

Common Objects in 3D: Large-Scale Learning and Evaluation of Real-life 3D Category Reconstruction

[Update] UnCommon Objects in 3D (uCO3D) is a dataset of crowd-sourced, object-centric videos designed for training and benchmarking deep learning models for tasks such as 3D generation and reconstruction. The dataset contains 170k videos spanning 1k LVIS categories, each annotated with a 3D Gaussian Splat reconstruction, camera poses, object segmentations, scene-centric captions, depth maps, and a high-quality SfM-estimated point cloud. uCO3D has been used to train large 3D networks, including LRM, CAT3D, and Instant3D, yielding better performance than training on previous datasets. The dataset was introduced in our CVPR 2025 Paper.
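To illustrate how the camera-pose and point-cloud annotations fit together, below is a minimal NumPy sketch of projecting an SfM point cloud into a frame with a pinhole camera model. The function and field names are hypothetical, and the sign/row-vector conventions are an assumption; the actual dataset API and camera conventions (e.g., PyTorch3D-style cameras) should be checked against the official documentation.

```python
import numpy as np

def project_points(points_world, R, T, focal, principal_point):
    """Project Nx3 world-space points to pixel coordinates with a
    pinhole camera. R (3x3) and T (3,) map world to camera space;
    `focal` and `principal_point` are in pixels. The convention used
    here is illustrative -- verify against the dataset docs."""
    cam = points_world @ R.T + T          # world -> camera coordinates
    cam = cam[cam[:, 2] > 0]              # keep points in front of the camera
    uv = cam[:, :2] / cam[:, 2:3]         # perspective division
    return uv * focal + principal_point   # scale to pixel coordinates

# Toy usage with random data standing in for real annotations.
rng = np.random.default_rng(0)
cloud = rng.normal(size=(1000, 3)) + np.array([0.0, 0.0, 5.0])
pixels = project_points(cloud, np.eye(3), np.zeros(3), focal=500.0,
                        principal_point=np.array([256.0, 256.0]))
print(pixels.shape)  # (N_visible, 2)
```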

JULY 22, 2021

Common Objects in 3D (CO3D) is a dataset designed for learning category-specific 3D reconstruction and new-view synthesis from multi-view images of common object categories. The dataset was introduced in our ICCV 2021 Paper.


Overview

Learning to reconstruct the 3D structure of object categories has mostly been explored with synthetic datasets, owing to the scarcity of suitable real data. CO3D facilitates progress in this field by providing a large-scale dataset of real multi-view images of common object categories, annotated with camera poses and ground-truth 3D point clouds.

The CO3D dataset contains a total of 1.5 million frames from nearly 19,000 videos capturing objects from 50 MS-COCO categories, surpassing alternative datasets in both the number of categories and the number of objects. The dataset is suitable for learning category-specific 3D reconstruction and new-view synthesis methods, such as the seminal NeRF.
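For context, new-view synthesis methods like NeRF render a pixel by integrating predicted color and density along a camera ray. The sketch below is a minimal NumPy version of the standard volume-rendering quadrature, included purely for illustration; it is not code from the CO3D release.

```python
import numpy as np

def render_ray(sigmas, colors, deltas):
    """NeRF-style volume-rendering quadrature for one ray.

    sigmas: (S,)   densities at S samples along the ray
    colors: (S, 3) RGB radiance at those samples
    deltas: (S,)   distances between consecutive samples

    Returns the composited RGB value for the ray.
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)  # per-sample opacity
    # Transmittance: probability the ray reaches sample i unoccluded.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = alphas * trans                 # compositing weights
    return (weights[:, None] * colors).sum(axis=0)

# Toy example: 64 samples of a constant red density field.
S = 64
rgb = render_ray(np.full(S, 0.5), np.tile([1.0, 0.0, 0.0], (S, 1)),
                 np.full(S, 0.1))
print(rgb)  # approaches pure red as total optical depth grows
```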


Dataset Statistics

  • Videos: 18,619

  • Categories: 50 MS-COCO

  • Camera-annotated frames: 1.5 million

  • Point-cloud-annotated videos: 5,625


Getting started

1. Download the dataset here.

2.

3. Read the README.md that describes how to visualize and evaluate on the dataset.
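To get a feel for the data layout after downloading, here is a hedged sketch of reading the per-category frame annotations. It assumes the gzipped-JSON `frame_annotations.jgz` files of the public CO3D release; the field names follow that release but should be verified against the README.

```python
import gzip
import json

# Path is illustrative; point it at a downloaded category directory.
with gzip.open("co3d/apple/frame_annotations.jgz", "rt") as f:
    frames = json.load(f)

print(f"{len(frames)} annotated frames")
first = frames[0]
# Typical fields in the public release (verify against the README):
print(first["sequence_name"], first["frame_number"])
print(first["image"]["path"])   # relative path to the RGB frame
print(first["viewpoint"]["R"])  # camera rotation (3x3)
print(first["viewpoint"]["T"])  # camera translation (3,)
```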