January 21, 2020
We're releasing a major update to Facebook AI's open source AI Habitat platform, which enables significantly faster training of embodied AI agents in a variety of photorealistic 3D virtual environments. AI Habitat now supports interactive objects, realistic physics modeling, improved rendering, seamless transfer from virtual to physical environments, and a more flexible user interface with in-browser support for running simulations.
With these enhancements, researchers can use AI Habitat to train and test agents that not only move through photorealistic virtual environments but also interact with the objects those environments contain.
When we released AI Habitat last year, it offered compatibility with Facebook Reality Labs' Replica dataset, one of the most photorealistic collections of 3D environment reconstructions available; a flexible and modular design; and highly efficient training that can render 10,000 frames per second on a single GPU.
In addition to numerous new tools and performance improvements, today's release builds on these existing features in significant ways:
Researchers can now import objects from a library (e.g., household objects from the YCB dataset or furniture models) and perform programmatic scene construction with instructions such as "Add a chair here." The first code sketch after this list illustrates this workflow.
Habitat now supports rigid-body physics via the Bullet physics engine, so researchers can, for example, apply forces and torques to objects or check for collisions (also shown in the first sketch below).
Researchers can now run the same code in AI Habitat and on a physical robot (such as a LoCoBot) using the Habitat-PyRobot-Bridge, which includes realistic noise models for LoCoBot actuators and depth sensors; the second sketch after this list shows the intended switch. (More details are available in this paper.)
Habitat now runs in the browser. With WebGL rendering and a JavaScript API, researchers can easily compare agents' performance with that of real people.
Habitat now offers TensorBoard support and an improved API for Habitat baselines.
AI Habitat now supports HDR textures in Replica environments and offers preliminary support for Oculus Quest VR.
A new embodied question-answering task has been added to Habitat-API; the final sketch after this list shows how to load it.
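To make the new interactivity concrete, here is a minimal sketch of programmatic object insertion and rigid-body stepping with habitat-sim. The scene and object paths are placeholders for locally downloaded assets, and the exact attribute and method names follow habitat-sim's interactivity tutorials of this era, so they may differ slightly across versions:

```python
import habitat_sim

# Minimal sketch (not the official example): configure a simulator with
# physics enabled. Scene and object paths are placeholders for local assets.
backend_cfg = habitat_sim.SimulatorConfiguration()
backend_cfg.scene.id = "data/scene_datasets/habitat-test-scenes/van-gogh-room.glb"
backend_cfg.enable_physics = True  # requires a habitat-sim build with Bullet

rgb_spec = habitat_sim.SensorSpec()
rgb_spec.uuid = "rgb"
rgb_spec.sensor_type = habitat_sim.SensorType.COLOR

agent_cfg = habitat_sim.agent.AgentConfiguration()
agent_cfg.sensor_specifications = [rgb_spec]

sim = habitat_sim.Simulator(habitat_sim.Configuration(backend_cfg, [agent_cfg]))

# Programmatic scene construction: load object templates (e.g., YCB assets)
# and drop one into the scene at a chosen position.
template_mgr = sim.get_object_template_manager()
template_ids = template_mgr.load_configs("data/objects")
obj_id = sim.add_object(template_ids[0])
sim.set_translation([1.0, 0.5, 0.0], obj_id)

# Rigid-body physics via Bullet: push the object and advance the simulation.
sim.apply_force([0.0, 0.0, -2.0], [0.0, 0.0, 0.0], obj_id)
sim.step_physics(1.0 / 60.0)
print("object is now at", sim.get_translation(obj_id))
```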
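The sim-to-real handoff is, as we understand it, a config-level swap: the same agent code drives either the simulator or a LoCoBot depending on which registered simulator the task config names. The sketch below assumes the Habitat-API config keys and the registered "PyRobot-v0" simulator name; treat the exact keys as illustrative:

```python
import habitat

# Hedged sketch of the Habitat-PyRobot-Bridge: swapping the registered
# simulator name routes the same actions to a physical LoCoBot. Config keys
# here are assumptions based on Habitat-API conventions.
config = habitat.get_config("configs/tasks/pointnav.yaml")
config.defrost()
config.SIMULATOR.TYPE = "PyRobot-v0"  # "Sim-v0" runs the same task in simulation
config.freeze()

env = habitat.Env(config=config)
observations = env.reset()  # observations now come from real robot sensors
```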
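Loading the new embodied question-answering task should look like loading any other Habitat-API task. The config path and episode fields below are assumptions based on Habitat-API conventions of the time, not a confirmed snippet from the release:

```python
import habitat

# Hypothetical config path; the EQA task expects its episode dataset to be
# downloaded locally (Matterport3D-based episodes in this example).
config = habitat.get_config("configs/tasks/eqa_mp3d.yaml")
env = habitat.Env(config=config)

observations = env.reset()
episode = env.current_episode
print(episode.question.question_text)  # e.g., "what color is the sofa?"
print(episode.question.answer_text)
```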
By teaching agents in virtual worlds, researchers can make much faster progress on tasks necessary to build better AI assistants and robots that operate more intelligently in complex situations in the physical world. For this training to be most effective, these agents must not only move through virtual environments, but also push, pull, and manipulate objects in those spaces. AI Habitat now makes it easy to do this efficiently and then benchmark results and compare performance across different datasets. These improvements in Habitat will accelerate and simplify the use of virtual environments to develop smarter and more capable agents.