ONNX is an open format for representing deep learning models, allowing AI developers to move models easily between state-of-the-art tools and choose the combination that works best for them. ONNX accelerates the path from research to production by enabling interoperability across popular tools including PyTorch, Caffe2, Microsoft Cognitive Toolkit, Apache MXNet, and more.
ONNX enables a model to be trained in one framework and then exported and deployed to another framework for inference. ONNX models are currently supported in frameworks such as PyTorch, Caffe2, Microsoft Cognitive Toolkit, Apache MXNet, and Chainer, with additional support in Core ML, TensorFlow, Qualcomm SNPE, Nvidia's TensorRT, and Intel's nGraph.
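As a minimal sketch of that workflow, here is what exporting a PyTorch model to ONNX can look like. The tiny model, input shape, and `model.onnx` file name are illustrative placeholders, and PyTorch is assumed to be installed:

```python
# Train (or load) a model in PyTorch, then export it to ONNX so other
# frameworks and runtimes can consume it for inference.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 5), nn.ReLU(), nn.Linear(5, 2))
model.eval()  # export in inference mode

dummy_input = torch.randn(1, 10)  # example input that fixes the graph's shapes
torch.onnx.export(model, dummy_input, "model.onnx")
```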
Any tool that exports ONNX models can benefit from ONNX-compatible runtimes and libraries designed to maximize performance. On the hardware side, ONNX is supported by partners including Qualcomm (SNPE), AMD, ARM, Intel, and others.
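For example, ONNX Runtime is one widely used ONNX-compatible runtime. A minimal inference sketch, assuming `onnxruntime` is installed and reusing the placeholder `model.onnx` and input shape from the export sketch above:

```python
# Run an exported ONNX model with ONNX Runtime on CPU.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name      # query the graph's input name
x = np.random.randn(1, 10).astype(np.float32)  # dummy input matching the model
outputs = session.run(None, {input_name: x})   # None = return all outputs
print(outputs[0].shape)
```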
Install ONNX from binaries using pip or conda, or build from source.
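Once installed by either route, a quick sanity check confirms the package is importable:

```python
# After `pip install onnx` or `conda install -c conda-forge onnx`,
# verify the installation and report the version.
import onnx
print(onnx.__version__)
```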
Review documentation and tutorials to familiarize yourself with ONNX's functionality and advanced features.
To get started, follow the import and export directions for the frameworks you're using.
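On the ONNX side of that round trip, a common first step is to load a serialized model and validate it. A short sketch, where `model.onnx` is a placeholder for a file exported from your framework:

```python
# Load an exported ONNX model and validate it against the ONNX spec.
import onnx

model = onnx.load("model.onnx")   # deserialize the ModelProto
onnx.checker.check_model(model)   # raises if the model is malformed
print(onnx.helper.printable_graph(model.graph))  # human-readable graph dump
```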
Explore and try out the community's models in the ONNX Model Zoo.
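As a hedged sketch, the `onnx.hub` module can fetch zoo models directly by name. This assumes network access and that the model name appears in the zoo's manifest (the name `resnet50` is used here for illustration):

```python
# Download a community model from the ONNX Model Zoo and inspect it.
from onnx import hub

model = hub.load("resnet50")  # downloads and caches the ModelProto
print(model.graph.name)
```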