August 08, 2022
We introduce Opacus, a free, open-source PyTorch library for training deep learning models with differential privacy (hosted at opacus.ai). Opacus is designed for simplicity, flexibility, and speed. It provides a simple, user-friendly API that enables machine learning practitioners to make a training pipeline private by adding as little as two lines to their code. It supports a wide variety of layers out of the box, including multi-head attention, convolution, LSTM, GRU (and generic RNN), and embedding, and provides the means to support other user-defined layers. Opacus computes batched per-sample gradients, yielding higher efficiency than the traditional "micro batch" approach. In this paper we present Opacus, detail the principles that drove its implementation and its unique features, and benchmark it against other frameworks for training models with differential privacy as well as against standard PyTorch.
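The per-sample gradient computation at the heart of DP-SGD, which Opacus vectorizes over the batch, can be illustrated with a dependency-free sketch. This is plain Python on a one-parameter linear model with hypothetical function names, not the Opacus API (Opacus operates on PyTorch modules): each example's gradient is clipped individually before averaging and noising.

```python
import random

def per_sample_grads(w, batch):
    # Gradient of the squared loss (w*x - y)^2 with respect to w,
    # computed separately for every (x, y) example in the batch.
    return [2.0 * (w * x - y) * x for x, y in batch]

def dp_sgd_step(w, batch, lr=0.1, max_grad_norm=1.0,
                noise_multiplier=1.0, rng=None):
    # One DP-SGD update: clip each per-sample gradient, average,
    # add Gaussian noise scaled by noise_multiplier * max_grad_norm.
    rng = rng or random.Random(0)
    grads = per_sample_grads(w, batch)
    clipped = [g * min(1.0, max_grad_norm / (abs(g) + 1e-12))
               for g in grads]
    noisy_sum = sum(clipped) + rng.gauss(0.0, noise_multiplier * max_grad_norm)
    return w - lr * noisy_sum / len(batch)
```

The key point the sketch illustrates is why per-sample (rather than microbatch) gradients matter: clipping must be applied per example to bound each individual's influence, and computing those gradients in one batched pass, as Opacus does, avoids the per-example forward/backward loop of microbatching.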
Written by
Ashkan Yousefpour
Akash Bharadwaj
Alex Sablayrolles
Graham Cormode
Igor Shilov
Jessica Zhao
Mani Malek
Sayan Ghosh
Publisher
Privacy in Machine Learning Workshop, in conjunction with NeurIPS