Learn-to-Share: A Hardware-friendly Transfer Learning Framework Exploiting Computation and Parameter Sharing

June 30, 2021

Abstract

Task-specific fine-tuning of pre-trained transformers has achieved performance breakthroughs across multiple NLP tasks. Yet, because both computation and parameter size grow linearly with the number of sub-tasks, such methods are increasingly difficult to deploy in the real world due to the memory and computation overhead they impose on computing devices. Previous work on fine-tuning focuses on reducing the growing parameter size to save storage cost through parameter sharing. However, compared to storage, computation is the more critical constraint for fine-tuned models in modern computing environments. In this work, we propose LeTS, a framework that leverages both computation and parameter sharing across multiple tasks. In contrast to traditional fine-tuning, LeTS introduces a novel neural architecture consisting of a fixed pre-trained transformer model plus learnable additive components for sub-tasks. The learnable components reuse the intermediate activations of the fixed pre-trained model, decoupling the computation dependency between tasks. Differentiable neural architecture search is used to determine a task-specific computation-sharing scheme, and a novel early-stage pruning is applied to the additive components to induce sparsity and achieve parameter sharing. Extensive experiments show that, with 1.4% extra parameters per task, LeTS reduces computation by 49.5% on the GLUE benchmark with only 0.2% accuracy loss compared to full fine-tuning.
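
The sketch below is a minimal PyTorch illustration of the general idea behind computation sharing with a frozen backbone: the pre-trained model's activations are computed once and reused by a small, learnable, task-specific additive module, so only the lightweight module is trained and stored per task. It is not the paper's exact LeTS architecture; the module name, bottleneck size, and the single linear layer standing in for a transformer block are assumptions made for illustration.

```python
# Minimal sketch of additive, per-task components reusing a frozen backbone's
# activations. Illustrative only; not the paper's exact LeTS architecture.
import torch
import torch.nn as nn

class AdditiveTaskModule(nn.Module):
    """Small learnable component added on top of a frozen backbone activation."""
    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)  # project down to a small bottleneck
        self.up = nn.Linear(bottleneck_dim, hidden_dim)    # project back up

    def forward(self, frozen_activation: torch.Tensor) -> torch.Tensor:
        # Reuse the cached activation from the fixed model; only these
        # lightweight parameters are trained and stored per task.
        return frozen_activation + self.up(torch.relu(self.down(frozen_activation)))

hidden_dim = 768
# A single linear layer stands in for a pre-trained transformer block (assumption).
frozen_backbone = nn.Linear(hidden_dim, hidden_dim)
for p in frozen_backbone.parameters():
    p.requires_grad = False             # the backbone is fixed and shared across tasks

task_module = AdditiveTaskModule(hidden_dim)

x = torch.randn(2, 16, hidden_dim)      # (batch, sequence length, hidden size)
shared_activation = frozen_backbone(x)  # computed once, reusable by every task
task_output = task_module(shared_activation)
```

Because the backbone is fixed, its intermediate activations can be computed once and fed to each task's additive module, which is the intuition behind decoupling per-task computation from the shared pre-trained model.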

AUTHORS

Cheng Fu

Hanxian Huang

Xinyun Chen

Yuandong Tian

Jishen Zhao

Publisher

ICML 2021

Research Topics

Core Machine Learning

Systems Research

Related Publications

August 08, 2022

Core Machine Learning

Opacus: User-Friendly Differential Privacy Library in PyTorch

Ashkan Yousefpour, Akash Bharadwaj, Alex Sablayrolles, Graham Cormode, Igor Shilov, Ilya Mironov, Jessica Zhao, John Nguyen, Karthik Prasad, Mani Malek, Sayan Ghosh

December 06, 2018

Systems Research

Rethinking floating point for deep learning

Jeff Johnson

June 22, 2015

Systems Research

NLP

Fast Convolutional Nets With fbfft: A GPU Performance Evaluation

Nicolas Vasilache, Jeff Johnson, Michael Mathieu, Soumith Chintala, Serkan Piantino, Yann LeCun

March 02, 2020

Systems Research

Federated Optimization in Heterogeneous Networks

Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, Virginia Smith

September 01, 2020

Systems Research

ResiliNet: Failure-Resilient Inference in Distributed Neural Networks

Ashkan Yousefpour, Brian Q. Nguyen, Siddartha Devic, Guanhua Wang, Aboudy Kreidieh, Hans Lobel, Alexandre M. Bayen, Jason P. Jue
