Federated Learning with Partial Model Personalization

July 13, 2022


We consider two federated learning algorithms for training partially personalized models, where the shared and personal parameters are updated either simultaneously or alternately on the devices. Both algorithms have been proposed in the literature, but their convergence properties are not fully understood, especially for the alternating variant. We provide convergence analyses of both algorithms in the general non-convex setting with partial participation and delineate the regime where one dominates the other. Our experiments on real-world image, text, and speech datasets demonstrate that (a) partial personalization can obtain most of the benefits of full model personalization with a small fraction of personal parameters, and (b) the alternating update algorithm outperforms the simultaneous update algorithm by a small but consistent margin.
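The alternating scheme described above can be illustrated with a minimal sketch. This is not the paper's implementation; the model (a toy linear predictor), function names, and hyperparameters are all illustrative assumptions. Each client holds personal parameters `v_k` and a local copy of the shared parameters `u`; in one round, the client first refines `v_k` with `u` frozen, then updates its copy of `u` with `v_k` frozen, and the server averages the clients' copies of `u`.

```python
import numpy as np

# Hedged sketch of partially personalized federated learning with
# alternating updates. The toy model predicts y = X @ (u + v_k), where
# u is shared across clients and v_k is personal to client k. All names
# and hyperparameters here are illustrative, not from the paper.

def gradient(u, v, X, y):
    """Gradient of the mean squared error 0.5 * ||X(u + v) - y||^2 / n.
    For this additive toy model the gradients w.r.t. u and v coincide."""
    residual = X @ (u + v) - y
    return X.T @ residual / len(y)

def alternating_round(u, personals, data, lr=0.1, steps=5):
    """One communication round of the alternating update scheme:
    each client first takes `steps` gradient steps on its personal
    parameters with u frozen, then `steps` steps on a local copy of u
    with the personal parameters frozen; the server averages the copies."""
    local_us = []
    for k, (X, y) in enumerate(data):
        v = personals[k]
        for _ in range(steps):               # personal update (u frozen)
            v = v - lr * gradient(u, v, X, y)
        u_k = u.copy()
        for _ in range(steps):               # shared update (v frozen)
            u_k = u_k - lr * gradient(u_k, v, X, y)
        personals[k] = v
        local_us.append(u_k)
    return np.mean(local_us, axis=0), personals

def total_loss(u, personals, data):
    """Average squared-error loss across clients."""
    return np.mean([
        0.5 * np.mean((X @ (u + personals[k]) - y) ** 2)
        for k, (X, y) in enumerate(data)
    ])
```

In the simultaneous variant, the two inner loops would instead take gradient steps on `u_k` and `v` at the same time from the same iterate; the server-side averaging of the shared parameters is unchanged, and the personal parameters never leave the device in either variant.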



Written by

Lin Xiao

Abdelrahman Mohamed

Kshitiz Malik

Maziar Sanjabi

Mike Rabbat

Krishna Pillutla


ICML (International Conference on Machine Learning)

Research Topics

Core Machine Learning
