A Survey on Efficient Federated Learning Methods for Foundation Model Training
Abstract
Federated Learning (FL) has become an established technique to facilitate privacy-preserving collaborative training. However, new approaches to FL are often evaluated on small deep-learning models only. With the tremendous success of transformer models, the following question arises: What is necessary to operationalize foundation models in an FL application? Knowing that computation and communication often take up similar amounts of time in FL, we introduce a novel taxonomy focused on computational and communication efficiency methods in FL applications. These methods aim to optimize training time and reduce communication between clients and the server. We also examine the current state of widely used FL frameworks and discuss future research potential based on existing approaches in FL research and beyond.
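To make the computation/communication split the abstract refers to concrete, the following is a minimal, illustrative sketch of a single FedAvg-style round, not code from the paper: local training on each client is the computation cost, while sending weights to and from the server is the communication cost that efficiency methods try to reduce. All function names and the toy linear model are hypothetical.

```python
# Hypothetical sketch of one FedAvg communication round (not from the paper).
import numpy as np

def local_update(global_weights, client_data, lr=0.1):
    """Computation: one local gradient step on a toy linear model."""
    X, y = client_data
    preds = X @ global_weights
    grad = X.T @ (preds - y) / len(y)  # gradient of the MSE loss
    return global_weights - lr * grad

def fedavg_round(global_weights, clients):
    """Communication: the server broadcasts the global weights,
    collects each client's update, and averages them."""
    updates = [local_update(global_weights.copy(), data) for data in clients]
    return np.mean(updates, axis=0)

# Toy usage: 3 clients with equally sized local datasets, 5 features.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 5)), rng.normal(size=20)) for _ in range(3)]
w = np.zeros(5)
for _ in range(10):  # 10 communication rounds
    w = fedavg_round(w, clients)
```

For a foundation model, `global_weights` would be billions of parameters, so both the per-round local training and the weight transfer in `fedavg_round` become bottlenecks; this is the setting the surveyed efficiency methods target.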