arxiv:2306.02320

Arbitrary Few Parameters are Good Enough for Adapting Large-scale Pre-trained Language Models

Published on Jun 4, 2023

Abstract

Parameter-efficient tuning (PET) methods can effectively drive extremely large pre-trained language models (PLMs) by training only a minimal set of parameters. Different PET methods utilize different manually designed modules. In a small PLM, there are usually noticeable performance differences among PET methods. Nevertheless, when a PLM scales up to tens of billions of parameters, all PET methods achieve almost the same performance and even perform on par with full-parameter fine-tuning. Hence, we hypothesize that model scaling can mitigate the design differences (the module structures and the number of trainable parameters) among PET methods. To study this hypothesis, we introduce a more flexible PET method, the arbitrary PET (APET) method, which is compatible with arbitrary module structures and any number of trainable parameters. We then experiment on 11 NLP tasks of 5 types with 2 representative PLMs. From our investigations, we find that model scaling (1) mitigates the effects of arbitrary module structures on the performance of tuning methods, and (2) enables tuning methods to optimize fewer parameters while still reaching full-parameter fine-tuning performance. Intriguingly, we also observe that all tuning methods require almost the same number of trainable parameters to drive PLMs. We discuss this phenomenon and the above two findings collectively from an optimization perspective to fathom the mechanisms behind them. These conclusions not only demonstrate the positive impact of model scaling on tuning methods but also disclose its mechanisms, which help us design more effective and efficient tuning methods for larger-scale PLMs.
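
To make the core idea concrete, the sketch below shows the general pattern of parameter-efficient tuning that the abstract describes: freeze a pre-trained model and unfreeze only an arbitrary, small subset of parameters for training. This is a minimal PyTorch illustration of that pattern, not the authors' APET implementation; the toy model, the chosen subset, and all hyperparameters are placeholders.

```python
import torch
import torch.nn as nn

# Toy stand-in for a pre-trained language model; any nn.Module works the same way.
model = nn.Sequential(
    nn.Embedding(1000, 64),
    nn.Linear(64, 64),
    nn.ReLU(),
    nn.Linear(64, 2),
)

# Freeze every pre-trained weight.
for p in model.parameters():
    p.requires_grad = False

# Unfreeze an arbitrary subset of parameter tensors (here: the last layer's weight).
# In the spirit of APET, both which modules to tune and how many parameters to tune
# are left free rather than fixed by a hand-designed module structure.
trainable = [model[3].weight]
for p in trainable:
    p.requires_grad = True

optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)

# One illustrative training step on random data.
x = torch.randint(0, 1000, (8,))
y = torch.randint(0, 2, (8,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
optimizer.step()
optimizer.zero_grad()
```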
