arxiv:2110.07038

Towards Efficient NLP: A Standard Evaluation and A Strong Baseline

Published on Oct 13, 2021
Abstract

Supersized pre-trained language models have pushed the accuracy of various natural language processing (NLP) tasks to a new state-of-the-art (SOTA). Rather than pursuing ever-higher SOTA accuracy, more and more researchers are paying attention to model efficiency and usability. Unlike accuracy, the metric for efficiency varies across studies, making results hard to compare fairly. To that end, this work presents ELUE (Efficient Language Understanding Evaluation), a standard evaluation and public leaderboard for efficient NLP models. ELUE is dedicated to depicting the Pareto frontier for various language understanding tasks, so that it can tell whether and by how much a method achieves a Pareto improvement. Along with the benchmark, we also release a strong baseline, ElasticBERT, which allows BERT to exit at any layer in both static and dynamic ways. We demonstrate that ElasticBERT, despite its simplicity, outperforms or performs on par with SOTA compressed and early-exiting models. With ElasticBERT, the proposed ELUE has a strong Pareto frontier and provides a better evaluation for efficient NLP models.
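The dynamic exiting described above typically attaches a classifier to each layer and stops as soon as the current layer's prediction is confident enough. The sketch below illustrates the general idea with an entropy-based stopping rule; it is a minimal illustration, not ElasticBERT's exact criterion, and `dynamic_early_exit` plus its threshold are hypothetical names chosen for this example.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def entropy(probs):
    """Shannon entropy (nats) of a probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def dynamic_early_exit(per_layer_logits, threshold=0.3):
    """Walk the per-layer classifier outputs in order and exit at the
    first layer whose prediction entropy drops below `threshold`.
    Falls back to the last layer if no layer is confident enough.
    Returns (exit_layer_index, predicted_class)."""
    for layer_idx, logits in enumerate(per_layer_logits):
        probs = softmax(logits)
        if entropy(probs) < threshold:
            return layer_idx, max(range(len(probs)), key=probs.__getitem__)
    # No layer was confident: use the final layer's prediction.
    probs = softmax(per_layer_logits[-1])
    return len(per_layer_logits) - 1, max(range(len(probs)), key=probs.__getitem__)

# Mock per-layer logits for a binary task: layer 0 is uncertain,
# layer 1 is already confident, so inference stops early.
layers = [[0.1, 0.2], [4.0, 0.1], [5.0, 0.1]]
exit_layer, pred = dynamic_early_exit(layers, threshold=0.3)
print(exit_layer, pred)  # exits at layer 1 with class 0
```

Skipping the remaining layers for easy inputs is what shifts a model toward the Pareto frontier: average compute drops while accuracy on confident examples is preserved.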
