arxiv:1909.11942

ALBERT: A Lite BERT for Self-supervised Learning of Language Representations

Published on Sep 26, 2019
Authors: Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut

Abstract

Increasing model size when pretraining natural language representations often results in improved performance on downstream tasks. However, at some point further model increases become harder due to GPU/TPU memory limitations and longer training times. To address these problems, we present two parameter-reduction techniques to lower memory consumption and increase the training speed of BERT. Comprehensive empirical evidence shows that our proposed methods lead to models that scale much better compared to the original BERT. We also use a self-supervised loss that focuses on modeling inter-sentence coherence, and show it consistently helps downstream tasks with multi-sentence inputs. As a result, our best model establishes new state-of-the-art results on the GLUE, RACE, and SQuAD benchmarks while having fewer parameters compared to BERT-large. The code and the pretrained models are available at https://github.com/google-research/ALBERT.
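
The abstract does not name the two parameter-reduction techniques; in the paper they are factorized embedding parameterization and cross-layer parameter sharing. The following is a minimal PyTorch sketch of the first idea only, not the official (TensorFlow-based) ALBERT code, with illustrative sizes V=30000, E=128, H=4096 chosen here for the parameter-count comparison:

```python
import torch
import torch.nn as nn

class FactorizedEmbedding(nn.Module):
    """Sketch of ALBERT-style factorized embedding parameterization.

    Token embeddings are learned in a small space of size E and projected
    up to the Transformer hidden size H, replacing the O(V*H) embedding
    table of BERT with O(V*E + E*H) parameters.
    """

    def __init__(self, vocab_size: int, embedding_size: int, hidden_size: int):
        super().__init__()
        self.word_embeddings = nn.Embedding(vocab_size, embedding_size)  # V x E
        self.projection = nn.Linear(embedding_size, hidden_size)         # E x H

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        return self.projection(self.word_embeddings(input_ids))

# Illustrative parameter comparison (V=30000, H=4096, E=128):
#   untied (BERT-style):  V * H         = 122.9M embedding parameters
#   factorized (ALBERT):  V * E + E * H ≈  3.8M + 0.5M ≈ 4.4M
```

Decoupling E from H matters most for the larger configurations, since the hidden size can grow without dragging the vocabulary embedding table along with it. The pretrained checkpoints linked above can also be loaded through the Hugging Face transformers library (e.g., the albert-base-v2 checkpoint), though that wrapper is separate from the official repository.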
