arXiv:2004.13922

Revisiting Pre-Trained Models for Chinese Natural Language Processing

Published on Apr 29, 2020
Authors:
Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shijin Wang, Guoping Hu

Abstract

Bidirectional Encoder Representations from Transformers (BERT) has brought remarkable improvements across various NLP tasks, and successive variants have been proposed to further improve the performance of pre-trained language models. In this paper, we revisit Chinese pre-trained language models to examine their effectiveness in a non-English language and release a series of Chinese pre-trained language models to the community. We also propose a simple but effective model called MacBERT, which improves upon RoBERTa in several ways, especially in its masking strategy, which adopts MLM as correction (Mac). We carry out extensive experiments on eight Chinese NLP tasks to revisit the existing pre-trained language models as well as the proposed MacBERT. Experimental results show that MacBERT achieves state-of-the-art performance on many NLP tasks, and we also report ablation studies with several findings that may help future research. Resources available: https://github.com/ymcui/MacBERT
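
The "MLM as correction" (Mac) strategy mentioned in the abstract replaces selected tokens with similar words instead of the artificial [MASK] token, so that pre-training inputs look like slightly corrupted natural text. The following is only a minimal sketch of that idea, not the authors' implementation: get_similar_word is a hypothetical stand-in for a word-similarity toolkit, and the whole-word and N-gram masking that MacBERT also uses are omitted for brevity.

import random

def get_similar_word(word):
    # Hypothetical stand-in: a real setup would query a word-similarity or
    # synonym resource; here we simply return the word itself.
    return word

def mac_masking(tokens, mask_ratio=0.15, seed=None):
    # Select roughly 15% of positions as prediction targets; corrupt them with
    # a similar word (80%), a random word (10%), or leave them unchanged (10%).
    rng = random.Random(seed)
    corrupted = list(tokens)
    targets = [None] * len(tokens)   # original token to recover, or None
    n_select = max(1, int(len(tokens) * mask_ratio))
    for idx in rng.sample(range(len(tokens)), n_select):
        targets[idx] = tokens[idx]
        r = rng.random()
        if r < 0.8:
            corrupted[idx] = get_similar_word(tokens[idx])  # similar-word "mask"
        elif r < 0.9:
            corrupted[idx] = rng.choice(tokens)             # random word
        # else: keep the original token
    return corrupted, targets

corrupted, targets = mac_masking("我 爱 自然 语言 处理".split(), seed=0)

For applying the released checkpoints rather than re-implementing the masking, the models can presumably be loaded with the transformers library under the standard BERT architecture (e.g., an hfl/chinese-macbert-base checkpoint); see the linked GitHub repository for the official instructions.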
