---
language:
  - zh
tags:
  - chinese
  - masked-lm
  - wikipedia
license: cc-by-sa-4.0
pipeline_tag: fill-mask
mask_token: '[MASK]'
---

# roberta-base-chinese

## Model Description

This is a RoBERTa model pre-trained on Chinese Wikipedia texts (both simplified and traditional). You can fine-tune `roberta-base-chinese` for downstream tasks, such as POS-tagging, dependency-parsing, and so on.

## How to Use

```py
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-chinese")
model = AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-base-chinese")
```
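
Since the model card declares `fill-mask` as the pipeline tag with `[MASK]` as the mask token, the loaded model and tokenizer can be wrapped in a fill-mask pipeline. The sketch below is illustrative; the example sentence is an assumption and not part of the original card:

```py
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-chinese")
model = AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-base-chinese")

# Wrap the model and tokenizer in a fill-mask pipeline
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)

# Illustrative sentence with one character replaced by [MASK]
# ("The capital of China is Bei[MASK].")
for prediction in fill_mask("中国的首都是北[MASK]。"):
    print(prediction["token_str"], round(prediction["score"], 4))
```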