|
--- |
|
language: ja |
|
license: MIT |
|
datasets: |
|
- mC4 Japanese |
|
--- |
|
|
|
# transformers-ud-japanese-electra-ginza (sudachitra-wordpiece, mC4 Japanese) |
|
|
|
This is an [ELECTRA](https://github.com/google-research/electra) model pretrained on approximately 200M Japanese sentences. |
|
|
|
The input text is tokenized by [SudachiTra](https://github.com/WorksApplications/SudachiTra) with the WordPiece subword tokenizer. |
|
See `tokenizer_config.json` for the tokenizer settings.
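
A minimal usage sketch follows. It assumes the model is published on the Hugging Face Hub under the repository name shown below (substitute the actual name if it differs) and that SudachiTra exposes an `ElectraSudachipyTokenizer` class; both are assumptions, not part of this card.

```python
from sudachitra import ElectraSudachipyTokenizer  # SudachiTra tokenizer (assumed class name)
from transformers import ElectraModel

# NOTE: the repository name is an assumption; substitute the actual Hub repository.
model_name = "megagonlabs/transformers-ud-japanese-electra-base-ginza"

tokenizer = ElectraSudachipyTokenizer.from_pretrained(model_name)
model = ElectraModel.from_pretrained(model_name)

# Encode a Japanese sentence and run it through the ELECTRA encoder.
inputs = tokenizer("銀座でランチをご一緒しましょう。", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, 768)
```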
|
|
|
## Model architecture |
|
|
|
The model architecture is the same as that of the original ELECTRA base model: 12 layers, a hidden size of 768, and 12 attention heads.
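
For reference, the configuration above can be expressed with Hugging Face `transformers` as follows. This is a sketch of the standard ELECTRA-base hyperparameters; it builds a fresh config and does not load this model's weights.

```python
from transformers import ElectraConfig

# ELECTRA-base hyperparameters matching the description above.
config = ElectraConfig(
    num_hidden_layers=12,    # 12 transformer layers
    hidden_size=768,         # 768-dimensional hidden states
    num_attention_heads=12,  # 12 attention heads
)
print(config)
```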
|
|
|
## Training data
|
|
|
This model is trained on Japanese texts extracted from [mC4](https://huggingface.co/datasets/mc4), Common Crawl's multilingual web-crawl corpus.
|
We used [Sudachi](https://github.com/WorksApplications/Sudachi) to split the texts into sentences, and also applied a simple rule-based filter to remove nonlinguistic segments from the mC4 corpus.
|
The extracted texts contain over 600M sentences in total, and we used approximately 200M of them for pretraining.
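
As an illustration only (the actual preprocessing pipeline is not part of this repository), a rule-based filter of the kind described might look like the sketch below; the character-class heuristic and the threshold are assumptions.

```python
import re

# Hypothetical filter: keep a sentence only if a sufficient fraction of its
# characters are Japanese (hiragana, katakana, or kanji). This is an
# illustrative sketch, not the filter actually used for this model.
JAPANESE_CHARS = re.compile(r"[\u3040-\u309F\u30A0-\u30FF\u4E00-\u9FFF]")

def looks_japanese(sentence: str, min_ratio: float = 0.5) -> bool:
    if not sentence:
        return False
    n_japanese = len(JAPANESE_CHARS.findall(sentence))
    return n_japanese / len(sentence) >= min_ratio

sentences = ["銀座でランチをご一緒しましょう。", "<div>header_nav_01</div>"]
print([s for s in sentences if looks_japanese(s)])  # keeps only the first
```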
|
|
|
## Licenses |
|
|
|
The pretrained models are distributed under the terms of the [MIT License](https://opensource.org/licenses/mit-license.php). |
|
|
|
## Citations |
|
|
|
- mC4 |
|
|
|
Contains information from `mC4`, which is made available under the [ODC Attribution License](https://opendatacommons.org/licenses/by/1-0/).
|
```
@article{2019t5,
    author       = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
    title        = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
    journal      = {arXiv e-prints},
    year         = {2019},
    archivePrefix = {arXiv},
    eprint       = {1910.10683},
}
```