---
datasets:
  - allenai/c4
  - legacy-datasets/mc4
language:
  - pt
pipeline_tag: text2text-generation
base_model: google-t5/t5-3b
---

# ptt5-v2-3b

## Introduction

ptt5-v2 models are pretrained T5 models tailored for the Portuguese language, obtained by continuing pretraining from Google's original checkpoints in sizes ranging from t5-small to t5-3b. These checkpoints were used to train MonoT5 rerankers for the Portuguese language, which can be found in their Hugging Face collection. For further information about the pretraining process, please refer to our paper, ptt5-v2: A Closer Look at Continued Pretraining of T5 Models for the Portuguese Language.

## Usage

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

# Load the tokenizer and the 3B-parameter ptt5-v2 checkpoint
tokenizer = T5Tokenizer.from_pretrained("unicamp-dl/ptt5-v2-3b")
model = T5ForConditionalGeneration.from_pretrained("unicamp-dl/ptt5-v2-3b")
```
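
Since these are pretrained (not instruction-tuned) checkpoints, they are typically fine-tuned for a downstream task before use. As a quick sanity check of the raw model, you can ask it to fill in a masked span. A minimal sketch, assuming the standard T5 sentinel-token setup; the Portuguese input sentence is a hypothetical example, not from the model card:

```python
# Mask a span with T5's sentinel token and let the model complete it
# ("A <extra_id_0> é a maior floresta tropical do mundo." =
#  "The <extra_id_0> is the largest tropical rainforest in the world.")
input_text = "A <extra_id_0> é a maior floresta tropical do mundo."
inputs = tokenizer(input_text, return_tensors="pt")

# Generate a short completion for the masked span
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```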

## Citation

If you use our models, please cite:

```bibtex
@article{ptt5_2020,
  title={PTT5: Pretraining and validating the T5 model on Brazilian Portuguese data},
  author={Carmo, Diedre and Piau, Marcos and Campiotti, Israel and Nogueira, Rodrigo and Lotufo, Roberto},
  journal={arXiv preprint arXiv:2008.09144},
  year={2020}
}
```