---
datasets:
- allenai/c4
- legacy-datasets/mc4
language:
- pt
pipeline_tag: text2text-generation
base_model: google-t5/t5-3b
---

# ptt5-v2-3b

## Introduction
[ptt5-v2 models](https://huggingface.co/collections/unicamp-dl/ptt5-v2-666538a650188ba00aa8d2d0) are pretrained T5 models tailored to the Portuguese language, obtained by continuing pretraining from Google's original checkpoints in sizes ranging from t5-small to t5-3b.
These checkpoints were used to train MonoT5 rerankers for the Portuguese language, which can be found in their dedicated [HuggingFace collection](https://huggingface.co/collections/unicamp-dl/monoptt5-66653981877df3ea727f720d).
For further information about the pretraining process, please refer to our paper, [ptt5-v2: A Closer Look at Continued Pretraining of T5 Models for the Portuguese Language](https://arxiv.org/abs/2406.10806).

## Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("unicamp-dl/ptt5-v2-3b")
model = T5ForConditionalGeneration.from_pretrained("unicamp-dl/ptt5-v2-3b")
```
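
Since ptt5-v2 checkpoints are general pretrained language models rather than task-specific ones, they are typically fine-tuned on a downstream task before use. Below is a minimal, illustrative sketch of running the loaded model; the Portuguese input sentence and generation settings are assumptions for demonstration, not from the original card:

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("unicamp-dl/ptt5-v2-3b")
model = T5ForConditionalGeneration.from_pretrained("unicamp-dl/ptt5-v2-3b")

# Illustrative input; the output reflects only the pretraining objective,
# since this checkpoint has not been fine-tuned for any specific task.
inputs = tokenizer("Exemplo de texto em português.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```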

## Citation
If you use our models, please cite:

```bibtex
@article{ptt5_2020,
  title={PTT5: Pretraining and validating the T5 model on Brazilian Portuguese data},
  author={Carmo, Diedre and Piau, Marcos and Campiotti, Israel and Nogueira, Rodrigo and Lotufo, Roberto},
  journal={arXiv preprint arXiv:2008.09144},
  year={2020}
}
```