# T5-spanish-nsp
T5-spanish-nsp is fine-tuned for the Next Sentence Prediction (NSP) task on the Spanish Wikipedia dataset, starting from the flax-community/spanish-t5-small checkpoint. It was introduced in the paper *How Different Is Stereotypical Bias Across Languages?* (arXiv:2307.07331) and first released in this repository.
## Model description
T5-spanish-nsp is a Transformer-based model that was fine-tuned for the Next Sentence Prediction task on 20,000 Spanish Wikipedia articles.
## Intended uses
- Apply the model to Next Sentence Prediction tasks, and compare the results with BERT models, since BERT supports this task natively (a minimal BERT-side sketch follows this list).
- See how to fine-tune a T5 model using our code.
- Check our paper to see its results.
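For the BERT comparison mentioned above, here is a minimal sketch using `BertForNextSentencePrediction`. The checkpoint `bert-base-multilingual-cased` is an assumption chosen for illustration because it was pre-trained with an NSP head; it is not part of this card, and any BERT with NSP weights could be substituted.

```python
# Minimal BERT NSP baseline for comparison (sketch).
# Assumption: bert-base-multilingual-cased, which ships with an NSP head.
import torch
from transformers import AutoTokenizer, BertForNextSentencePrediction

bert_name = "bert-base-multilingual-cased"
bert_tokenizer = AutoTokenizer.from_pretrained(bert_name)
bert_model = BertForNextSentencePrediction.from_pretrained(bert_name).eval()

inputs = bert_tokenizer(
    "En Italia, la pizza se presenta sin rebanar.",
    "Sin embargo, se sirve en rodajas en Turquía.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = bert_model(**inputs).logits  # index 0 = "is next", index 1 = "not next"
print(torch.argmax(logits, dim=-1))
```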
## How to use
You can use this model directly for next sentence prediction. Here is how to use it in PyTorch:
### Necessary Initialization
```python
import torch
from transformers import T5ForConditionalGeneration, AutoTokenizer
from huggingface_hub import hf_hub_download

class ModelNSP(torch.nn.Module):
    def __init__(self, pretrained_model, tokenizer, nsp_dim=300):
        super(ModelNSP, self).__init__()
        # Token ids the fine-tuned model generates for the two NSP labels.
        self.zero_token, self.one_token = (self.find_label_encoding(x, tokenizer).item() for x in ["0", "1"])
        self.core_model = T5ForConditionalGeneration.from_pretrained(pretrained_model)
        # Kept so the published checkpoint's keys match; not used in forward().
        self.nsp_head = torch.nn.Sequential(
            torch.nn.Linear(self.core_model.config.hidden_size, nsp_dim),
            torch.nn.Linear(nsp_dim, nsp_dim),
            torch.nn.Linear(nsp_dim, 2),
        )

    def forward(self, input_ids, attention_mask=None):
        outputs = self.core_model.generate(
            input_ids=input_ids, attention_mask=attention_mask, max_length=3,
            output_scores=True, return_dict_in_generate=True,
        )
        # Compare the scores of the "0" and "1" tokens at the second decoding step.
        logits = [torch.Tensor([score[self.zero_token], score[self.one_token]]) for score in outputs.scores[1]]
        return torch.stack(logits).softmax(dim=-1)

    @staticmethod
    def find_label_encoding(input_str, tokenizer):
        encoded_str = tokenizer.encode(input_str, add_special_tokens=False, return_tensors="pt")
        # SentencePiece may split the digit into two pieces; keep only the digit token.
        return torch.index_select(encoded_str, 1, torch.tensor([1])) if encoded_str.size(dim=1) == 2 else encoded_str

tokenizer = AutoTokenizer.from_pretrained("tolga-ozturk/t5-spanish-nsp")
model = torch.nn.DataParallel(ModelNSP("flax-community/spanish-t5-small", tokenizer).eval())
model.load_state_dict(torch.load(hf_hub_download(repo_id="tolga-ozturk/t5-spanish-nsp", filename="model_weights.bin")))
```
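A note on the `DataParallel` wrapper: `torch.nn.DataParallel` prefixes every parameter name with `module.`, and the published `model_weights.bin` appears to have been saved from such a wrapped model, so wrapping before `load_state_dict` keeps the key names aligned (this is inferred from the loading code). If you prefer an unwrapped model, strip the `module.` prefix from the state-dict keys before loading.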
### Inference
```python
batch_texts = [
    ("clasificación binaria: En Italia, la pizza se presenta sin rebanar.", "El cielo es azul."),
    ("clasificación binaria: En Italia, la pizza se presenta sin rebanar.", "Sin embargo, se sirve en rodajas en Turquía."),
]
encoded_dict = tokenizer.batch_encode_plus(
    batch_text_or_text_pairs=batch_texts, truncation="longest_first", padding=True,
    return_tensors="pt", return_attention_mask=True, max_length=256,
)
print(torch.argmax(model(encoded_dict.input_ids, attention_mask=encoded_dict.attention_mask), dim=-1))
```
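The printed tensor contains one predicted label per sentence pair. Assuming the fine-tuning used BERT-style NSP targets (0 = the second sentence follows the first, 1 = it does not; this convention is inferred rather than stated above), the batch here should yield `tensor([1, 0])`: the sky sentence is unrelated, while the Turkey sentence continues the pizza sentence. A hypothetical helper to render the indices as readable labels:

```python
# Hypothetical helper; the label semantics are an assumption (BERT-style NSP).
labels = ["is next", "not next"]
preds = torch.argmax(model(encoded_dict.input_ids, attention_mask=encoded_dict.attention_mask), dim=-1)
print([labels[int(p)] for p in preds])
```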
## Training Metrics
## BibTeX entry and citation info
```bibtex
@misc{ozturk2023different,
    title={How Different Is Stereotypical Bias Across Languages?},
    author={Ibrahim Tolga Öztürk and Rostislav Nedelchev and Christian Heumann and Esteban Garces Arias and Marius Roger and Bernd Bischl and Matthias Aßenmacher},
    year={2023},
    eprint={2307.07331},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```
This work was done with the Statistics group of Ludwig-Maximilians-Universität München; don't forget to check out their Hugging Face page for other interesting work!