# t5_interpreter

A ruT5-based model for incomplete utterance restoration, spellchecking, and text normalization of dialogue utterances.

Read more about the task here.

## Usage example

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_name = 'inkoziev/t5_interpreter'
tokenizer = T5Tokenizer.from_pretrained(model_name)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = T5ForConditionalGeneration.from_pretrained(model_name).to(device)
model.eval()

# The dialogue context, with "#" marking the utterance to restore.
t5_input = '- Тебя как зовут?\n- Мальвина #'
input_ids = tokenizer(t5_input, return_tensors='pt').input_ids.to(device)
out_ids = model.generate(input_ids=input_ids,
                         max_length=40,
                         eos_token_id=tokenizer.eos_token_id,
                         early_stopping=True)
t5_output = tokenizer.decode(out_ids[0], skip_special_tokens=True)
print(t5_output)
```
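Judging from the example above, the model expects dialogue turns prefixed with `- `, joined by newlines, with a trailing ` #` marking the utterance to restore. A minimal helper that builds such inputs could look like this (`make_t5_input` is a hypothetical name, not part of this repository):

```python
def make_t5_input(turns):
    """Format a list of dialogue turns into the model's input string.

    Assumption based on the usage example: each turn is prefixed with
    '- ', turns are separated by newlines, and ' #' terminates the
    final utterance that the model should restore.
    """
    lines = ['- ' + turn for turn in turns]
    return '\n'.join(lines) + ' #'

print(make_t5_input(['Тебя как зовут?', 'Мальвина']))
# -> '- Тебя как зовут?\n- Мальвина #'
```

The resulting string can be passed to the tokenizer exactly as `t5_input` is in the example above.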