---
license: mit
datasets:
- daily_dialog
language:
- en
---
# Useless ChitChat Language Model

A basic dialog model based on DialoGPT-small, fine-tuned on the DailyDialog dataset.
## How to use

Use it like any PyTorch language model from `transformers`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("jinymusim/dialogmodel")
model = AutoModelForCausalLM.from_pretrained("jinymusim/dialogmodel")

# Take user input
user_utterance = input('USER> ').strip()
tokenized_context = tokenizer.encode(user_utterance + tokenizer.eos_token, return_tensors='pt')

# Generate a response, limiting max_length to a reasonable size
out_response = model.generate(tokenized_context,
                              max_length=30,
                              num_beams=2,
                              no_repeat_ngram_size=2,
                              early_stopping=True,
                              pad_token_id=tokenizer.eos_token_id)

# Strip the user input from the decoded output
decoded_response = tokenizer.decode(out_response[0], skip_special_tokens=True)[len(user_utterance):]
print(f'SYSTEM> {decoded_response}')
```
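The final slice works because, for a causal language model, the generated sequence begins with the prompt tokens, so the decoded string starts with the user's utterance. A minimal sketch of that truncation step using toy strings (no model required; the example strings are invented for illustration):

```python
# Toy illustration of the truncation step above: the decoded generation
# starts with the prompt, so slicing by its length leaves only the reply.
user_utterance = "Hello, how are you?"
decoded = "Hello, how are you?I am fine, thanks."  # hypothetical decoded output
response = decoded[len(user_utterance):]
print(f'SYSTEM> {response}')  # SYSTEM> I am fine, thanks.
```

Note that minor whitespace differences introduced by detokenization can shift the boundary slightly, so the slice is an approximation rather than an exact token-level split.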