Doubts
#1
by
visionop19
- opened
Hey Altaf, I am Chandan CR. I need to use your model in a hackathon and would like to connect with you. How can I?
You can see the usage details on the model card itself. Use the quantized version of the model if you lack the hardware resources.
Sorry for the trouble. I am Chandan CR from Bangalore, studying in my 2nd year of AI and ML. I am new to this field and this is my first hackathon. Can you help me with how I can use the quantized model?
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer
path = "Mohammed-Altaf/medical_chatbot-8bit"
device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = GPT2Tokenizer.from_pretrained(path)
model = GPT2LMHeadModel.from_pretrained(path).to(device)
prompt_input = (
    "The conversation between human and AI assistant.\n"
    "[|Human|] {input}\n"
    "[|AI|]"
)
sentence = prompt_input.format_map({'input': "what is parkinson's disease?"})
inputs = tokenizer(sentence, return_tensors="pt").to(device)
with torch.no_grad():
    beam_output = model.generate(
        **inputs,
        min_new_tokens=1,
        max_length=512,
        num_beams=3,
        repetition_penalty=1.2,
        early_stopping=True,
        eos_token_id=198,
    )
print(tokenizer.decode(beam_output[0], skip_special_tokens=True))
- Using the above code you can run the quantized model. Just change the sentence variable to change the input from "what is parkinson's disease?" to anything you want, or take the input from the user and put it there.
- Convert the above code into a function and return the decoded value from the tokenizer rather than printing it.
- That should solve your problem.
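Concretely, that refactor could look something like this. It is only a sketch: `build_prompt` and `ask` are example names I chose, the prompt template and generation settings are taken from the snippet above, and the model/tokenizer are assumed to be loaded exactly as shown there.

```python
# Chat template from the snippet above.
PROMPT_TEMPLATE = (
    "The conversation between human and AI assistant.\n"
    "[|Human|] {input}\n"
    "[|AI|]"
)


def build_prompt(question: str) -> str:
    """Fill the user's question into the chat template."""
    return PROMPT_TEMPLATE.format_map({"input": question})


def ask(question: str, model, tokenizer, device: str) -> str:
    """Generate an answer and return the decoded text instead of printing it."""
    import torch  # local import so build_prompt stays dependency-free

    inputs = tokenizer(build_prompt(question), return_tensors="pt").to(device)
    with torch.no_grad():
        beam_output = model.generate(
            **inputs,
            min_new_tokens=1,
            max_length=512,
            num_beams=3,
            repetition_penalty=1.2,
            early_stopping=True,
            eos_token_id=198,  # same end-of-turn token id as in the snippet above
        )
    return tokenizer.decode(beam_output[0], skip_special_tokens=True)
```

Load the model and tokenizer as in the snippet above, then call, for example, `print(ask(input("Your question: "), model, tokenizer, device))` to take the question from the user.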
Is there any social media where I can contact you?
I am getting an error while reading your JSON file, on line 1 itself: expecting ','.