Unable to decode text
I get this error:
#14 by aryan1107 - opened
AttributeError: 'BloomTokenizerFast' object has no attribute 'tokenizer'
My code is something like this:

```python
from transformers import AutoTokenizer, BloomForCausalLM

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-1b3")
model = BloomForCausalLM.from_pretrained("bigscience/bloom-1b3")

prompt = "Today I believe we can finally"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# generate up to 30 tokens
outputs = model.generate(input_ids, do_sample=False, max_length=30)
tokenizer.batch_decode(outputs, skip_special_tokens=True)
```
Hi @aryan1107!
Thanks for your message, I just tried this script:
```python
from transformers import AutoTokenizer, BloomForCausalLM

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-1b3")
model = BloomForCausalLM.from_pretrained("bigscience/bloom-1b3")

prompt = "Today I believe we can finally"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
outputs = model.generate(input_ids, do_sample=False, max_length=30)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```
and it seems to work fine on my side. What version of transformers are you using?
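In case it helps, here is one way to check which transformers version is installed. This is a minimal stdlib-based sketch using `importlib.metadata` (the package name `"transformers"` is the only assumption; the fallback branch just reports when it is not installed):

```python
from importlib import metadata

# Look up the installed version of the transformers distribution, if any.
try:
    print(metadata.version("transformers"))
except metadata.PackageNotFoundError:
    print("transformers is not installed")
```

Running `pip show transformers` from a shell reports the same information.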
Thank you, it's working. I had made a typo, my bad.
aryan1107 changed discussion status to closed