Fix tokenizer_config.json

#9 opened by Xenova (HF staff)

Without this fix, if you do:

from transformers import pipeline
pipeline('text-generation', 'JackFram/llama-160m')

you get an error:

RecursionError: maximum recursion depth exceeded while calling a Python object

See this issue for more info.
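For context, a common cause of this particular RecursionError is a tokenizer_config.json whose "tokenizer_class" field is set to "AutoTokenizer": AutoTokenizer.from_pretrained reads that field to decide which class to instantiate, so it ends up dispatching back to itself until the recursion limit is hit. The usual fix is to name the concrete tokenizer class instead. A hypothetical sketch of such a change (the actual diff in this PR may differ):

```diff
 {
-  "tokenizer_class": "AutoTokenizer",
+  "tokenizer_class": "LlamaTokenizer",
   "model_max_length": 2048
 }
```

With a concrete class name in place, AutoTokenizer can resolve and instantiate the tokenizer directly instead of recursing.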

JackFram changed pull request status to merged

Thx for the fix!

@JackFram Happy to help :) PS: this model has the same problem which you can fix in the same way.

Gotcha, just got it fixed!
