Error

#31
by trungnd7112004 - opened

When I copy this code and run it on Google Colab, I face this error even though I have already been granted access to this model:
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b")

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))

my error:
OSError: You are trying to access a gated repo.
Make sure to have access to it at https://huggingface.co/google/gemma-7b.
401 Client Error. (Request ID: Root=1-65d760f0-43f07f3e4879c9120fbd1f40;8029bc91-0115-4f00-8aac-97662fced07f)

Cannot access gated repo for url https://huggingface.co/google/gemma-7b/resolve/main/config.json.
Repo model google/gemma-7b is gated. You must be authenticated to access it.

I'm also getting this error

I solved it. First, create a new access token on your Hugging Face account, then run this code:
" access_token = 'your token'
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b", token = access_token)
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", token = access_token)"

Google org

Hi all! Another option is to use huggingface_hub login methods as per https://huggingface.co/docs/huggingface_hub/main/en/quick-start#authentication
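
For example, a minimal sketch of that approach (both login() and notebook_login() are part of huggingface_hub; the token is whatever you created under your account settings):

from huggingface_hub import login

# Prompts for your access token; you can also pass it directly: login(token="hf_...")
# On Colab/Jupyter, notebook_login() shows a widget instead.
login()

# Once logged in, the gated repo loads without an explicit token argument.
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b")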

osanseviero changed discussion status to closed

First, go to the model's main page and accept the license agreement. Then follow the instructions above, or the ones below:

import os
os.environ["HF_TOKEN"] = 'insert your token'  # your Hugging Face access token

I have done all the above but it didn't work. I fixed it by going into Files and versions and accepting the license.

I have done all the above but it didn't work. I fixed it by going into Files and versions and accepting the license.

thank you.

I am facing the same error! Can you please tell me how to accept the license and make the Inference API work?

Why use Gemma? Use Llama 3 or even Phi!

If you still want to use it, just follow the instructions mentioned earlier, like this:

I have done all the above but it didn't work. I fixed it by going into Files and versions and accepting the license.

1. Make sure you have been granted access under the model license.
2. Make sure your token has 'read' access permission.

Edit Access Token -> Permissions -> Repos -> read

Then all of the above methods work.
I finally downloaded it through git lfs:

huggingface-cli login
git lfs clone https://huggingface.co/google/gemma-2-27b
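
After the clone finishes, the weights can be loaded straight from the local directory, as in this minimal sketch (assuming the clone landed in ./gemma-2-27b):

from transformers import AutoTokenizer, AutoModelForCausalLM

# Point from_pretrained at the locally cloned repository instead of the Hub
tokenizer = AutoTokenizer.from_pretrained("./gemma-2-27b")
model = AutoModelForCausalLM.from_pretrained("./gemma-2-27b")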


Also getting the same error trying to use the model from Colab with this line:
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b", token=userdata.get('HF_WRITE'))

The response is:
Cannot access gated repo for url https://huggingface.co/google/gemma-2b/resolve/main/config.json.
Access to model google/gemma-2b is restricted. You must be authenticated to access it. ...

Any idea what is wrong?


Hi, click log in and follow the steps. That's it.

Go to the model page:

https://huggingface.co/google/gemma-2b-it

Click on Accept License

Once the process is done, you should see the access confirmation on the model page.

