requires Pro

#57
by MatrixIA

Getting this message when doing inference:
Server meta-llama/Meta-Llama-3-70B-Instruct does not seem to support chat completion. Error: Model requires a Pro subscription; check out hf.co/pricing to learn more. Make sure to include your HF token in your query.

Just yesterday I was testing and it seemed to work (Serverless API).
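
For context, the failing request is roughly a call like the following (a minimal sketch, not necessarily the exact script; the token value is a placeholder):

```python
# Minimal sketch of a serverless chat-completion call with an explicit HF token.
# "hf_xxx" is a placeholder; substitute your own token.
from huggingface_hub import InferenceClient

client = InferenceClient(
    model="meta-llama/Meta-Llama-3-70B-Instruct",
    token="hf_xxx",
)
response = client.chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=100,
)
print(response.choices[0].message.content)
```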

I have the same problem.

Token is valid (permission: fineGrained).
Your token has been saved in your configured git credential helpers (manager).
Your token has been saved to C:\Users\Administrator\.cache\huggingface\token
Login successful
Server https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct/v1/chat/completions does not seem to support chat completion. Falling back to text generation. Error: (Request ID: k3O10Rur0ON-c9TLe0lGa)

Bad request:
Authorization header is correct, but the token seems invalid
Traceback (most recent call last):
File "C:\Users\Administrator\AppData\Roaming\Python\Python312\site-packages\huggingface_hub\utils_errors.py", line 304, in hf_raise_for_status
response.raise_for_status()
File "C:\ProgramData\miniconda3\envs\cuda_env\Lib\site-packages\requests\models.py", line 1024, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct/v1/chat/completions

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "C:\Users\Administrator\AppData\Roaming\Python\Python312\site-packages\huggingface_hub\inference_client.py", line 706, in chat_completion
data = self.post(
^^^^^^^^^^
File "C:\Users\Administrator\AppData\Roaming\Python\Python312\site-packages\huggingface_hub\inference_client.py", line 273, in post
hf_raise_for_status(response)
File "C:\Users\Administrator\AppData\Roaming\Python\Python312\site-packages\huggingface_hub\utils_errors.py", line 358, in hf_raise_for_status
raise BadRequestError(message, response=response) from e
huggingface_hub.utils._errors.BadRequestError: (Request ID: k3O10Rur0ON-c9TLe0lGa)

Bad request:
Authorization header is correct, but the token seems invalid

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\Administrator\AppData\Roaming\Python\Python312\site-packages\huggingface_hub\utils_errors.py", line 304, in hf_raise_for_status
response.raise_for_status()
File "C:\ProgramData\miniconda3\envs\cuda_env\Lib\site-packages\requests\models.py", line 1024, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "E:\Python\T5\T5Agent.py", line 37, in
response_content = llm_engine(messages)
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\AppData\Roaming\Python\Python312\site-packages\transformers\agents\llm_engine.py", line 85, in call
response = self.client.chat_completion(messages, stop=stop_sequences, max_tokens=1500)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\AppData\Roaming\Python\Python312\site-packages\huggingface_hub\inference_client.py", line 738, in chat_completion
return self.chat_completion(
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\AppData\Roaming\Python\Python312\site-packages\huggingface_hub\inference_client.py", line 770, in chat_completion
text_generation_output = self.text_generation(
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\AppData\Roaming\Python\Python312\site-packages\huggingface_hub\inference_client.py", line 2061, in text_generation
raise_text_generation_error(e)
File "C:\Users\Administrator\AppData\Roaming\Python\Python312\site-packages\huggingface_hub\inference_common.py", line 460, in raise_text_generation_error
raise http_error
File "C:\Users\Administrator\AppData\Roaming\Python\Python312\site-packages\huggingface_hub\inference_client.py", line 2032, in text_generation
bytes_output = self.post(json=payload, model=model, task="text-generation", stream=stream) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\AppData\Roaming\Python\Python312\site-packages\huggingface_hub\inference_client.py", line 273, in post
hf_raise_for_status(response)
File "C:\Users\Administrator\AppData\Roaming\Python\Python312\site-packages\huggingface_hub\utils_errors.py", line 358, in hf_raise_for_status
raise BadRequestError(message, response=response) from e
huggingface_hub.utils._errors.BadRequestError: (Request ID: vUBetFgYdy5jSneTSQgy4)

Bad request:
Authorization header is correct, but the token seems invalid
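
If it helps with debugging, here is a small sketch to confirm the token authenticates at all before retrying the request (whoami() is part of huggingface_hub and raises if the token is rejected; the token value below is a placeholder):

```python
# Debugging sketch: check which account the token resolves to.
from huggingface_hub import whoami

info = whoami(token="hf_xxx")  # placeholder; use your own token
print(info["name"])  # the username the token authenticates as
```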

Is this for running the model on the API, or is a subscription also required to run it locally?

@MatrixIA since we started serving this model on our Serverless API, it has only been available to PRO users: I'd advise getting the subscription, it's very convenient! 🤗
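
To answer the local-use question above: the subscription gates the hosted Serverless API; running the weights locally instead requires gated-repo access on the Hub plus suitable hardware, not a Pro plan. A rough sketch of local loading with transformers (assuming access to the gated repo has already been granted):

```python
# Sketch of local inference with transformers; no API subscription involved.
# The 70B model needs on the order of 140 GB of GPU memory in bf16.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3-70B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
print(pipe("Hello!", max_new_tokens=50)[0]["generated_text"])
```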

I get this error even after purchasing the Pro membership, which is $9 per month. Do I need a different license for the Serverless API?
