that kind trolling?
Ha, got you :) But seriously, the reason is that the repo was converted with broken rope config (and was converted only on request by the creator, knowing it would be broken), and once llama.cpp was fixed, I deleted it to re-do it. Look at it from the bright side, I saved you from having to redownload the whole quant :)
PS: your download should not get stuck - maybe a less broken download tool might pay off in the long term.
https://huggingface.co/aifeifei799/Llama-3.1-8B-Instruct-Fei-v1-Uncensored/tree/main/Uncensored_Test
Uncensored Test
pip install datasets openai
start you openai Server,change Uncensored_Test/harmful_behaviors.py client to you Openai Server address and api_key
Point to the local server
change Uncensored_Test/harmful_behaviors.py client to you Openai Server address and api_key
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
The test consisted of 520 questions, and there was only one question that couldn't be answered, resulting in a 99.80769230769231% response rate. The test was so extensive that it made my eyes hurt.