Error when trying to start server

#1
by antderosa - opened

After running the command:

./server -m models/"$MODEL_FILE" -c 8192

I get the following error:

{"timestamp":1698024323,"level":"INFO","function":"main","line":1324,"message":"build info","build":1408,"commit":"22c69a2"}
{"timestamp":1698024323,"level":"INFO","function":"main","line":1330,"message":"system info","n_threads":8,"n_threads_batch":-1,"total_threads":10,"system_info":"AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | "}
gguf_init_from_file: invalid magic characters .
error loading model: llama_model_loader: failed to load model from models/

llama_load_model_from_file: failed to load model
llama_init_from_gpt_params: error: failed to load model 'models/'
{"timestamp":1698024323,"level":"ERROR","function":"loadModel","line":267,"message":"unable to load model","model":"models/"}

@antderosa
If you want to use the fine-tuned model, you need to get a direct download link for the model on Hugging Face and update the shell script accordingly. For example:

MODEL_URL="https://huggingface.co/mzbac/mistral-grammar/resolve/main/Mistral-7B-grammar-checker-v1.1.Q5_K_M.gguf"
MODEL_FILE="Mistral-7B-grammar-checker-v1.1.Q5_K_M.gguf"
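The error output above shows the server was given the path `models/` with no filename, which is what happens when `$MODEL_FILE` is empty, so `gguf_init_from_file` reads the directory and fails with "invalid magic characters". A minimal sketch of the relevant part of the launch script, using the two variables from the reply (the guard and download step are illustrative, not part of the original script):

```shell
#!/bin/sh
# Variables as given in the reply above.
MODEL_URL="https://huggingface.co/mzbac/mistral-grammar/resolve/main/Mistral-7B-grammar-checker-v1.1.Q5_K_M.gguf"
MODEL_FILE="Mistral-7B-grammar-checker-v1.1.Q5_K_M.gguf"

# Guard against the failure above: if MODEL_FILE is empty, -m receives
# just "models/" and model loading fails.
: "${MODEL_FILE:?MODEL_FILE must not be empty}"

# Download the model only if it is not already present (hypothetical step;
# the original script may fetch it differently).
mkdir -p models
[ -f "models/$MODEL_FILE" ] || curl -L -o "models/$MODEL_FILE" "$MODEL_URL"

./server -m "models/$MODEL_FILE" -c 8192
```

Quoting the expansion as `"models/$MODEL_FILE"` also keeps the path intact if the filename ever contains spaces.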
