Runtime error: gguf_init_from_file: invalid magic characters 'tjgg'

gguf_init_from_file: invalid magic characters 'tjgg'
llama_model_load: error loading model: llama_model_loader: failed to load model from ./models/llama-2-13b-chat.bin
llama_load_model_from_file: failed to load model
AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 |
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/app/llama_cpp/server/__main__.py", line 88, in <module>
    main()
  File "/app/llama_cpp/server/__main__.py", line 74, in main
    app = create_app(
          ^^^^^^^^^^^
  File "/app/llama_cpp/server/app.py", line 133, in create_app
    set_llama_proxy(model_settings=model_settings)
  File "/app/llama_cpp/server/app.py", line 70, in set_llama_proxy
    _llama_proxy = LlamaProxy(models=model_settings)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/llama_cpp/server/model.py", line 29, in __init__
    self._current_model = self.load_llama_from_model_settings(
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/llama_cpp/server/model.py", line 112, in load_llama_from_model_settings
    _model = llama_cpp.Llama(
             ^^^^^^^^^^^^^^^^
  File "/app/llama_cpp/llama.py", line 318, in __init__
    self._n_vocab = self.n_vocab()
                    ^^^^^^^^^^^^^^
  File "/app/llama_cpp/llama.py", line 1657, in n_vocab
    return self._model.n_vocab()
           ^^^^^^^^^^^^^^^^^^^^^
  File "/app/llama_cpp/_internals.py", line 67, in n_vocab
    assert self.model is not None
AssertionError
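The first line is the real failure: `gguf_init_from_file` rejects the file because its first four bytes are not the `GGUF` magic. `'tjgg'` suggests `./models/llama-2-13b-chat.bin` is an older pre-GGUF (ggml/ggjt) checkpoint, which recent llama.cpp builds no longer load directly; the llama.cpp repository has shipped a `convert-llama-ggml-to-gguf.py` script for such files, though the exact script name may vary by version. As a minimal sketch (the helper name is mine, not part of llama-cpp-python), you can inspect the magic yourself before pointing the server at a file:

```python
def check_model_magic(path):
    """Return the first four bytes of a model file.

    GGUF files begin with the ASCII magic b'GGUF'; anything else
    (such as the b'tjgg' seen in the log above) means llama.cpp's
    GGUF loader will refuse the file.
    """
    with open(path, "rb") as f:
        return f.read(4)
```

For example, `check_model_magic("./models/llama-2-13b-chat.bin") == b"GGUF"` would be False for the file in this report, confirming it needs conversion (or a re-download of a GGUF build of the model) rather than a server-side fix.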
