
GGUF format files of the model vinai/PhoGPT-4B-Chat.

These files are compatible with the latest llama.cpp.

Context: I was trying to get PhoGPT to work with llama.cpp and llama-cpp-python. I found nguyenviet/PhoGPT-4B-Chat-GGUF but could not get it to work:

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="nguyenviet/PhoGPT-4B-Chat-GGUF",
    filename="*q3_k_m.gguf*",
)
```

```
...
llama_model_load: error loading model: done_getting_tensors: wrong number of tensors; expected 388, got 387
llama_load_model_from_file: failed to load model
...
```
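For what it's worth, the tensor-count error above can be checked directly: the gguf Python package (maintained in the llama.cpp repo) can read a GGUF file's metadata and tensor list. A minimal sketch, assuming a locally downloaded file (the path below is a placeholder):

```python
from gguf import GGUFReader  # pip install gguf

# Placeholder path: point this at the .gguf file you downloaded.
reader = GGUFReader("PhoGPT-4B-Chat.q3_k_m.gguf")

# The loader error complains about a tensor-count mismatch
# ("expected 388, got 387"), so count what the file actually contains.
print(f"tensor count: {len(reader.tensors)}")

# Peek at a few tensor names and shapes for a quick sanity check.
for tensor in reader.tensors[:5]:
    print(tensor.name, tensor.shape)
```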

After the issue I opened at the PhoGPT repo was resolved, I was able to create the GGUF file myself.

I figure people will want to try the model in Colab, so here it is; you don't have to create it yourself.
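A minimal usage sketch with llama-cpp-python follows. The repo id and filename are placeholders (substitute this repo's id and whichever quantization you downloaded), and the prompt template follows the instruction format published in the vinai/PhoGPT repo, so double-check it there:

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="<this-repo-id>",   # placeholder: the id of this GGUF repo
    filename="*q4_k_m.gguf",    # placeholder: pick a quantization that exists here
    n_ctx=2048,
)

# PhoGPT-4B-Chat uses an instruction-style prompt:
# "### Câu hỏi: {instruction}\n### Trả lời:"
instruction = "Viết bài văn nghị luận xã hội về an toàn giao thông"
prompt = f"### Câu hỏi: {instruction}\n### Trả lời:"

output = llm(prompt, max_tokens=512, temperature=0.7, stop=["### Câu hỏi:"])
print(output["choices"][0]["text"])
```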

Model size: 3.69B params. Architecture: mpt.