[llama2-7b-megacode2_min100](https://huggingface.co/andreaskoepf/llama2-7b-megacode2_min100) converted and quantized to GGML\
had to use an "[added_tokens.json](https://huggingface.co/andreaskoepf/llama2-7b-oasst-baseline/blob/main/added_tokens.json)" from another of their models, as the vocab size is an unusual 32007
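
For context, a GGML converter (e.g. llama.cpp's `convert.py`) typically derives the vocabulary from the SentencePiece `tokenizer.model`, which only carries the base LLaMA-2 tokens; the extra tokens have to come from an `added_tokens.json` placed next to it. The sketch below is a minimal sanity check of that count, not the exact procedure used here; the local paths and the 32000 base-vocabulary figure are assumptions.

```python
# Sketch: check that base vocab + added tokens matches the model's reported 32007,
# before running a GGML converter. Paths below are assumed, not from this repo.
import json
from pathlib import Path

import sentencepiece as spm

model_dir = Path("llama2-7b-megacode2_min100")  # local snapshot of the source model (assumed layout)
target_vocab = 32007                            # vocab size reported for this model

# The SentencePiece model only knows the base LLaMA-2 vocabulary (32000 tokens).
sp = spm.SentencePieceProcessor(model_file=str(model_dir / "tokenizer.model"))
base_vocab = sp.get_piece_size()

# added_tokens.json copied from llama2-7b-oasst-baseline, placed next to tokenizer.model.
added_path = model_dir / "added_tokens.json"
added = json.loads(added_path.read_text()) if added_path.exists() else {}

# The converter needs base vocab + added tokens to line up with the model's vocab size.
if base_vocab + len(added) != target_vocab:
    raise SystemExit(
        f"vocab mismatch: {base_vocab} + {len(added)} != {target_vocab}; "
        "put an added_tokens.json with the missing tokens next to tokenizer.model"
    )
print(f"{base_vocab} base tokens + {len(added)} added tokens = {target_vocab}")
```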