[llama2-7b-megacode2_min100](https://huggingface.co/andreaskoepf/llama2-7b-megacode2_min100), converted and quantized to GGML.
I had to use an `added_tokens.json` from another of their [models](https://huggingface.co/andreaskoepf/llama2-7b-oasst-baseline/blob/main/added_tokens.json), as the vocab size is strangely 32007.
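As a sanity check, the size mismatch can be accounted for by counting the entries in `added_tokens.json`: the standard Llama-2 tokenizer has 32000 tokens, so 32007 implies 7 added special tokens. A minimal sketch of that arithmetic (the token names below are hypothetical placeholders, not the actual contents of the borrowed file):

```python
import json

BASE_VOCAB_SIZE = 32000  # standard Llama-2 tokenizer vocabulary size

# added_tokens.json maps each added token string to its token id.
# Hypothetical example entries for illustration only:
added_tokens_json = '{"<extra_token_a>": 32000, "<extra_token_b>": 32001}'
added = json.loads(added_tokens_json)

total = BASE_VOCAB_SIZE + len(added)
print(f"{len(added)} added tokens -> total vocab size {total}")
```

With the real file from the linked repo, the total should come out to 32007, matching the embedding size baked into the model weights.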