[llama2-7b-megacode2_min100](https://huggingface.co/andreaskoepf/llama2-7b-megacode2_min100) converted and quantized to GGML
Had to use an `added_tokens.json` from another of their [models](https://huggingface.co/andreaskoepf/llama2-7b-oasst-baseline/blob/main/added_tokens.json), as the vocab size is unusually 32007.
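
A minimal sketch of how the borrowed `added_tokens.json` could be placed next to the model files before GGML conversion. It assumes the `huggingface_hub` Python package and a local copy of the checkpoint; the directory path is illustrative, and the actual conversion/quantization step (e.g. via llama.cpp's conversion script) is not shown.

```python
import shutil
from pathlib import Path

from huggingface_hub import hf_hub_download

# Local directory holding the llama2-7b-megacode2_min100 checkpoint (illustrative path).
model_dir = Path("models/llama2-7b-megacode2_min100")

# Fetch added_tokens.json from the llama2-7b-oasst-baseline repo; it defines the
# extra special tokens that account for the unusual 32007 vocab size.
added_tokens = hf_hub_download(
    repo_id="andreaskoepf/llama2-7b-oasst-baseline",
    filename="added_tokens.json",
)

# Place it alongside the tokenizer files so the conversion script can pick it up.
shutil.copy(added_tokens, model_dir / "added_tokens.json")
```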