llama2-7b-megacode2_min100 converted and quantized to GGML
had to use an "added_tokens.json" from another of their models, since the vocab size is an unusual 32007 (7 tokens beyond the standard LLaMA base vocabulary of 32000)
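For reference, a minimal sanity-check sketch (not part of the conversion itself) showing where the 32007 comes from: the base LLaMA SentencePiece model carries 32000 pieces, so the remaining 7 tokens have to be supplied via added_tokens.json for the converter to line the vocabulary up. The directory path and the exact file names below are assumptions about the downloaded model folder, not something taken from the original repo.

```python
# Sketch: verify that tokenizer.model + added_tokens.json together
# account for the vocab_size declared in config.json (32007 here).
import json
from pathlib import Path

from sentencepiece import SentencePieceProcessor

model_dir = Path("llama2-7b-megacode2_min100")  # hypothetical local path

# Base vocabulary from the SentencePiece model (expected: 32000 for LLaMA).
sp = SentencePieceProcessor(model_file=str(model_dir / "tokenizer.model"))
base_vocab = sp.get_piece_size()

# Extra tokens; if this file is missing, it can be borrowed from a sibling
# model that defines the same added tokens (as was done for this conversion).
added = json.loads((model_dir / "added_tokens.json").read_text())

# vocab_size declared by the Hugging Face config.
config = json.loads((model_dir / "config.json").read_text())

total = base_vocab + len(added)
print(f"tokenizer.model: {base_vocab}, added_tokens.json: {len(added)}, "
      f"config.json vocab_size: {config['vocab_size']}")
assert total == config["vocab_size"], (
    "Vocab mismatch: the added tokens do not account for the difference "
    "between the SentencePiece vocabulary and the declared vocab_size."
)
```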