---
base_model: core-3/kuno-royale-v2-7b
inference: false
license: cc-by-nc-4.0
model_creator: core-3
model_name: kuno-royale-v2-7b
model_type: mistral
quantized_by: core-3
---
## kuno-royale-v2-7b-GGUF
Some GGUF quants of [core-3/kuno-royale-v2-7b](https://huggingface.co/core-3/kuno-royale-v2-7b).
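
A minimal usage sketch with `huggingface_hub` and `llama-cpp-python` is shown below. The repo id and quant filename are assumptions for illustration; substitute the actual `.gguf` file listed in this repository.

```python
# Minimal sketch: download one quant from this repo and run it with llama-cpp-python.
# The repo id and filename below are assumed examples, not confirmed by the model card.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="core-3/kuno-royale-v2-7b-GGUF",   # assumed repo id for these quants
    filename="kuno-royale-v2-7b.Q4_K_M.gguf",  # hypothetical quant filename
)

# Load the GGUF model locally; adjust n_ctx and other settings to your hardware.
llm = Llama(model_path=model_path, n_ctx=4096)

out = llm("Write a short scene set in a rainy harbor town.", max_tokens=256)
print(out["choices"][0]["text"])
```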