---
base_model: core-3/kuno-royale-v2-7b
inference: false
license: cc-by-nc-4.0
model_creator: core-3
model_name: kuno-royale-v2-7b
model_type: mistral
quantized_by: core-3
---

## kuno-royale-v2-7b-GGUF

Some GGUF quants of [core-3/kuno-royale-v2-7b](https://huggingface.co/core-3/kuno-royale-v2-7b).
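
A quant from this repo can be run locally with llama.cpp or its Python bindings. Below is a minimal sketch assuming `llama-cpp-python` is installed; the `.gguf` filename is illustrative only, so substitute whichever quant file you actually download from this repository.

```python
# Minimal sketch: load a downloaded GGUF quant with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="kuno-royale-v2-7b.Q4_K_M.gguf",  # hypothetical filename; use the quant you downloaded
    n_ctx=4096,                                  # context length to allocate
)

output = llm("Hello, who are you?", max_tokens=64)
print(output["choices"][0]["text"])
```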