---
pipeline_tag: text-generation
license: apache-2.0
---

This is 01-ai's [Yi-6B-200K](https://huggingface.co/01-ai/Yi-6B-200K), converted to GGUF without quantization. No other changes were made.

The model was converted using `convert.py` from Georgi Gerganov's llama.cpp repo as it appears [here](https://github.com/ggerganov/llama.cpp/blob/898aeca90a9bb992f506234cf3b8b7f7fa28a1df/convert.py) (that is, the last change to the file was in commit `898aeca`).

All credit belongs to [01-ai](https://huggingface.co/01-ai) for training and releasing this model. Thank you!
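
For reference, a GGUF file produced this way can be loaded with `llama-cpp-python`. The snippet below is a minimal sketch; the local filename `yi-6b-200k-f16.gguf` and the context size are illustrative assumptions, not part of this repo.

```python
# Minimal sketch of loading the converted model with llama-cpp-python.
# The filename and context size are assumptions; adjust to your setup.
from llama_cpp import Llama

llm = Llama(
    model_path="yi-6b-200k-f16.gguf",  # hypothetical local path to the converted GGUF file
    n_ctx=4096,                        # the model supports much longer contexts; 4096 keeps memory modest
)

output = llm("The capital of France is", max_tokens=16)
print(output["choices"][0]["text"])
```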