---
license: apache-2.0
---
This is a 6.0bpw quantized version of [DrNicefellow/ChatAllInOne-Yi-34B-200K-V1](https://huggingface.co/DrNicefellow/ChatAllInOne-Yi-34B-200K-V1) made with [exllamav2](https://github.com/turboderp/exllamav2).
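For reference, below is a minimal loading-and-generation sketch using exllamav2's Python API. The class and method names (`ExLlamaV2Config`, `ExLlamaV2BaseGenerator`, `generate_simple`) follow the example scripts in the exllamav2 repository and may change between versions, and the local model path is an assumption, so treat this as a sketch and check the repository's current examples.

```python
# Minimal sketch: load this 6.0bpw EXL2 quant and generate a reply.
# Assumes the exllamav2 Python API as used in its example scripts; verify against the repo.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

model_dir = "./ChatAllInOne-Yi-34B-200K-V1-6.0bpw-exl2"  # local download path (assumed)

config = ExLlamaV2Config()
config.model_dir = model_dir
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # lazy cache lets load_autosplit spread layers across GPUs
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8

prompt = "USER: Hello, who are you? ASSISTANT:"  # see Prompt Format below
print(generator.generate_simple(prompt, settings, 200))
```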
## Model Details
- **Base Model**: [01-ai/Yi-34B-200K](https://huggingface.co/01-ai/Yi-34B-200K)
- **Fine-tuning Technique**: QLoRA (Quantized Low-Rank Adaptation)
- **Dataset**: [CHAT-ALL-IN-ONE-v1](https://huggingface.co/datasets/DrNicefellow/CHAT-ALL-IN-ONE-v1)
- **Tool Used for Fine-tuning**: [unsloth](https://github.com/unslothai/unsloth)
## Features
- Enhanced understanding and generation of conversational language.
- Improved performance in diverse chat scenarios, including casual, formal, and domain-specific conversations.
- Fine-tuned to maintain context and coherence over longer dialogues.
## Prompt Format
Vicuna 1.1
See the finetuning dataset for examples.
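As a concrete illustration, the snippet below builds a single-turn prompt in the Vicuna 1.1 style. The system message shown is the commonly used Vicuna default and is an assumption here, not text taken from the finetuning dataset.

```python
# Sketch of a single-turn Vicuna 1.1-style prompt; multi-turn chats append
# "</s>" after each assistant reply before the next "USER:" turn.
system = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)
user_message = "Summarize the plot of Hamlet in two sentences."
prompt = f"{system} USER: {user_message} ASSISTANT:"
```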
## License
This model is open-sourced under the [Yi License](https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE).
## Feeling Generous?
Eager to buy me a cup of $2 coffee or an iced tea? Sure, here is the link: [https://ko-fi.com/drnicefellow](https://ko-fi.com/drnicefellow). Please add a note saying which one you'd like me to drink.