|
--- |
|
license: apache-2.0 |
|
datasets: |
|
- DrNicefellow/CHAT-ALL-IN-ONE-v1 |
|
--- |
|
# ChatAllInOne_Mixtral-8x7B-v1 |
|
|
|
## Description |
|
ChatAllInOne_Mixtral-8x7B-v1 is a chat language model fine-tuned on the CHAT-ALL-IN-ONE-v1 dataset using the QLoRA technique. Built on the [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) model, this version is optimized for diverse and comprehensive chat applications.
|
|
|
## Model Details |
|
- **Base Model**: [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) |
|
- **Fine-tuning Technique**: QLoRA |
|
- **Dataset**: [CHAT-ALL-IN-ONE-v1](https://huggingface.co/datasets/DrNicefellow/CHAT-ALL-IN-ONE-v1) |
|
- **Tool Used for Fine-tuning**: [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) |
|
|
|
## Features |
|
- Enhanced understanding and generation of conversational language. |
|
- Improved performance in diverse chat scenarios, including casual, formal, and domain-specific conversations. |
|
- Fine-tuned to maintain context and coherence over longer dialogues. |
|
|
|
## Prompt Format |
|
|
|
This model uses the Vicuna 1.1 prompt format.
|
|
|
See the fine-tuning dataset for examples.
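The Vicuna 1.1 format prefixes the conversation with a system message and marks turns with `USER:` and `ASSISTANT:`. A minimal sketch of a prompt builder in Python follows; the exact system message and end-of-turn handling are assumptions, so check the fine-tuning dataset for the precise wording:

```python
# Sketch of a Vicuna 1.1-style prompt builder. The system message below is
# the common Vicuna default and is an assumption -- verify against the
# fine-tuning dataset before relying on it.
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def build_prompt(turns):
    """Build a prompt from (user, assistant) turn pairs.

    Pass None as the last assistant message to leave the prompt open,
    ready for the model to generate the next reply.
    """
    parts = [SYSTEM]
    for user_msg, assistant_msg in turns:
        parts.append(f"USER: {user_msg}")
        if assistant_msg is None:
            parts.append("ASSISTANT:")
        else:
            parts.append(f"ASSISTANT: {assistant_msg}</s>")
    return " ".join(parts)

prompt = build_prompt([("Hello, who are you?", None)])
```

The trailing `ASSISTANT:` with no reply is what cues the model to continue the conversation.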
|
|
|
|
|
## License |
|
This model is open-sourced under the Apache 2.0 License. See the LICENSE file for more details. |
|
|
|
## Discord Server |
|
|
|
Join our Discord server [here](https://discord.gg/xhcBDEM3). |
|
|
|
|
|
## Feeling Generous?
|
Eager to buy me a $2 cup of coffee or iced tea? Sure, here is the link: [https://ko-fi.com/drnicefellow](https://ko-fi.com/drnicefellow). Please add a note saying which one you want me to drink.