---
license: apache-2.0
datasets:
- DrNicefellow/CHAT-ALL-IN-ONE-v1
---
# ChatAllInOne_Mixtral-8x7B-v1

## Description
ChatAllInOne_Mixtral-8x7B-v1 is a chat language model fine-tuned on the CHAT-ALL-IN-ONE-v1 dataset using the QLoRA technique. Built on the mistralai/Mixtral-8x7B-Instruct-v0.1 base model, this version is optimized for diverse and comprehensive chat applications.
## Model Details
- Base Model: mistralai/Mixtral-8x7B-Instruct-v0.1
- Fine-tuning Technique: QLoRA (Quantized Low-Rank Adaptation)
- Dataset: CHAT-ALL-IN-ONE-v1
- Tool Used for Fine-tuning: Axolotl
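To illustrate the idea behind QLoRA (this is a conceptual sketch in plain Python, not the actual Axolotl training code): the base weights are frozen in a low-precision form, and only two small low-rank matrices A and B are trained, so the effective weight is quant(W) + B @ A. The matrix sizes and values below are made up for demonstration.

```python
def quantize(w, scale=0.1):
    # Toy "low-precision" storage: round each weight to the nearest
    # multiple of `scale`. Real QLoRA uses 4-bit NormalFloat, but the
    # principle is the same: the base weight is frozen and compressed.
    return [[round(x / scale) * scale for x in row] for row in w]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def effective_weight(w_quant, lora_a, lora_b):
    # quant(W) + B @ A — during fine-tuning only A and B receive
    # gradient updates, which is what keeps memory usage low.
    delta = matmul(lora_b, lora_a)
    return [[w_quant[i][j] + delta[i][j] for j in range(len(w_quant[0]))]
            for i in range(len(w_quant))]

# 2x2 base weight with a rank-1 adapter (B is 2x1, A is 1x2).
w = [[0.12, -0.07], [0.33, 0.48]]
wq = quantize(w)        # frozen, low-precision base
b = [[0.5], [1.0]]      # trainable
a = [[0.2, -0.1]]       # trainable
w_eff = effective_weight(wq, a, b)
```

Because only A and B are trained, the number of trainable parameters scales with the adapter rank rather than the full weight matrix, which is how a model the size of Mixtral-8x7B can be fine-tuned on modest hardware.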
## Features
- Enhanced understanding and generation of conversational language.
- Improved performance in diverse chat scenarios, including casual, formal, and domain-specific conversations.
- Fine-tuned to maintain context and coherence over longer dialogues.
## Prompt Format
Vicuna 1.1
See the finetuning dataset for examples.
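A minimal sketch of the Vicuna 1.1 layout: a system preamble, then alternating `USER:` / `ASSISTANT:` turns, with `</s>` closing each completed assistant reply. The system text below is the standard Vicuna preamble; the dataset may use a different one, so check it before relying on this exact string.

```python
SYSTEM = ("A chat between a curious user and an artificial intelligence "
          "assistant. The assistant gives helpful, detailed, and polite "
          "answers to the user's questions.")

def build_prompt(turns):
    """turns: list of (user_text, assistant_text) pairs; pass None as the
    last assistant_text to leave the prompt open for generation."""
    parts = [SYSTEM]
    for user, assistant in turns:
        parts.append(f"USER: {user}")
        if assistant is None:
            parts.append("ASSISTANT:")          # model completes from here
        else:
            parts.append(f"ASSISTANT: {assistant}</s>")
    return " ".join(parts)

prompt = build_prompt([("Hello!", None)])
```

Passing earlier (user, assistant) pairs before the final open turn carries the conversation history into the prompt, which is how multi-turn context is maintained with this format.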
## License
This model is open-sourced under the Apache 2.0 License. See the LICENSE file for more details.