WorryFree_GeneralQA_Chat_Mixtral-8x7B-v1
Description
WorryFree_GeneralQA_Chat_Mixtral-8x7B-v1 is a chat language model fine-tuned on the Quality_WorryFree_GeneralQA_Chat_Dataset-v1 dataset using the QLoRA technique. Built on the mistralai/Mixtral-8x7B-Instruct-v0.1 base model, this version is optimized for diverse, general-purpose chat applications.
Model Details
- Base Model: mistralai/Mixtral-8x7B-Instruct-v0.1
- Fine-tuning Technique: QLoRA (Quantized Low-Rank Adaptation)
- Dataset: DrNicefellow/Quality_WorryFree_GeneralQA_Chat_Dataset-v1
- Tool Used for Fine-tuning: Axolotl
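Below is a minimal usage sketch with the Hugging Face transformers library. The repository id, loading options, and generation settings are assumptions inferred from the model name, not taken from this card; adjust them to your setup.

```python
# Minimal usage sketch (assumptions noted in comments).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DrNicefellow/WorryFree_GeneralQA_Chat_Mixtral-8x7B-v1"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # spread the Mixtral-8x7B weights across available devices
    torch_dtype="auto",
)

# Vicuna 1.1-style prompt (see the Prompt Format section below).
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: What is QLoRA fine-tuning? ASSISTANT:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)

# Print only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```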
Features
- Enhanced understanding and generation of conversational language.
- Improved performance in diverse chat scenarios, including casual, formal, and domain-specific conversations.
- Fine-tuned to maintain context and coherence over longer dialogues.
Prompt Format
Vicuna 1.1
See the fine-tuning dataset for examples; the standard template is sketched below.
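For reference, the widely used Vicuna 1.1 template looks like the following; confirm the exact system prompt and turn separators against the fine-tuning dataset, as this card does not spell them out.

```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {first user message} ASSISTANT: {first model reply}</s>USER: {next user message} ASSISTANT:
```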
License
This model is open-sourced under the Apache 2.0 License. See the LICENSE file for more details.
Feeling Generous?
Eager to buy me a $2 cup of coffee or an iced tea? Sure, here is the link: https://ko-fi.com/drnicefellow. Please add a note saying which one you want me to drink.