
ChatAllInOne_Mistral7BV1

Description

ChatAllInOne_Mistral7BV1 is a chat language model fine-tuned on the CHAT-ALL-IN-ONE-v1 dataset using QLoRA with the unsloth library. Based on the unsloth/mistral-7b model, this version is optimized for diverse and comprehensive chat applications.

Model Details

Features

  • Enhanced understanding and generation of conversational language.
  • Improved performance in diverse chat scenarios, including casual, formal, and domain-specific conversations.
  • Fine-tuned to maintain context and coherence over longer dialogues.

Prompt Format

This model uses the Vicuna 1.1 prompt format. See the fine-tuning dataset for examples.
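As a rough illustration, a Vicuna 1.1-style prompt can be assembled like this (a minimal sketch; the exact system message and separators are assumptions, so check the fine-tuning dataset for the precise format this model expects):

```python
# Commonly cited Vicuna 1.1 system message (assumed; verify against the dataset).
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def build_prompt(history, user_message, system=SYSTEM):
    """Build the next prompt string.

    history: list of (user, assistant) message pairs from earlier turns.
    user_message: the new user message awaiting a reply.
    """
    parts = [system]
    for user, assistant in history:
        # Completed turns end with an EOS token in the Vicuna 1.1 convention.
        parts.append(f"USER: {user} ASSISTANT: {assistant}</s>")
    # The model continues generation after the trailing "ASSISTANT:".
    parts.append(f"USER: {user_message} ASSISTANT:")
    return " ".join(parts)
```

The returned string is then passed to the tokenizer and model for generation; the model's reply is the text it produces after the final `ASSISTANT:` marker.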

License

This model is open-sourced under the Apache 2.0 License. See the LICENSE file for more details.

Feeling Generous? 😊

Eager to buy me a $2 cup of coffee or iced tea? 🍵☕ Here is the link: https://ko-fi.com/drnicefellow. Please add a note saying which one you want me to drink.

Model size: 7.24B parameters (BF16, Safetensors)
