Update README.md
README.md (CHANGED)
@@ -9,24 +9,17 @@ license: apache-2.0
 pipeline_tag: text-generation
 ---
 
-#
-Welcome to the
+# Xander Conversational Model
+Welcome to the Xander Conversational Model repository! This model has been fine-tuned from the unsloth/llama-2-7b-chat-bnb-4bit base on the piotr25691/ultrachat-200k-alpaca dataset to enhance its conversational abilities. It is designed to provide more natural, engaging, and contextually aware responses.
 
 # Introduction
-The
+The Xander Conversational Model is an advanced NLP model aimed at improving interactive text generation. By leveraging the strengths of unsloth/llama-2-7b-chat-bnb-4bit and fine-tuning it with the extensive piotr25691/ultrachat-200k-alpaca dataset, the model is adept at generating coherent and contextually relevant conversations.
 
 # Features
 - Improved Conversational Flow: Generates more natural and engaging responses.
 - Context Awareness: Maintains context over multiple interactions.
-- Efficient: Utilizes a 4-bit quantized model for better performance with reduced computational resources.
 - Customizable: Can be further fine-tuned for specific applications or industries.
 
-# Model Details
-- Base Model: unsloth/llama-2-7b-chat-bnb-4bit
-- Fine-tuning Dataset: piotr25691/ultrachat-200k-alpaca
-- Parameters: 7 billion
-- Quantization: 4-bit, 16-bit, 32-bit
-
 # Dataset
 The model was fine-tuned on the piotr25691/ultrachat-200k-alpaca dataset, which consists of 200,000 high-quality conversational pairs. This dataset helps the model to understand and generate more nuanced and contextually appropriate responses.
 
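The fine-tuning dataset's name (piotr25691/ultrachat-200k-alpaca) suggests Alpaca-style instruction formatting. As a minimal sketch, assuming the standard Alpaca prompt template (an assumption; the card does not document the format the model actually expects), a prompt could be built like this:

```python
# Minimal sketch of an Alpaca-style prompt builder. The template below is the
# common Alpaca instruction format; whether this model expects exactly this
# layout is an assumption inferred from the dataset name, not documented here.
def build_alpaca_prompt(instruction: str, context: str = "") -> str:
    """Return an Alpaca-format prompt, with or without an input/context block."""
    if context:
        return (
            "Below is an instruction that describes a task, paired with an "
            "input that provides further context. Write a response that "
            "appropriately completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{context}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response "
        "that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

prompt = build_alpaca_prompt("Summarize the plot of Hamlet.")
print(prompt.endswith("### Response:\n"))  # True
```

The resulting string would then be tokenized and passed to a standard `transformers` `generate` call against the fine-tuned checkpoint, with the model's completion read from the text after the `### Response:` marker.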