Omartificial-Intelligence-Space committed on
Commit b22f8b9 · verified · 1 Parent(s): 6fa12ef

Update readme.md

Files changed (1)
  1. README.md +44 -0
README.md CHANGED
@@ -141,18 +141,62 @@ The following results of the **Arabic-QwQ** and **QwQ-Preview** models were anal
1. An example illustrating how base models generate Chinese responses when provided with an Arabic question:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/628f7a71dd993507cfcbe587/WheOO0ZgoCkwwKQ55kLsS.png)

> [!NOTE]
> The fine-tuned model effectively resolves the unintended Chinese interference in the responses, delivering clear and accurate answers in Arabic.

2. An example demonstrating how base models sometimes respond in English unless explicitly instructed to answer in Arabic. In contrast, the fine-tuned model responds in Arabic without any additional instruction whenever the question itself is posed in Arabic.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/628f7a71dd993507cfcbe587/GFvctZu9rN7SHgv8BQUjR.png)

> [!NOTE]
> Although the base model provides the correct answer, it is often in English, making it hard for Arabic users to follow unless they are proficient in English.

3. At times, the base models add extra context, resulting in unnecessarily lengthy answers. The fine-tuned model addresses this by delivering concise, straightforward solutions without the extra context.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/628f7a71dd993507cfcbe587/IX4oUhVZyAo4aiJQ9WQa1.png)

4. In some cases, both the base and fine-tuned models answer the query well, demonstrating that each can comprehend the question and respond accurately. Even then, the fine-tuned model aligns more closely with the needs of Arabic users.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/628f7a71dd993507cfcbe587/apKq-ip2XestSEQacunof.png)

## How to Use

To use the Arabic-QwQ model effectively:

1. **Use Unsloth for Faster Inference**
We recommend using [Unsloth](https://github.com/unslothai/unsloth) to load and run inference with the model; it is optimized for speed and performs better than standard loading methods. A minimal loading-and-inference sketch follows this list.

2. **Incorporate Prompt Templates for Structured Instructions**
For more specific instructions or complex tasks, use a prompt template to guide the model's responses. For example, structure your input like this:

```python
prompt = """Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{}

### Response:
{}"""
```

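Putting the two steps together, the following is a minimal sketch of how the model could be loaded with Unsloth and queried with the template above. It assumes Unsloth is installed and a GPU is available; the `max_seq_length`, 4-bit loading, generation settings, and the sample question are illustrative assumptions rather than prescribed values.

```python
# Minimal sketch: load Arabic-QwQ with Unsloth and query it using the `prompt`
# template defined above. Parameter values here are assumptions for illustration.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Omartificial-Intelligence-Space/Arabic-QWQ-32B-Preview",
    max_seq_length=2048,   # assumed context window for this example
    dtype=None,            # let Unsloth pick float16/bfloat16 automatically
    load_in_4bit=True,     # assumed 4-bit quantization to fit the 32B model on a single GPU
)
FastLanguageModel.for_inference(model)  # enable Unsloth's optimized inference path

# Fill the template: the Arabic question goes in the Instruction slot,
# and the Response slot is left empty for the model to complete.
question = "ما هو ناتج ضرب ٣ في ١٧؟"  # example question: "What is 3 times 17?"
inputs = tokenizer(prompt.format(question, ""), return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the fine-tuned model answers in Arabic whenever the question is in Arabic (see the examples above), no extra "respond in Arabic" instruction is needed in the template.
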
## Acknowledgments

We would like to express our gratitude to Prince Sultan University for their support in the development and fine-tuning of this model. Their contributions were invaluable in making this work possible.

## Citation

If you use this model in your research or application, please cite it as follows:

```bibtex
@misc{Arabic_QWQ,
  author      = {Omer Nacar},
  title       = {Arabic-QwQ: Fine-tuned QwQ LLM for Arabic Reasoning and Understanding},
  year        = {2024},
  url         = {https://huggingface.co/Omartificial-Intelligence-Space/Arabic-QWQ-32B-Preview},
  institution = {Prince Sultan University},
  note        = {Fine-tuned version of the QwQ-32B model for Arabic-specific tasks.}
}
```