MahmoudIbrahim committed: Update README.md
README.md CHANGED
@@ -1,3 +1,15 @@
+---
+datasets:
+- Omartificial-Intelligence-Space/Arabic-finanical-rag-embedding-dataset
+language:
+- ar
+base_model:
+- ybelkada/falcon-7b-sharded-bf16
+pipeline_tag: text-generation
+library_name: transformers
+tags:
+- finance
+---
 # Model: FalconMasr
 
 This model is based on Falcon-7B, quantized to 4-bit for efficient memory usage and fine-tuned with LoRA (Low-Rank Adaptation) for Arabic causal language modeling; it is configured specifically to improve responses in Arabic.

@@ -111,4 +123,4 @@ output = model.generate(**inputs, max_length=200,
 
 # Decode the generated output
 decoded_output = tokenizer.decode(output[0], skip_special_tokens=True)
-print(decoded_output)
+print(decoded_output)