# Turkcell-LLM-7b-v1
This model is an extended version of a Mistral-based Large Language Model (LLM) for Turkish. It was trained on a cleaned Turkish raw dataset containing 5 billion tokens. Training began with the DoRA method on this raw corpus; we then fine-tuned the model with LoRA on Turkish instruction sets built from various open-source and internal resources.
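
For reference, a minimal inference sketch is shown below. It assumes the model is published on the Hugging Face Hub under the ID `TURKCELL/Turkcell-LLM-7b-v1` and that its tokenizer ships a chat template; adjust both if your setup differs.

```python
# Minimal inference sketch; the repository ID and chat-template support are
# assumptions, not confirmed by this README excerpt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TURKCELL/Turkcell-LLM-7b-v1"  # assumed Hugging Face Hub ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Ask a question in Turkish: "What is the capital of Turkey?"
messages = [{"role": "user", "content": "Türkiye'nin başkenti neresidir?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```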
## Model Details

- **Base Model**: Mistral 7B based LLM
- **Tokenizer Extension**: Extended specifically for Turkish (see the sketch after this list)
- **Training Dataset**: Cleaned Turkish raw data with 5 billion tokens
- **Training Method**: Initial training with DoRA, followed by LoRA fine-tuning on custom Turkish instruction sets
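
The tokenizer-extension step can be illustrated with the standard `transformers` pattern below. The added tokens are hypothetical placeholders, since the actual Turkish vocabulary additions are not listed in this README.

```python
# Illustrative tokenizer-extension sketch: the new tokens below are hypothetical
# placeholders, not the actual vocabulary added for this model.
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-v0.1"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

# Add Turkish-specific tokens so frequent words stop being split into many pieces.
new_tokens = ["merhaba", "teşekkür", "İstanbul"]  # hypothetical examples
tokenizer.add_tokens(new_tokens)

# Grow the embedding matrix so the new token IDs get trainable rows.
model.resize_token_embeddings(len(tokenizer))
```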
### DoRA Configuration
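
A DoRA setup of this kind is typically expressed with the `peft` library's `LoraConfig` with `use_dora=True`. Every hyperparameter value in the sketch below is a hypothetical placeholder rather than the configuration actually used for this model.

```python
# Hypothetical DoRA configuration sketch using the peft library; the rank,
# scaling, and target modules below are placeholders, not this model's values.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")  # assumed base

dora_config = LoraConfig(
    r=64,                     # placeholder rank
    lora_alpha=16,            # placeholder scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed targets
    use_dora=True,            # weight-decomposed low-rank adaptation
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, dora_config)
model.print_trainable_parameters()
```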