Update README.md
README.md (CHANGED)
@@ -87,7 +87,7 @@ model-index:
 
-# 🚀 Meet with WiroAI/
+# 🚀 Meet with WiroAI/wiroai-turkish-llm-9b! A robust language model with more Turkish language and culture support! 🚀
 
 ## 🌟 Key Features
 
@@ -96,7 +96,7 @@ Adapted to Turkish culture and local context
 Built on Google's cutting-edge Gemma architecture
 
 📝 Model Details
-
+The model is the Turkish-speaking member of Google's innovative Gemma model family. This model has been trained using Supervised Fine-Tuning (SFT) on carefully curated high-quality Turkish instructions. Leveraging the foundations of Gemini technology, this model demonstrates superior performance in Turkish language processing tasks.
 
 ## 🔧 Technical Specifications
 
@@ -139,14 +139,14 @@ While the model demonstrates high performance in Turkish language tasks, users s
 
 | Models | MMLU TR | TruthfulQA TR | ARC TR | HellaSwag TR | GSM8K TR | WinoGrande TR | Average |
 |-----------------------------------------------------------|:-------:|:-------------:|:------:|:------------:|:--------:|:-------------:|:-------:|
-| **WiroAI/
+| **WiroAI/wiroai-turkish-llm-9b** | **59.8** | 49.9 | **53.7** | **57.0** | 66.8 | **60.6** | **58.0** |
 | selimc/OrpoGemma-2-9B-TR | 53.0 | 54.3 | 52.4 | 52.0 | 64.8 | 58.9 | 55.9 |
 | Metin/Gemma-2-9b-it-TR-DPO-V1 | 51.3 | 54.7 | 52.6 | 51.2 | 67.1 | 55.2 | 55.4 |
 | CohereForAI/aya-expanse-8b | 52.3 | 52.8 | 49.3 | 56.7 | 61.3 | 59.2 | 55.3 |
 | ytu-ce-cosmos/Turkish-Llama-8b-DPO-v0.1 | 52.0 | 57.6 | 51.0 | 53.0 | 59.8 | 58.0 | 55.2 |
 | google/gemma-2-9b-it | 51.8 | 53.0 | 52.2 | 51.5 | 63.0 | 56.2 | 54.6 |
 | Eurdem/Defne-llama3.1-8B | 52.9 | 51.2 | 47.1 | 51.6 | 59.9 | 57.5 | 53.4 |
-| **WiroAI/
+| **WiroAI/wiroai-turkish-llm-8b** | 52.4 | 49.5 | 50.1 | 54.0 | 57.5 | 57.0 | 53.4 |
 | meta-llama/Meta-Llama-3-8B-Instruct | 52.2 | 49.2 | 44.2 | 49.2 | 56.0 | 56.7 | 51.3 |
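The change above only touches the model-card prose and benchmark rows, so no usage snippet is part of this diff. For context, here is a minimal sketch of how a Gemma-2-based chat model such as WiroAI/wiroai-turkish-llm-9b is typically loaded with 🤗 Transformers; the prompt, dtype, and generation settings are illustrative assumptions, not content from the README change.

```python
# Minimal sketch (assumption: standard Gemma-2 chat usage via Transformers,
# not taken from the README diff above).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "WiroAI/wiroai-turkish-llm-9b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 fits the target hardware
    device_map="auto",
)

# Gemma-2 instruction models expect a chat template; single-turn Turkish prompt:
messages = [{"role": "user", "content": "Türkiye'nin başkenti neresidir?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=128)
# Slice off the prompt tokens so only the model's reply is decoded.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Decoding starts after the prompt length, so the script prints only the newly generated Turkish reply rather than echoing the input.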