bezir committed · verified
Commit 8b4cddd · 1 Parent(s): 3c5fc86

Update README.md

Files changed (1):
  1. README.md +9 -4
README.md CHANGED
````diff
@@ -80,10 +80,14 @@ model-index:
   verified: false
 ---
 
-<img src="https://huggingface.co/WiroAI/gemma-2-9b-it-tr/blob/main/wiro_logo.png"/>
+<div align="center">
+  <img src="https://huggingface.co/WiroAI/gemma-2-9b-it-tr/resolve/main/wiro_logo.png"
+       alt="Wiro AI Logo" width="256"/>
+</div>
 
 
-# 🌟 Meet with WiroAI/gemma-2-9b-it-tr! A robust language model with more Turkish language and culture support! 🌟
+
+# 🚀 Meet with WiroAI/gemma-2-9b-it-tr! A robust language model with more Turkish language and culture support! 🚀
 
 ## 🌟 Key Features
 
@@ -134,15 +138,16 @@ Be aware that benchmarks below
 
 | Models | MMLU TR | TruthfulQA TR | ARC TR | HellaSwag TR | GSM8K TR | WinoGrande TR | Average |
 |-----------------------------------------------------------|:-------:|:-------------:|:------:|:------------:|:--------:|:-------------:|:-------:|
-| WiroAI/gemma-2-9b-it-tr | 59.8 | 49.9 | 53.7 | 57.0 | 66.8 | 60.6 | 58.0 |
+| **WiroAI/gemma-2-9b-it-tr** | **59.8** | 49.9 | **53.7** | **57.0** | 66.8 | **60.6** | **58.0** |
 | selimc/OrpoGemma-2-9B-TR | 53.0 | 54.3 | 52.4 | 52.0 | 64.8 | 58.9 | 55.9 |
 | Metin/Gemma-2-9b-it-TR-DPO-V1 | 51.3 | 54.7 | 52.6 | 51.2 | 67.1 | 55.2 | 55.4 |
 | CohereForAI/aya-expanse-8b | 52.3 | 52.8 | 49.3 | 56.7 | 61.3 | 59.2 | 55.3 |
 | ytu-ce-cosmos/Turkish-Llama-8b-DPO-v0.1 | 52.0 | 57.6 | 51.0 | 53.0 | 59.8 | 58.0 | 55.2 |
 | google/gemma-2-9b-it | 51.8 | 53.0 | 52.2 | 51.5 | 63.0 | 56.2 | 54.6 |
 | Eurdem/Defne-llama3.1-8B | 52.9 | 51.2 | 47.1 | 51.6 | 59.9 | 57.5 | 53.4 |
+| **WiroAI/Llama-3.1-8b-instruct-tr** | 52.4 | 49.5 | 50.1 | 54 | 57.5 | 57.0 | 53.4 |
 | meta-llama/Meta-Llama-3-8B-Instruct | 52.2 | 49.2 | 44.2 | 49.2 | 56.0 | 56.7 | 51.3 |
-| WiroAI/Llama-3.1-8b-instruct-tr | 52.2 | 49.2 | 44.2 | 49.2 | 56.0 | 56.7 | 51.3 |
+
 
 Models Benchmarks are tested with
 ```python
````
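
For context on the two substantive fixes: on the Hugging Face Hub, `/blob/main/...` URLs point at the HTML file viewer while `/resolve/main/...` URLs return the raw file, so the original `<img>` tag could never render; and the old `WiroAI/Llama-3.1-8b-instruct-tr` table row duplicated the `meta-llama/Meta-Llama-3-8B-Instruct` scores verbatim, which this commit replaces with the model's own numbers. Below is a minimal sketch (illustrative only, not part of the commit; assumes the third-party `requests` package) showing the `blob` vs `resolve` URL difference:

```python
# Illustrative check: /blob/main/ serves the HTML file viewer,
# /resolve/main/ serves the raw file -- hence the <img> fix in this commit.
import requests

BASE = "https://huggingface.co/WiroAI/gemma-2-9b-it-tr"

for kind in ("blob", "resolve"):
    url = f"{BASE}/{kind}/main/wiro_logo.png"
    resp = requests.head(url, allow_redirects=True)
    # Expect an HTML content type for "blob" and an image/binary
    # content type for "resolve".
    print(f"{kind:>7}: {resp.status_code} {resp.headers.get('content-type')}")
```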