MaziyarPanahi committed
Commit 4e3255c
Parent(s): ada71cb

Update README.md (#16)


- Update README.md (e44879e9fac376a9630820306a76f4b0b17c0ff9)

Files changed (1)
  1. README.md +19 -19
README.md CHANGED
@@ -23,7 +23,7 @@ inference: false
 model_creator: MaziyarPanahi
 quantized_by: MaziyarPanahi
 model-index:
-- name: Llama-3-70B-Instruct-DPO-v0.2
+- name: calme-2.2-llama3-70b
   results:
   - task:
       type: text-generation
@@ -40,7 +40,7 @@ model-index:
       value: 72.53
       name: normalized accuracy
     source:
-      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.2
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/calme-2.2-llama3-70b
       name: Open LLM Leaderboard
   - task:
       type: text-generation
@@ -56,7 +56,7 @@ model-index:
       value: 86.22
       name: normalized accuracy
     source:
-      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.2
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/calme-2.2-llama3-70b
       name: Open LLM Leaderboard
   - task:
       type: text-generation
@@ -73,7 +73,7 @@ model-index:
       value: 80.41
       name: accuracy
     source:
-      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.2
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/calme-2.2-llama3-70b
       name: Open LLM Leaderboard
   - task:
       type: text-generation
@@ -89,7 +89,7 @@ model-index:
     - type: mc2
       value: 63.57
     source:
-      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.2
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/calme-2.2-llama3-70b
       name: Open LLM Leaderboard
   - task:
       type: text-generation
@@ -106,7 +106,7 @@ model-index:
       value: 82.79
       name: accuracy
     source:
-      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.2
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/calme-2.2-llama3-70b
       name: Open LLM Leaderboard
   - task:
       type: text-generation
@@ -123,7 +123,7 @@ model-index:
       value: 88.25
       name: accuracy
     source:
-      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.2
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/calme-2.2-llama3-70b
       name: Open LLM Leaderboard
   - task:
       type: text-generation
@@ -138,7 +138,7 @@ model-index:
       value: 82.08
       name: strict accuracy
     source:
-      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.2
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MaziyarPanahi/calme-2.2-llama3-70b
       name: Open LLM Leaderboard
   - task:
       type: text-generation
@@ -153,7 +153,7 @@ model-index:
       value: 48.57
       name: normalized accuracy
     source:
-      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.2
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MaziyarPanahi/calme-2.2-llama3-70b
       name: Open LLM Leaderboard
   - task:
       type: text-generation
@@ -168,7 +168,7 @@ model-index:
       value: 22.96
       name: exact match
     source:
-      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.2
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MaziyarPanahi/calme-2.2-llama3-70b
       name: Open LLM Leaderboard
   - task:
       type: text-generation
@@ -183,7 +183,7 @@ model-index:
       value: 12.19
       name: acc_norm
     source:
-      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.2
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MaziyarPanahi/calme-2.2-llama3-70b
       name: Open LLM Leaderboard
   - task:
       type: text-generation
@@ -198,7 +198,7 @@ model-index:
       value: 15.3
       name: acc_norm
     source:
-      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.2
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MaziyarPanahi/calme-2.2-llama3-70b
       name: Open LLM Leaderboard
   - task:
       type: text-generation
@@ -215,23 +215,23 @@ model-index:
       value: 46.74
       name: accuracy
     source:
-      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.2
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MaziyarPanahi/calme-2.2-llama3-70b
       name: Open LLM Leaderboard
 ---
 
 <img src="./llama-3-merges.webp" alt="Llama-3 DPO Logo" width="500" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
 
 
-# MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.2
+# MaziyarPanahi/calme-2.2-llama3-70b
 
 This model is a fine-tune (DPO) of `meta-llama/Meta-Llama-3-70B-Instruct` model.
 
 # ⚡ Quantized GGUF
 
-All GGUF models are available here: [MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.2-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.2-GGUF)
+All GGUF models are available here: [MaziyarPanahi/calme-2.2-llama3-70b-GGUF](https://huggingface.co/MaziyarPanahi/calme-2.2-llama3-70b-GGUF)
 
 # 🏆 [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
-Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_MaziyarPanahi__Llama-3-70B-Instruct-DPO-v0.2)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_MaziyarPanahi__calme-2.2-llama3-70b)
 
 | Metric |Value|
 |-------------------|----:|
@@ -254,7 +254,7 @@ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-le
 |GSM8k (5-shot) |88.25|
 
 **Top 10 models on the Leaderboard**
-<img src="./llama-3-70b-top-10.png" alt="Llama-3-70B finet-tuned models" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
+<img src="./llama-qwen2-leaderboard.png" alt="Llama-3-70B finet-tuned models" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
 
 # Prompt Template
 
@@ -273,7 +273,7 @@ This model uses `ChatML` prompt template:
 
 # How to use
 
-You can use this model by using `MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.2` as the model name in Hugging Face's
+You can use this model by using `MaziyarPanahi/calme-2.2-llama3-70b` as the model name in Hugging Face's
 transformers library.
 
 ```python
@@ -281,7 +281,7 @@ from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
 from transformers import pipeline
 import torch
 
-model_id = "MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.2"
+model_id = "MaziyarPanahi/calme-2.2-llama3-70b"
 
 model = AutoModelForCausalLM.from_pretrained(
     model_id,
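
For reference, the "How to use" snippet touched by this diff is truncated here. Below is a minimal, self-contained sketch (not part of the commit) of loading the renamed model with the transformers library and formatting a prompt through the tokenizer's chat template, which the card says follows ChatML. The dtype, device placement, and sampling settings are illustrative assumptions, not values taken from the README.

```python
# Illustrative sketch only (not part of this commit).
# Assumes transformers + torch are installed and enough GPU memory is
# available for a 70B model (or use the linked GGUF quantizations instead).
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
import torch

model_id = "MaziyarPanahi/calme-2.2-llama3-70b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 weights on capable GPUs
    device_map="auto",           # spread layers across available devices
)

# The card states the model uses the ChatML prompt template; the tokenizer's
# built-in chat template is expected to produce that formatting.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize what DPO fine-tuning does in one sentence."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Stream the completion token by token; sampling parameters are assumptions.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(
    inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    streamer=streamer,
)
```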