leaderboard-pt-pr-bot committed
Commit cf29b5c
1 Parent(s): 18d36a0

Adding the Open Portuguese LLM Leaderboard Evaluation Results

This is an automated PR created with https://huggingface.co/spaces/eduagarcia-temp/portuguese-leaderboard-results-to-modelcard

The purpose of this PR is to add evaluation results from the Open Portuguese LLM Leaderboard to your model card.

If you encounter any issues, please report them to https://huggingface.co/spaces/eduagarcia-temp/portuguese-leaderboard-results-to-modelcard/discussions

Files changed (1):
  1. README.md +169 -3
README.md CHANGED
@@ -1,8 +1,155 @@
 ---
-datasets:
-- adalbertojunior/openHermes_portuguese
 language:
 - pt
+datasets:
+- adalbertojunior/openHermes_portuguese
+model-index:
+- name: Llama-3-8B-Instruct-Portuguese-v0.4
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: ENEM Challenge (No Images)
+      type: eduagarcia/enem_challenge
+      split: train
+      args:
+        num_few_shot: 3
+    metrics:
+    - type: acc
+      value: 64.52
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=adalbertojunior/Llama-3-8B-Instruct-Portuguese-v0.4
+      name: Open Portuguese LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: BLUEX (No Images)
+      type: eduagarcia-temp/BLUEX_without_images
+      split: train
+      args:
+        num_few_shot: 3
+    metrics:
+    - type: acc
+      value: 49.24
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=adalbertojunior/Llama-3-8B-Instruct-Portuguese-v0.4
+      name: Open Portuguese LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: OAB Exams
+      type: eduagarcia/oab_exams
+      split: train
+      args:
+        num_few_shot: 3
+    metrics:
+    - type: acc
+      value: 41.69
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=adalbertojunior/Llama-3-8B-Instruct-Portuguese-v0.4
+      name: Open Portuguese LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: Assin2 RTE
+      type: assin2
+      split: test
+      args:
+        num_few_shot: 15
+    metrics:
+    - type: f1_macro
+      value: 90.64
+      name: f1-macro
+    source:
+      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=adalbertojunior/Llama-3-8B-Instruct-Portuguese-v0.4
+      name: Open Portuguese LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: Assin2 STS
+      type: eduagarcia/portuguese_benchmark
+      split: test
+      args:
+        num_few_shot: 15
+    metrics:
+    - type: pearson
+      value: 73.89
+      name: pearson
+    source:
+      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=adalbertojunior/Llama-3-8B-Instruct-Portuguese-v0.4
+      name: Open Portuguese LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: FaQuAD NLI
+      type: ruanchaves/faquad-nli
+      split: test
+      args:
+        num_few_shot: 15
+    metrics:
+    - type: f1_macro
+      value: 43.97
+      name: f1-macro
+    source:
+      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=adalbertojunior/Llama-3-8B-Instruct-Portuguese-v0.4
+      name: Open Portuguese LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: HateBR Binary
+      type: ruanchaves/hatebr
+      split: test
+      args:
+        num_few_shot: 25
+    metrics:
+    - type: f1_macro
+      value: 63.74
+      name: f1-macro
+    source:
+      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=adalbertojunior/Llama-3-8B-Instruct-Portuguese-v0.4
+      name: Open Portuguese LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: PT Hate Speech Binary
+      type: hate_speech_portuguese
+      split: test
+      args:
+        num_few_shot: 25
+    metrics:
+    - type: f1_macro
+      value: 66.1
+      name: f1-macro
+    source:
+      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=adalbertojunior/Llama-3-8B-Instruct-Portuguese-v0.4
+      name: Open Portuguese LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: tweetSentBR
+      type: eduagarcia/tweetsentbr_fewshot
+      split: test
+      args:
+        num_few_shot: 25
+    metrics:
+    - type: f1_macro
+      value: 59.47
+      name: f1-macro
+    source:
+      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=adalbertojunior/Llama-3-8B-Instruct-Portuguese-v0.4
+      name: Open Portuguese LLM Leaderboard
 ---
 ## Como Utilizar
 ```
@@ -43,4 +190,23 @@ outputs = pipeline(
     top_p=0.9,
 )
 print(outputs[0]["generated_text"][len(prompt):])
-```
+```
+
+
+# Open Portuguese LLM Leaderboard Evaluation Results
+
+Detailed results can be found [here](https://huggingface.co/datasets/eduagarcia-temp/llm_pt_leaderboard_raw_results/tree/main/adalbertojunior/Llama-3-8B-Instruct-Portuguese-v0.4) and on the [🚀 Open Portuguese LLM Leaderboard](https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard)
+
+|          Metric          |  Value  |
+|--------------------------|---------|
+|Average                   |**61.47**|
+|ENEM Challenge (No Images)|    64.52|
+|BLUEX (No Images)         |    49.24|
+|OAB Exams                 |    41.69|
+|Assin2 RTE                |    90.64|
+|Assin2 STS                |    73.89|
+|FaQuAD NLI                |    43.97|
+|HateBR Binary             |    63.74|
+|PT Hate Speech Binary     |    66.10|
+|tweetSentBR               |    59.47|
+
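The Average row added by this PR is consistent with the unweighted mean of the nine per-task scores. A minimal check (the simple-averaging rule is an assumption inferred from the numbers, not stated in the PR):

```python
# Scores as listed in the model-index metadata and the results table above.
scores = {
    "ENEM Challenge (No Images)": 64.52,
    "BLUEX (No Images)": 49.24,
    "OAB Exams": 41.69,
    "Assin2 RTE": 90.64,
    "Assin2 STS": 73.89,
    "FaQuAD NLI": 43.97,
    "HateBR Binary": 63.74,
    "PT Hate Speech Binary": 66.10,
    "tweetSentBR": 59.47,
}

# Unweighted mean across the nine tasks, rounded to two decimals
# as the leaderboard table displays it.
average = round(sum(scores.values()) / len(scores), 2)
print(average)  # 61.47
```

This matches the **61.47** reported in the table, so the aggregate is reproducible from the per-task values alone.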