commit files to HF hub
README.md CHANGED

@@ -31,25 +31,25 @@ model-index:
     metrics:
     - name: BLEU4 (Question Answering)
       type: bleu4_question_answering
-      value:
+      value: 26.33
     - name: ROUGE-L (Question Answering)
       type: rouge_l_question_answering
-      value:
+      value: 38.14
     - name: METEOR (Question Answering)
       type: meteor_question_answering
-      value:
+      value: 31.8
     - name: BERTScore (Question Answering)
       type: bertscore_question_answering
-      value:
+      value: 92.2
     - name: MoverScore (Question Answering)
       type: moverscore_question_answering
-      value:
+      value: 77.16
     - name: AnswerF1Score (Question Answering)
       type: answer_f1_score__question_answering
-      value:
+      value: 60.48
     - name: AnswerExactMatch (Question Answering)
       type: answer_exact_match_question_answering
-      value:
+      value: 39.34
 ---
 
 # Model Card of `lmqg/mbart-large-cc25-frquad-qa`

@@ -93,16 +93,16 @@ output = pipe("question: En quelle année a-t-on trouvé trace d'un haut fournea
 
 | | Score | Type | Dataset |
 |:-----------------|--------:|:--------|:-----------------------------------------------------------------|
-| AnswerExactMatch |
-| AnswerF1Score |
-| BERTScore |
-| Bleu_1 |
-| Bleu_2 |
-| Bleu_3 |
-| Bleu_4 |
-| METEOR |
-| MoverScore |
-| ROUGE_L |
+| AnswerExactMatch | 39.34 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| AnswerF1Score | 60.48 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| BERTScore | 92.2 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| Bleu_1 | 37.27 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| Bleu_2 | 32.61 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| Bleu_3 | 29.23 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| Bleu_4 | 26.33 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| METEOR | 31.8 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| MoverScore | 77.16 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| ROUGE_L | 38.14 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
 
 
 

@@ -117,13 +117,13 @@ The following hyperparameters were used during fine-tuning:
  - model: facebook/mbart-large-cc25
  - max_length: 512
  - max_length_output: 32
- - epoch:
- - batch:
- - lr: 0.
+ - epoch: 15
+ - batch: 32
+ - lr: 0.0002
  - fp16: False
  - random_seed: 1
- - gradient_accumulation_steps:
- - label_smoothing: 0.
+ - gradient_accumulation_steps: 2
+ - label_smoothing: 0.15
 
 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mbart-large-cc25-frquad-qa/raw/main/trainer_config.json).
 
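The context line of the second hunk references a `pipe(...)` call from the model card's usage example, which this diff truncates. As a rough illustration of how that call is typically made, here is a minimal sketch; the `text2text-generation` pipeline task, the placeholder question/context strings, and the `"question: ..., context: ..."` input format are assumptions based on common lmqg model-card conventions, not text taken from this commit.

```python
# Minimal usage sketch (assumptions: a standard transformers text2text-generation
# pipeline and an lmqg-style "question: ..., context: ..." prompt; the French
# question/context strings below are placeholders, not the model card's own example).
from transformers import pipeline

pipe = pipeline("text2text-generation", model="lmqg/mbart-large-cc25-frquad-qa")

question = "Quand la tour Eiffel a-t-elle été construite ?"  # placeholder question
context = "La tour Eiffel a été achevée en 1889 pour l'Exposition universelle de Paris."  # placeholder context

output = pipe(f"question: {question}, context: {context}")
print(output[0]["generated_text"])  # expected to be a short answer span, e.g. "1889"
```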
eval/metric.first.answer.paragraph_question.answer.lmqg_qg_frquad.default.json CHANGED

@@ -1 +1 @@
-{"validation": {"Bleu_1": 0.
+{"validation": {"Bleu_1": 0.4045265721243085, "Bleu_2": 0.35834231464728317, "Bleu_3": 0.3224689757382265, "Bleu_4": 0.2904498301545984, "AnswerF1Score": 60.947732333638, "AnswerExactMatch": 34.78670012547052, "METEOR": 0.29381562220907614, "ROUGE_L": 0.38980645663087565, "BERTScore": 0.9229774555118948, "MoverScore": 0.7561687416234889}, "test": {"Bleu_1": 0.37274341521157345, "Bleu_2": 0.3261271410730797, "Bleu_3": 0.2922922705608965, "Bleu_4": 0.2632611604554427, "AnswerF1Score": 60.483258743906056, "AnswerExactMatch": 39.33500627352572, "METEOR": 0.31795391099657566, "ROUGE_L": 0.3813527939689909, "BERTScore": 0.9220344831645862, "MoverScore": 0.77162094614282}}
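The scores written into the README above appear to be the "test" entries of this JSON file, rounded to two decimals and, for the fraction-valued metrics, scaled to percentages (e.g. Bleu_4 0.26326 becomes 26.33, while AnswerF1Score is already on a 0-100 scale). The sketch below reproduces that mapping; the `hf_hub_download` call and the greater-than-one heuristic for distinguishing 0-1 scores from 0-100 scores are assumptions made for illustration.

```python
# Sketch: derive the model-card numbers from the raw eval file committed here.
import json
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="lmqg/mbart-large-cc25-frquad-qa",
    filename="eval/metric.first.answer.paragraph_question.answer.lmqg_qg_frquad.default.json",
)
with open(path) as f:
    scores = json.load(f)["test"]  # the README table reports the test split

for name, value in scores.items():
    # Assumption: scores stored as fractions (0-1) are shown as percentages in the
    # README, while AnswerF1Score / AnswerExactMatch are already on a 0-100 scale.
    shown = value * 100 if value <= 1 else value
    print(f"{name}: {round(shown, 2)}")
```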
eval/samples.test.hyp.paragraph_question.answer.lmqg_qg_frquad.default.txt CHANGED

The diff for this file is too large to render. See raw diff.
eval/samples.validation.hyp.paragraph_question.answer.lmqg_qg_frquad.default.txt CHANGED

The diff for this file is too large to render. See raw diff.