asahi417 committed
Commit 706baf5
1 Parent(s): 1b09544

commit files to HF hub

README.md CHANGED
@@ -31,33 +31,33 @@ model-index:
   metrics:
   - name: BLEU4 (Question Answering)
     type: bleu4_question_answering
-    value: 0.05
+    value: 35.86
   - name: ROUGE-L (Question Answering)
     type: rouge_l_question_answering
-    value: 0.53
+    value: 77.74
   - name: METEOR (Question Answering)
     type: meteor_question_answering
-    value: 1.48
+    value: 55.96
   - name: BERTScore (Question Answering)
     type: bertscore_question_answering
-    value: 84.7
+    value: 97.28
   - name: MoverScore (Question Answering)
     type: moverscore_question_answering
-    value: 66.2
+    value: 92.46
   - name: AnswerF1Score (Question Answering)
     type: answer_f1_score__question_answering
-    value: 0.53
+    value: 81.57
   - name: AnswerExactMatch (Question Answering)
     type: answer_exact_match_question_answering
-    value: 0.07
+    value: 75.06
 ---
 
 # Model Card of `vocabtrimmer/mt5-small-trimmed-ko-60000-koquad-qa`
-This model is fine-tuned version of [vocabtrimmer/mt5-small-trimmed-ko-60000](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-ko-60000) for question answering task on the [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
+This model is fine-tuned version of [ckpts/mt5-small-trimmed-ko-60000](https://huggingface.co/ckpts/mt5-small-trimmed-ko-60000) for question answering task on the [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
 
 
 ### Overview
-- **Language model:** [vocabtrimmer/mt5-small-trimmed-ko-60000](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-ko-60000)
+- **Language model:** [ckpts/mt5-small-trimmed-ko-60000](https://huggingface.co/ckpts/mt5-small-trimmed-ko-60000)
 - **Language:** ko
 - **Training data:** [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) (default)
 - **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
@@ -93,16 +93,16 @@ output = pipe("question: 매드 클라운이 참가해 큰 화제를 모았던
 
 |                  |   Score | Type    | Dataset                                                          |
 |:-----------------|--------:|:--------|:-----------------------------------------------------------------|
-| AnswerExactMatch |    0.07 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
-| AnswerF1Score    |    0.53 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
-| BERTScore        |   84.7  | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
-| Bleu_1           |    0.83 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
-| Bleu_2           |    0.69 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
-| Bleu_3           |    0.47 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
-| Bleu_4           |    0.05 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
-| METEOR           |    1.48 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
-| MoverScore       |   66.2  | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
-| ROUGE_L          |    0.53 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
+| AnswerExactMatch |   75.06 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
+| AnswerF1Score    |   81.57 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
+| BERTScore        |   97.28 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
+| Bleu_1           |   71.37 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
+| Bleu_2           |   62.48 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
+| Bleu_3           |   51    | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
+| Bleu_4           |   35.86 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
+| METEOR           |   55.96 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
+| MoverScore       |   92.46 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
+| ROUGE_L          |   77.74 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
 
 
 
@@ -114,12 +114,12 @@ The following hyperparameters were used during fine-tuning:
 - input_types: ['paragraph_question']
 - output_types: ['answer']
 - prefix_types: None
-- model: vocabtrimmer/mt5-small-trimmed-ko-60000
+- model: ckpts/mt5-small-trimmed-ko-60000
 - max_length: 512
 - max_length_output: 32
-- epoch: 10
+- epoch: 15
 - batch: 32
-- lr: 0.0001
+- lr: 0.0005
 - fp16: False
 - random_seed: 1
 - gradient_accumulation_steps: 2
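
The second hunk above is anchored on the card's inference snippet (`output = pipe("question: ...")`). For reference, a minimal sketch of that usage via the `transformers` text2text pipeline; the `question` and `context` strings below are placeholders, since the card's own example is truncated in the hunk header and not reproduced in this diff:

```python
# Minimal QA inference sketch. The "question: ..., context: ..." input format
# follows the lmqg convention visible in the hunk header above; both strings
# here are placeholders, not the card's full example.
from transformers import pipeline

pipe = pipeline("text2text-generation", model="vocabtrimmer/mt5-small-trimmed-ko-60000-koquad-qa")

question = "<질문>"  # placeholder question
context = "<단락>"   # placeholder paragraph
output = pipe(f"question: {question}, context: {context}")
print(output)  # e.g. [{'generated_text': '...'}]
```
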
eval/metric.first.answer.paragraph_question.answer.lmqg_qg_koquad.default.json CHANGED
@@ -1 +1 @@
-{"validation": {"Bleu_1": 0.0068073009686840415, "Bleu_2": 0.005111497776260979, "Bleu_3": 0.002971001386257743, "Bleu_4": 0.00039100601197807793, "METEOR": 0.014455087010306053, "ROUGE_L": 0.004424524679446223, "BERTScore": 0.8473668740393592, "MoverScore": 0.6620606055202947, "AnswerF1Score": 0.4364666435426061, "AnswerExactMatch": 0.08671522719389525}, "test": {"Bleu_1": 0.008311329904207443, "Bleu_2": 0.0069253037223347335, "Bleu_3": 0.004728757032836568, "Bleu_4": 0.0005453860595603003, "METEOR": 0.014775190639889654, "ROUGE_L": 0.005340592316391333, "BERTScore": 0.8470447766689886, "MoverScore": 0.6619750696395809, "AnswerF1Score": 0.5264027220322743, "AnswerExactMatch": 0.0693721817551162}}
+{"validation": {"Bleu_1": 0.7118608328939682, "Bleu_2": 0.6261082641370145, "Bleu_3": 0.5241052545619721, "Bleu_4": 0.3834854328398384, "METEOR": 0.5577362960886979, "ROUGE_L": 0.767547458076137, "BERTScore": 0.9716366701786856, "MoverScore": 0.9203647551982136, "AnswerF1Score": 79.6981596851524, "AnswerExactMatch": 73.08359347901491}, "test": {"Bleu_1": 0.7137291089920318, "Bleu_2": 0.6248437350095181, "Bleu_3": 0.5099545735242434, "Bleu_4": 0.35863598370102473, "METEOR": 0.5595853856844631, "ROUGE_L": 0.7774229793701365, "BERTScore": 0.9728185705145407, "MoverScore": 0.924570053934317, "AnswerF1Score": 81.57176419572936, "AnswerExactMatch": 75.06070065903573}}
eval/samples.test.hyp.paragraph_question.answer.lmqg_qg_koquad.default.txt CHANGED
The diff for this file is too large to render. See raw diff
 
eval/samples.validation.hyp.paragraph_question.answer.lmqg_qg_koquad.default.txt CHANGED
The diff for this file is too large to render. See raw diff