model update
README.md CHANGED
@@ -14,11 +14,11 @@ pipeline_tag: text2text-generation
 tags:
 - question generation
 widget:
-- text: "…"
+- text: "<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records."
   example_title: "Question Generation Example 1"
-- text: "…"
+- text: "Beyonce further expanded her acting career, starring as blues singer <hl> Etta James <hl> in the 2008 musical biopic, Cadillac Records."
   example_title: "Question Generation Example 2"
-- text: "…"
+- text: "Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, <hl> Cadillac Records <hl> ."
   example_title: "Question Generation Example 3"
 model-index:
 - name: lmqg/bart-large-squad
@@ -115,29 +115,6 @@ model-index:
     - name: MoverScore
       type: moverscore
       value: 0.5604572211470809
-  - task:
-      name: Text2text Generation
-      type: text2text-generation
-    dataset:
-      name: lmqg/qg_squadshifts
-      type: default
-      args: default
-    metrics:
-    - name: BLEU4
-      type: bleu4
-      value: 0.07839941048417529
-    - name: ROUGE-L
-      type: rouge-l
-      value: 0.25357667226247294
-    - name: METEOR
-      type: meteor
-      value: 0.24046838149047955
-    - name: BERTScore
-      type: bertscore
-      value: 0.9182198703598111
-    - name: MoverScore
-      type: moverscore
-      value: 0.6274693859765924
   - task:
       name: Text2text Generation
       type: text2text-generation
@@ -299,29 +276,6 @@ model-index:
     - name: MoverScore
      type: moverscore
       value: 0.6086538514008419
-  - task:
-      name: Text2text Generation
-      type: text2text-generation
-    dataset:
-      name: lmqg/qg_subjqa
-      type: default
-      args: default
-    metrics:
-    - name: BLEU4
-      type: bleu4
-      value: 0.005121882223046874
-    - name: ROUGE-L
-      type: rouge-l
-      value: 0.1346485324169255
-    - name: METEOR
-      type: meteor
-      value: 0.13733272662214893
-    - name: BERTScore
-      type: bertscore
-      value: 0.8811488576438816
-    - name: MoverScore
-      type: moverscore
-      value: 0.5614233235005509
 ---
 
 # Language Models Fine-tuning on Question Generation: `lmqg/bart-large-squad`
@@ -346,8 +300,7 @@ model_path = 'lmqg/bart-large-squad'
 pipe = pipeline("text2text-generation", model_path)
 
 # Question Generation
-
-question = pipe(input_text)
+question = pipe('<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.')
 ```
 
 ## Evaluation Metrics
@@ -357,7 +310,7 @@ question = pipe(input_text)
 
 | Dataset | Type | BLEU4 | ROUGE-L | METEOR | BERTScore | MoverScore | Link |
 |:--------|:-----|------:|--------:|-------:|----------:|-----------:|-----:|
-| [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | default | 0.…
+| [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | default | 0.262 | 0.538 | 0.271 | 0.91 | 0.65 | [link](https://huggingface.co/lmqg/bart-large-squad/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_squad.default.json) |
 
 
 
@@ -365,18 +318,16 @@ question = pipe(input_text)
 
 | Dataset | Type | BLEU4 | ROUGE-L | METEOR | BERTScore | MoverScore | Link |
 |:--------|:-----|------:|--------:|-------:|----------:|-----------:|-----:|
-| [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | reddit | 0.…
-| [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | new_wiki | 0.…
-| [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | tripadvisor | …
-| [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | …
-| [lmqg/…
-| [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | …
-| [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | …
-| [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | …
-| [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | …
-| [lmqg/…
-| [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | amazon | 0.06530369842068952 | 0.25030985091008146 | 0.2229994442645732 | 0.9092814804525936 | 0.6086538514008419 | [link](https://huggingface.co/lmqg/bart-large-squad/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.amazon.json) |
-| [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | default | 0.005121882223046874 | 0.1346485324169255 | 0.13733272662214893 | 0.8811488576438816 | 0.5614233235005509 | [link](https://huggingface.co/lmqg/bart-large-squad/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.default.json) |
+| [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | reddit | 0.06 | 0.224 | 0.215 | 0.91 | 0.606 | [link](https://huggingface.co/lmqg/bart-large-squad/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.reddit.json) |
+| [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | new_wiki | 0.111 | 0.297 | 0.273 | 0.932 | 0.662 | [link](https://huggingface.co/lmqg/bart-large-squad/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.new_wiki.json) |
+| [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | tripadvisor | 0.0 | 0.14 | 0.137 | 0.889 | 0.56 | [link](https://huggingface.co/lmqg/bart-large-squad/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.tripadvisor.json) |
+| [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | nyt | 0.081 | 0.253 | 0.253 | 0.925 | 0.641 | [link](https://huggingface.co/lmqg/bart-large-squad/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.nyt.json) |
+| [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | restaurants | 0.0 | 0.131 | 0.124 | 0.88 | 0.554 | [link](https://huggingface.co/lmqg/bart-large-squad/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.restaurants.json) |
+| [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | electronics | 0.009 | 0.16 | 0.153 | 0.878 | 0.563 | [link](https://huggingface.co/lmqg/bart-large-squad/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.electronics.json) |
+| [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | books | 0.006 | 0.124 | 0.116 | 0.881 | 0.556 | [link](https://huggingface.co/lmqg/bart-large-squad/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.books.json) |
+| [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | movies | 0.0 | 0.125 | 0.119 | 0.875 | 0.553 | [link](https://huggingface.co/lmqg/bart-large-squad/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.movies.json) |
+| [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | grocery | 0.005 | 0.123 | 0.151 | 0.878 | 0.57 | [link](https://huggingface.co/lmqg/bart-large-squad/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.grocery.json) |
+| [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | amazon | 0.065 | 0.25 | 0.223 | 0.909 | 0.609 | [link](https://huggingface.co/lmqg/bart-large-squad/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.amazon.json) |
 
 
 ## Training hyperparameters
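The updated usage snippets in this diff assemble into one self-contained example. The following is a minimal sketch: the `from transformers import pipeline` import and the final `print` are added here for completeness and are not part of the card itself; the `<hl>` tokens mark the answer span that the generated question should target.

```python
# Minimal sketch: load the fine-tuned question-generation model and
# generate a question for a highlighted answer span.
from transformers import pipeline

model_path = 'lmqg/bart-large-squad'
pipe = pipeline("text2text-generation", model_path)

# The answer span ("Beyonce") is wrapped in <hl> highlight tokens.
question = pipe('<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.')
print(question)  # e.g. [{'generated_text': '...'}]
```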