Model Card of lmqg/mt5-base-dequad-qg
This model is a fine-tuned version of google/mt5-base for the question generation task on lmqg/qg_dequad (dataset_name: default) via lmqg.
Overview
- Language model: google/mt5-base
- Language: de
- Training data: lmqg/qg_dequad (default)
- Online Demo: https://autoqg.net/
- Repository: https://github.com/asahi417/lm-question-generation
- Paper: https://arxiv.org/abs/2210.03992
Usage
- With lmqg:

```python
from lmqg import TransformersQG

# initialize model
model = TransformersQG(language="de", model="lmqg/mt5-base-dequad-qg")

# model prediction
questions = model.generate_q(
    list_context="das erste weltweit errichtete Hermann Brehmer 1855 im niederschlesischen ''Görbersdorf'' (heute Sokołowsko, Polen).",
    list_answer="1855",
)
```
- With transformers:

```python
from transformers import pipeline

# load the question generation pipeline
pipe = pipeline("text2text-generation", "lmqg/mt5-base-dequad-qg")

# the span the question should target is marked with <hl> tokens inside the input paragraph
output = pipe("Empfangs- und Sendeantenne sollen in ihrer Polarisation übereinstimmen, andernfalls <hl> wird die Signalübertragung stark gedämpft. <hl>")
```
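For arbitrary inputs, the paragraph has to be marked up with `<hl>` tokens around the span the question should target, as in the example above. The helper below is only an illustrative sketch of that preprocessing (the function name and example sentence are made up here, and the exact highlight convention should be checked against the lmqg preprocessing code):

```python
from transformers import pipeline

def highlight_span(paragraph: str, answer: str) -> str:
    """Wrap the first occurrence of `answer` in <hl> tokens (illustrative helper, not part of lmqg)."""
    start = paragraph.find(answer)
    if start == -1:
        raise ValueError("answer not found in paragraph")
    end = start + len(answer)
    return f"{paragraph[:start]}<hl> {paragraph[start:end]} <hl>{paragraph[end:]}"

pipe = pipeline("text2text-generation", "lmqg/mt5-base-dequad-qg")
context = "Hermann Brehmer errichtete 1855 das erste Sanatorium für Lungenkranke."
print(pipe(highlight_span(context, "1855")))
```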
Evaluation
- Metric (Question Generation): raw metric file
| Metric     | Score | Type    | Dataset        |
|------------|------:|---------|----------------|
| BERTScore  | 80.39 | default | lmqg/qg_dequad |
| Bleu_1     | 10.85 | default | lmqg/qg_dequad |
| Bleu_2     |  4.61 | default | lmqg/qg_dequad |
| Bleu_3     |  2.06 | default | lmqg/qg_dequad |
| Bleu_4     |  0.87 | default | lmqg/qg_dequad |
| METEOR     | 13.65 | default | lmqg/qg_dequad |
| MoverScore | 55.73 | default | lmqg/qg_dequad |
| ROUGE_L    | 11.10 | default | lmqg/qg_dequad |
- Metric (Question & Answer Generation, Reference Answer): Each question is generated from the gold answer. raw metric file
| Metric                          | Score | Type    | Dataset        |
|---------------------------------|------:|---------|----------------|
| QAAlignedF1Score (BERTScore)    | 90.63 | default | lmqg/qg_dequad |
| QAAlignedF1Score (MoverScore)   | 65.32 | default | lmqg/qg_dequad |
| QAAlignedPrecision (BERTScore)  | 90.65 | default | lmqg/qg_dequad |
| QAAlignedPrecision (MoverScore) | 65.34 | default | lmqg/qg_dequad |
| QAAlignedRecall (BERTScore)     | 90.61 | default | lmqg/qg_dequad |
| QAAlignedRecall (MoverScore)    | 65.30 | default | lmqg/qg_dequad |
- Metric (Question & Answer Generation, Pipeline Approach): Each question is generated on the answer generated by
lmqg/mt5-base-dequad-ae
. raw metric file
| Metric                          | Score | Type    | Dataset        |
|---------------------------------|------:|---------|----------------|
| QAAlignedF1Score (BERTScore)    | 76.86 | default | lmqg/qg_dequad |
| QAAlignedF1Score (MoverScore)   | 52.96 | default | lmqg/qg_dequad |
| QAAlignedPrecision (BERTScore)  | 76.28 | default | lmqg/qg_dequad |
| QAAlignedPrecision (MoverScore) | 52.93 | default | lmqg/qg_dequad |
| QAAlignedRecall (BERTScore)     | 77.55 | default | lmqg/qg_dequad |
| QAAlignedRecall (MoverScore)    | 53.06 | default | lmqg/qg_dequad |
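As referenced above, a minimal sketch of the pipeline approach with lmqg, pairing this question generation model with the answer extraction model. The `model_ae` keyword is an assumption based on the lmqg toolkit's usage pattern; check the library documentation for the exact signature:

```python
from lmqg import TransformersQG

# question generation model paired with the answer extraction model (pipeline approach);
# the `model_ae` argument name is assumed here, not verified against this lmqg version
model = TransformersQG(
    language="de",
    model="lmqg/mt5-base-dequad-qg",
    model_ae="lmqg/mt5-base-dequad-ae",
)

# generate question-answer pairs directly from a raw paragraph
qa_pairs = model.generate_qa(
    "Empfangs- und Sendeantenne sollen in ihrer Polarisation übereinstimmen, "
    "andernfalls wird die Signalübertragung stark gedämpft."
)
print(qa_pairs)
```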
Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_dequad
- dataset_name: default
- input_types: ['paragraph_answer']
- output_types: ['question']
- prefix_types: None
- model: google/mt5-base
- max_length: 512
- max_length_output: 32
- epoch: 17
- batch: 4
- lr: 0.0005
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 16
- label_smoothing: 0.15
The full configuration can be found in the fine-tuning config file.
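The model was trained with the lmqg toolkit (see the repository above). As a rough guide only, the hyperparameters listed above map onto Hugging Face `Seq2SeqTrainingArguments` roughly as sketched below; this is not the lmqg training script, and the dataset preprocessing (building `paragraph_answer` → `question` pairs with `max_length` 512 / `max_length_output` 32 at tokenization time) is assumed:

```python
from transformers import Seq2SeqTrainingArguments

# approximate mapping of the listed hyperparameters onto the transformers
# trainer API (illustrative only, not the lmqg trainer itself)
training_args = Seq2SeqTrainingArguments(
    output_dir="mt5-base-dequad-qg",
    num_train_epochs=17,               # epoch: 17
    per_device_train_batch_size=4,     # batch: 4
    gradient_accumulation_steps=16,    # effective batch size of 64
    learning_rate=5e-4,                # lr: 0.0005
    label_smoothing_factor=0.15,       # label_smoothing: 0.15
    fp16=False,                        # fp16: False
    seed=1,                            # random_seed: 1
)
```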
Citation
```bibtex
@inproceedings{ushio-etal-2022-generative,
    title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
    author = "Ushio, Asahi  and
      Alva-Manchego, Fernando  and
      Camacho-Collados, Jose",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, U.A.E.",
    publisher = "Association for Computational Linguistics",
}
```