mt5-small-dequad-qg / eval / metric.first.answer.paragraph_sentence.question.asahi417_qg_dequad.default.json
{"validation": {"Bleu_1": 0.11027082737096937, "Bleu_2": 0.043985187854042276, "Bleu_3": 0.018655108625433382, "Bleu_4": 0.004814911969115506, "METEOR": 0.1209601564076054, "ROUGE_L": 0.10602319191111659, "BERTScore": 0.8123753564651994}, "test": {"Bleu_1": 0.09250407101958856, "Bleu_2": 0.03676776361441925, "Bleu_3": 0.013890117809736372, "Bleu_4": 0.003832263365780053, "METEOR": 0.1070446364529781, "ROUGE_L": 0.09092859610277639, "BERTScore": 0.7944489307228321}}