mt5-small-dequad-qg / eval / metric.middle.sentence.sentence_answer.question.asahi417_qg_dequad.default.json
model update (commit 14df5f5)
{"validation": {"Bleu_1": 0.10513180471064809, "Bleu_2": 0.04413386959721518, "Bleu_3": 0.018580416733130507, "Bleu_4": 8.429922086788285e-07, "METEOR": 0.10545548635687453, "ROUGE_L": 0.10277170108716055, "BERTScore": 0.7910807073855542}, "test": {"Bleu_1": 0.09784756160964449, "Bleu_2": 0.04141037086831127, "Bleu_3": 0.017262109383208275, "Bleu_4": 0.005882625214387257, "METEOR": 0.10745497159575458, "ROUGE_L": 0.09910210015939515, "BERTScore": 0.7845890840589437}}