---
license: cc-by-4.0
---

This model, based on mT5-XXL, predicts a binary label for a given article and summary for Q3 (grammar), as defined in the [SEAHORSE paper](https://arxiv.org/abs/2305.13194) (Clark et al., 2023).

It is trained similarly to the [TRUE paper (Honovich et al., 2022)](https://arxiv.org/pdf/2204.04991.pdf) on human ratings from the SEAHORSE dataset in 6 languages:

- German
- English
- Spanish
- Russian
- Turkish
- Vietnamese

The input format for the model is: "premise: ARTICLE hypothesis: SUMMARY".

There is also a smaller (mT5-L) version of this model, as well as metrics trained for each of the other 5 dimensions described in the original paper.

The full citation for the SEAHORSE paper is:

```
@misc{clark2023seahorse,
      title={SEAHORSE: A Multilingual, Multifaceted Dataset for Summarization Evaluation},
      author={Elizabeth Clark and Shruti Rijhwani and Sebastian Gehrmann and Joshua Maynez and Roee Aharoni and Vitaly Nikolaev and Thibault Sellam and Aditya Siddhant and Dipanjan Das and Ankur P. Parikh},
      year={2023},
      eprint={2305.13194},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
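As a usage illustration, here is a minimal sketch of scoring one article–summary pair with the `transformers` library, using the "premise: ARTICLE hypothesis: SUMMARY" input format described above. The checkpoint name `google/seahorse-xxl-q3` is a placeholder assumption, and reading the probability of the "1" token as the score assumes a TRUE-style "0"/"1" target, so adapt both to the actual repository id and decoding setup for this model.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder checkpoint id (assumption); replace with this model's actual repository name.
MODEL_NAME = "google/seahorse-xxl-q3"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)
model.eval()

article = "Der Eiffelturm ist ein 330 Meter hoher Eisenfachwerkturm in Paris."
summary = "The Eiffel Tower is a 330-metre tower in Paris."

# Input format from the model card: "premise: ARTICLE hypothesis: SUMMARY".
inputs = tokenizer(
    f"premise: {article} hypothesis: {summary}",
    return_tensors="pt",
    truncation=True,
    max_length=2048,  # assumed context budget; adjust as needed
)

with torch.no_grad():
    # Decode a single token and keep its scores; the model is assumed to emit
    # "1" (positive) or "0" (negative), as in TRUE-style binary metrics.
    outputs = model.generate(
        **inputs,
        max_new_tokens=1,
        output_scores=True,
        return_dict_in_generate=True,
    )

probs = outputs.scores[0].softmax(dim=-1)
one_token_id = tokenizer("1", add_special_tokens=False).input_ids[0]
print(f"P(summary is grammatical) ≈ {probs[0, one_token_id].item():.3f}")
```

The same pattern should apply to the mT5-L variant and to the metrics for the other five dimensions, with only the checkpoint name changing.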