Models and Datasets for Cross-Lingual Summarisation
Abstract
We present a cross-lingual summarisation corpus with long documents in a source language associated with multi-sentence summaries in a target language. The corpus covers twelve language pairs and directions for four European languages, namely Czech, English, French and German, and the methodology for its creation can be applied to several other languages. We derive cross-lingual document-summary instances from Wikipedia by combining lead paragraphs and articles' bodies from language-aligned Wikipedia titles. We analyse the proposed cross-lingual summarisation task with automatic metrics and validate it with a human study. To illustrate the utility of our dataset, we report experiments with multilingual pre-trained models in supervised, zero- and few-shot, and out-of-domain scenarios.
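To make the construction idea concrete, the sketch below shows one way to pair a source-language article body with the target-language lead paragraph of the language-aligned article, using the public MediaWiki API. The API parameters (`langlinks`, `extracts`, `exintro`, `explaintext`) are real, but the pairing logic and helper names are illustrative assumptions, not the authors' released pipeline.

```python
# Illustrative sketch (not the authors' code): build one cross-lingual
# (source-language document, target-language summary) instance from Wikipedia.
import requests

API = "https://{lang}.wikipedia.org/w/api.php"

def aligned_title(title: str, src: str, tgt: str) -> str | None:
    """Resolve the target-language title via Wikipedia interlanguage links."""
    r = requests.get(API.format(lang=src), params={
        "action": "query", "format": "json", "titles": title,
        "prop": "langlinks", "lllang": tgt, "lllimit": "max",
    }).json()
    page = next(iter(r["query"]["pages"].values()))
    links = page.get("langlinks", [])
    return links[0]["*"] if links else None

def plaintext(title: str, lang: str, intro_only: bool) -> str:
    """Fetch plain-text content; only the lead section if intro_only is True."""
    params = {
        "action": "query", "format": "json", "titles": title,
        "prop": "extracts", "explaintext": 1,
    }
    if intro_only:
        params["exintro"] = 1
    r = requests.get(API.format(lang=lang), params=params).json()
    page = next(iter(r["query"]["pages"].values()))
    return page.get("extract", "")

def make_instance(title_src: str, src: str, tgt: str) -> dict | None:
    """Pair the source-language article body with the target-language lead."""
    title_tgt = aligned_title(title_src, src, tgt)
    if title_tgt is None:
        return None  # no language-aligned article exists
    full_src = plaintext(title_src, src, intro_only=False)
    lead_src = plaintext(title_src, src, intro_only=True)
    lead_tgt = plaintext(title_tgt, tgt, intro_only=True)
    # The source body (article minus its own lead) serves as the long document;
    # the target-language lead paragraph serves as the cross-lingual summary.
    document = full_src[len(lead_src):].strip()
    return {"document": document, "summary": lead_tgt,
            "src_lang": src, "tgt_lang": tgt}

# Example usage: English article body paired with a German lead paragraph.
# instance = make_instance("Machine translation", src="en", tgt="de")
```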