arxiv:2202.09583

Models and Datasets for Cross-Lingual Summarisation

Published on Feb 19, 2022
Authors: Laura Perez-Beltrachini, Mirella Lapata

Abstract

We present a cross-lingual summarisation corpus with long documents in a source language associated with multi-sentence summaries in a target language. The corpus covers twelve language pairs and directions for four European languages, namely Czech, English, French and German, and the methodology for its creation can be applied to several other languages. We derive cross-lingual document-summary instances from Wikipedia by combining lead paragraphs and articles' bodies from language-aligned Wikipedia titles. We analyse the proposed cross-lingual summarisation task with automatic metrics and validate it with a human study. To illustrate the utility of our dataset, we report experiments with multi-lingual pre-trained models in supervised, zero- and few-shot, and out-of-domain scenarios.
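As a rough illustration of the construction recipe described in the abstract, the sketch below pairs a source-language article body with the lead paragraph of its language-aligned counterpart, using the public MediaWiki API. This is not the authors' pipeline; the helper names, default language pair, and use of the live API are assumptions for illustration only.

```python
# Illustrative sketch (not the authors' pipeline): build a cross-lingual
# document-summary pair from language-aligned Wikipedia titles.
import requests

API = "https://{lang}.wikipedia.org/w/api.php"


def aligned_title(title: str, src_lang: str, tgt_lang: str) -> str | None:
    """Title of the tgt_lang page linked to `title` on src_lang Wikipedia, if any."""
    resp = requests.get(
        API.format(lang=src_lang),
        params={
            "action": "query",
            "titles": title,
            "prop": "langlinks",
            "lllang": tgt_lang,
            "format": "json",
        },
    ).json()
    page = next(iter(resp["query"]["pages"].values()))
    links = page.get("langlinks", [])
    return links[0]["*"] if links else None


def plain_extract(title: str, lang: str, intro_only: bool) -> str:
    """Plain-text extract of a page; intro_only=True keeps only the lead section."""
    params = {
        "action": "query",
        "titles": title,
        "prop": "extracts",
        "explaintext": 1,
        "format": "json",
    }
    if intro_only:
        params["exintro"] = 1
    resp = requests.get(API.format(lang=lang), params=params).json()
    page = next(iter(resp["query"]["pages"].values()))
    return page.get("extract", "")


def make_instance(src_title: str, src_lang: str = "de", tgt_lang: str = "en"):
    """Pair a source-language article body with the target-language lead paragraph."""
    tgt_title = aligned_title(src_title, src_lang, tgt_lang)
    if tgt_title is None:
        return None
    summary = plain_extract(tgt_title, tgt_lang, intro_only=True)    # target-language lead
    document = plain_extract(src_title, src_lang, intro_only=False)  # source-language text
    # Note: the extract above still includes the source-language lead; separating the
    # body from the lead, as the abstract describes, is left out of this sketch.
    return {"document": document, "summary": summary}
```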
