{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:06:42.214778Z"
},
"title": "A Deep Reinforced Model for Zero-Shot Cross-Lingual Summarization with Bilingual Semantic Similarity Rewards",
"authors": [
{
"first": "Zi-Yi",
"middle": [],
"last": "Dou",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Language Technologies Institute Carnegie Mellon University",
"location": {}
},
"email": "zdou@cs.cmu.edu"
},
{
"first": "Sachin",
"middle": [],
"last": "Kumar",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Language Technologies Institute Carnegie Mellon University",
"location": {}
},
"email": "sachink@cs.cmu.edu"
},
{
"first": "Yulia",
"middle": [],
"last": "Tsvetkov",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Language Technologies Institute Carnegie Mellon University",
"location": {}
},
"email": "ytsvetko@cs.cmu.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Cross-lingual text summarization aims at generating a document summary in one language given input in another language. It is a practically important but under-explored task, primarily due to the dearth of available data. Existing methods resort to machine translation to synthesize training data, but such pipeline approaches suffer from error propagation. In this work, we propose an end-to-end cross-lingual text summarization model. The model uses reinforcement learning to directly optimize a bilingual semantic similarity metric between the summaries generated in a target language and gold summaries in a source language. We also introduce techniques to pre-train the model leveraging monolingual summarization and machine translation objectives. Experimental results in both English-Chinese and English-German cross-lingual summarization settings demonstrate the effectiveness of our methods. In addition, we find that reinforcement learning models with bilingual semantic similarity as rewards generate more fluent sentences than strong baselines. 1",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Cross-lingual text summarization aims at generating a document summary in one language given input in another language. It is a practically important but under-explored task, primarily due to the dearth of available data. Existing methods resort to machine translation to synthesize training data, but such pipeline approaches suffer from error propagation. In this work, we propose an end-to-end cross-lingual text summarization model. The model uses reinforcement learning to directly optimize a bilingual semantic similarity metric between the summaries generated in a target language and gold summaries in a source language. We also introduce techniques to pre-train the model leveraging monolingual summarization and machine translation objectives. Experimental results in both English-Chinese and English-German cross-lingual summarization settings demonstrate the effectiveness of our methods. In addition, we find that reinforcement learning models with bilingual semantic similarity as rewards generate more fluent sentences than strong baselines. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Cross-lingual text summarization (XLS) is the task of compressing a long article in one language into a summary in a different language. Due to the dearth of training corpora, standard sequence-to-sequence approaches to summarization cannot be applied to this task. Traditional approaches to XLS thus follow a pipeline, for example, summarizing the article in the source language followed by translating the summary into the target language or viceversa (Wan et al., 2010; Wan, 2011) . Both of these approaches require separately trained summarization and translation models, and suffer from error propagation (Zhu et al., 2019) . Prior studies have attempted to train XLS models in an end-to-end fashion, through knowledge distillation from pre-trained machine translation (MT) or monolingual summarization models (Ayana et al., 2018; Duan et al., 2019) , but these approaches have been only shown to work for short outputs. Alternatively, Zhu et al. (2019) proposed to automatically translate source-language summaries in the training set thereby generating pseudo-reference summaries in the target language. With this parallel dataset of source documents and target summaries , an end-to-end model is trained to simultaneously summarize and translate using a multi-task objective. Although the XLS model is trained end-to-end, it is trained on MT-generated reference translations and is still prone to compounding of translation and summarization errors.",
"cite_spans": [
{
"start": 454,
"end": 472,
"text": "(Wan et al., 2010;",
"ref_id": "BIBREF21"
},
{
"start": 473,
"end": 483,
"text": "Wan, 2011)",
"ref_id": "BIBREF20"
},
{
"start": 610,
"end": 628,
"text": "(Zhu et al., 2019)",
"ref_id": "BIBREF27"
},
{
"start": 815,
"end": 835,
"text": "(Ayana et al., 2018;",
"ref_id": "BIBREF0"
},
{
"start": 836,
"end": 854,
"text": "Duan et al., 2019)",
"ref_id": "BIBREF4"
},
{
"start": 941,
"end": 958,
"text": "Zhu et al. (2019)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we propose to train an end-to-end XLS model to directly generate target language summaries given the source articles by matching the semantics of the predictions with the semantics of the source language summaries. To achieve this, we use reinforcement learning (RL) with a bilingual semantic similarity metric as a reward (Wieting et al., 2019b) . This metric is computed between the machine-generated summary in the target language and the gold summary in the source language. Additionally, to better initialize our XLS model for RL, we propose a new multi-task pretraining objective based on machine translation and monolingual summarization to encode common information available from the two tasks. To enable the model to still differentiate between the two tasks, we add task specific tags to the input (Wu et al., 2016) .",
"cite_spans": [
{
"start": 337,
"end": 360,
"text": "(Wieting et al., 2019b)",
"ref_id": "BIBREF23"
},
{
"start": 823,
"end": 840,
"text": "(Wu et al., 2016)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We evaluate our proposed method on English-Chinese and English-German XLS test sets. These test corpora are constructed by first using an MTsystem to translate source summaries to the target language, and then being post-edited by human annotators. Experimental results demonstrate that just using our proposed pre-training method without fine-tuning with RL improves the bestperforming baseline by up to 0.8 ROUGE-L points. Applying reinforcement learning yields further improvements in performance by up to 0.5 ROUGE-L points. Through extensive analyses and human evaluation, we show that when the bilingual semantic similarity reward is used, our model generates summaries that are more accurate, longer, more fluent, and more relevant than summaries generated by baselines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this section, we describe the details of the task and our proposed approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "2"
},
{
"text": "We first formalize our task setup. We are given N articles and their summaries in the source language",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Description",
"sec_num": "2.1"
},
{
"text": "{(x (1) src , y (1) src ), . . . , (x (N ) src , y (N )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Description",
"sec_num": "2.1"
},
{
"text": "src )} as a training set. Our goal is to train a summarization model f (\u2022; \u03b8) which takes an article in the source language x src as input and generates its summary in a pre-specified target language\u0177 tgt = f (x src ; \u03b8). Here, \u03b8 are the learnable parameters of f . During training, no gold summary y (i) tgt is available. Our model consists of one encoder, denoted as E, which takes x src as input and generates its vector representation h. h is fed as input to two decoders. The first decoder D 1 predicts the summary in the target language (\u0177 tgt ) one token at a time. The second decoder D 2 predicts the translation of the input text (v tgt ). While both D 1 and D 2 are used during training, only D 1 is used for XLS at test time. Intuitively, we want the model to select parts of the input article which might be important for the summary and also translate them into the target language. To bias our model to encode this behavior, we propose the following algorithm for pre-training:",
"cite_spans": [
{
"start": 301,
"end": 304,
"text": "(i)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Description",
"sec_num": "2.1"
},
{
"text": "\u2022 Use a machine translation (MT) model to generate pseudo reference summaries (\u1ef9 tgt ) by translating y src to the target language. Then, translate\u1ef9 tgt back to the source language using a target-to-source MT model and discard the examples with high reconstruction errors, which are measured with ROUGE (Lin, 2004) scores. The details of this step can be found in Zhu et al. (2019) .",
"cite_spans": [
{
"start": 303,
"end": 314,
"text": "(Lin, 2004)",
"ref_id": "BIBREF9"
},
{
"start": 364,
"end": 381,
"text": "Zhu et al. (2019)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Description",
"sec_num": "2.1"
},
{
"text": "\u2022 Pre-train the model parameters \u03b8 using a multitask objective based on MT and monolingual summarization objectives with some simple yet effective techniques as described in \u00a72.2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Description",
"sec_num": "2.1"
},
{
"text": "\u2022 Further fine-tune the model using reinforcement learning with bilingual semantic similarity metric (Wieting et al., 2019b) as reward, which is described in \u00a72.3.",
"cite_spans": [
{
"start": 101,
"end": 124,
"text": "(Wieting et al., 2019b)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Description",
"sec_num": "2.1"
},
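{
"text": "To make the architecture described in \u00a72.1 concrete, the following is a minimal PyTorch-style sketch of the shared encoder E with the two decoders D_1 (summarization) and D_2 (translation). This is not the authors' released code: layer sizes are illustrative, and positional encodings and attention masks are omitted.\nimport torch.nn as nn\n\nclass XLSModel(nn.Module):\n    # Shared encoder E feeding two decoders: D1 for cross-lingual summarization\n    # (the only decoder used at test time) and D2 for machine translation\n    # (used only during pre-training).\n    def __init__(self, vocab_size, d_model=512, nhead=8, layers=6):\n        super().__init__()\n        self.embed = nn.Embedding(vocab_size, d_model)\n        self.encoder = nn.TransformerEncoder(\n            nn.TransformerEncoderLayer(d_model, nhead), layers)\n        self.dec_sum = nn.TransformerDecoder(\n            nn.TransformerDecoderLayer(d_model, nhead), layers)  # D1\n        self.dec_mt = nn.TransformerDecoder(\n            nn.TransformerDecoderLayer(d_model, nhead), layers)  # D2\n        self.proj = nn.Linear(d_model, vocab_size)\n\n    def forward(self, src, tgt_prefix, task='sum'):\n        h = self.encoder(self.embed(src))  # h = E(x_src)\n        dec = self.dec_sum if task == 'sum' else self.dec_mt\n        return self.proj(dec(self.embed(tgt_prefix), h))  # next-token logits",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Description",
"sec_num": "2.1"
},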
{
"text": "Here, we describe the second step of our algorithm ( Figure 2 ). The pre-training loss we use is a weighted combination of three objectives. Similarly to Zhu et al. (2019) , we use an XLS pretraining objective and an MT pre-training objective as described below with some simple but effective improvements. We also introduce an additional objective based on distilling knowledge from a monolingual summarization model. XLS Pre-training Objective (L xls ) This objective computes the cross-entropy loss of the predictions from D 1 , considering the machine-generated summaries in the target language,\u1ef9",
"cite_spans": [
{
"start": 154,
"end": 171,
"text": "Zhu et al. (2019)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [
{
"start": 53,
"end": 61,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Supervised Pre-Training Stage",
"sec_num": "2.2"
},
{
"text": "tgt as references, given x (i) src as inputs. Per sample, this loss can be formally written as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Pre-Training Stage",
"sec_num": "2.2"
},
{
"text": "L xls = M j=1 log p(\u1ef9 (i) tgt,j |\u1ef9 (i) tgt,<j , x (i) src )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Pre-Training Stage",
"sec_num": "2.2"
},
{
"text": "where M is the number of tokens in the summary i.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Pre-Training Stage",
"sec_num": "2.2"
},
{
"text": "Joint Training with Machine Translation Zhu et al. 2019argue that machine translation can be considered a special case of XLS with a compression ratio of 1:1. In line with Zhu et al. 2019, we train E and D 2 as the encoder and decoder of a translation model using an MT parallel corpus {(u",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Pre-Training Stage",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "(i) src , v",
"eq_num": "(i)"
}
],
"section": "Supervised Pre-Training Stage",
"sec_num": "2.2"
},
{
"text": "tgt )}. The goal of this step is to make the encoder have an inductive bias towards encoding information specific to translation. Similar to L xls , the machine translation objective per training sample L mt is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Pre-Training Stage",
"sec_num": "2.2"
},
{
"text": "L mt = K j=1 log p(v (i) tgt,j |v (i) tgt,<j , u (i) src )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Pre-Training Stage",
"sec_num": "2.2"
},
{
"text": "where K is the number of tokens in v",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Pre-Training Stage",
"sec_num": "2.2"
},
{
"text": "(i)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Pre-Training Stage",
"sec_num": "2.2"
},
{
"text": "tgt . The L xls and L mt objectives are inspired by Zhu et al. (2019) . We propose the following two enhancements to the model to leverage better the two objectives:",
"cite_spans": [
{
"start": 52,
"end": 69,
"text": "Zhu et al. (2019)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Pre-Training Stage",
"sec_num": "2.2"
},
{
"text": "1. We share the parameters of bottom layers of the two decoders, namely D 1 and D 2 , to share common high-level representations while the parameters of the top layers more specialized to decoding are separately trained.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Pre-Training Stage",
"sec_num": "2.2"
},
{
"text": "2. We append an artificial task tag SUM (during XLS training) and TRANS (during MT training) at the beginning of the input document to make the model aware of which kind of input it is dealing with.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Pre-Training Stage",
"sec_num": "2.2"
},
{
"text": "We show in \u00a74.1 that such simple modifications result in noticeable performance improvements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Pre-Training Stage",
"sec_num": "2.2"
},
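{
"text": "As an illustration of these two techniques, the sketch below ties the parameters of the bottom decoder layers and prepends a task tag token to the source sequence. The split point (bottom four layers, as in our setup) and the helper names are assumptions, not the released implementation.\nimport torch.nn as nn\n\ndef build_decoders(d_model=512, nhead=8, n_layers=6, n_shared=4):\n    # The bottom n_shared layers are the *same* module objects in both lists,\n    # so their parameters are tied between D1 and D2; the top layers are not.\n    shared = [nn.TransformerDecoderLayer(d_model, nhead) for _ in range(n_shared)]\n    d1 = nn.ModuleList(shared + [nn.TransformerDecoderLayer(d_model, nhead)\n                                 for _ in range(n_layers - n_shared)])\n    d2 = nn.ModuleList(shared + [nn.TransformerDecoderLayer(d_model, nhead)\n                                 for _ in range(n_layers - n_shared)])\n    return d1, d2\n\ndef add_task_tag(src_tokens, task, vocab):\n    # Prepend an artificial tag token (SUM for XLS batches, TRANS for MT\n    # batches) so the shared encoder knows which task a batch belongs to.\n    tag = vocab['<SUM>'] if task == 'sum' else vocab['<TRANS>']\n    return [tag] + list(src_tokens)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Pre-Training Stage",
"sec_num": "2.2"
},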
{
"text": "Knowledge Distillation from Monolingual Summarization To bias the encoder to identify sentences which can be most relevant to the summary, first, we use an extractive monolingual summarization method to predict the probability q i of each sentence or keyword in the input article being relevant to the summary. We then distill knowledge from this model into the encoder E by making it predict these probabilities. Concretely, we append an additional output layer to the encoder of our model and it predicts the probability p i of including the i-th sentence or word in the summary. The objective is to minimize the difference between p i and q i . We use the following loss (for each sample) for the model encoder: 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Pre-Training Stage",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L dis = 1 L L j=1 (log q j \u2212 log p j ) 2 ,",
"eq_num": "(1)"
}
],
"section": "Supervised Pre-Training Stage",
"sec_num": "2.2"
},
{
"text": "where L is the number of sentences or keywords in each article. Our final pre-training objective during the supervised pre-training stage is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Pre-Training Stage",
"sec_num": "2.2"
},
{
"text": "L sup = L xls + L mt + \u03bbL dis (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Pre-Training Stage",
"sec_num": "2.2"
},
{
"text": "where \u03bb is a hyper-parameter and is set to 10 in our experiments. Training with L mt requires an MT parallel corpus whereas the other two objectives utilize the cross-lingual summarization dataset. Pretraining algorithm alternates between the two parts of the objective using mini-batches from the two datasets as follows until convergence:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Pre-Training Stage",
"sec_num": "2.2"
},
{
"text": "1. Sample a minibatch from the MT corpus",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Pre-Training Stage",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "{(u (i) src , v",
"eq_num": "(i)"
}
],
"section": "Supervised Pre-Training Stage",
"sec_num": "2.2"
},
{
"text": "tgt )} and train the parameters of E and D 2 with L mt .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Pre-Training Stage",
"sec_num": "2.2"
},
{
"text": "2. Sample a minibatch from the XLS corpus,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Pre-Training Stage",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "{(x (i) src ,\u1ef9",
"eq_num": "(i)"
}
],
"section": "Supervised Pre-Training Stage",
"sec_num": "2.2"
},
{
"text": "tgt )} and train the parameters of E and D 1 with L xls + \u03bbL dis .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Pre-Training Stage",
"sec_num": "2.2"
},
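{
"text": "A minimal sketch of this alternating loop follows. The model interface (model.nll for the per-sample negative log-likelihood and model.relevance_log_probs for the encoder-side head) and the data iterators are placeholder assumptions; the distillation term follows equation (1).\nimport torch\n\ndef supervised_pretrain(model, mt_iter, xls_iter, optimizer, lam=10.0, steps=100000):\n    # Alternate between MT mini-batches (train E and D2) and XLS mini-batches\n    # (train E, D1 and the encoder-side relevance head).\n    for step in range(steps):\n        # Step 1: machine translation objective L_mt.\n        u_src, v_tgt = next(mt_iter)\n        loss = model.nll(u_src, v_tgt, task='mt')\n        loss.backward()\n        optimizer.step()\n        optimizer.zero_grad()\n\n        # Step 2: XLS objective L_xls plus lambda * L_dis (equation (2)).\n        x_src, y_tgt_pseudo, q = next(xls_iter)   # q: teacher relevance probabilities\n        log_p = model.relevance_log_probs(x_src)  # log p_i from the encoder head\n        l_dis = ((torch.log(q) - log_p) ** 2).mean()  # equation (1)\n        loss = model.nll(x_src, y_tgt_pseudo, task='sum') + lam * l_dis\n        loss.backward()\n        optimizer.step()\n        optimizer.zero_grad()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Pre-Training Stage",
"sec_num": "2.2"
},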
{
"text": "For XLS, the target language reference summaries (\u1ef9 tgt ) used during pre-training are automatically generated with MT models and thus they may contain errors. In this section, we describe how we further fine-tune the model using only humangenerated source language summaries (y src ) with reinforcement learning (RL). Specifically, we first feed the article x src as an input to the encoder E, and generate the target language summary\u0177 tgt using D 1 . We then compute a cross-lingual similarity metric between\u0177 tgt and y src and use it as a reward to fine-tune E and D 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reinforcement Learning Stage",
"sec_num": "2.3"
},
{
"text": "Following Paulus et al. (2018) , we adopt two different strategies to generate\u0177 tgt at each training iteration, (a)\u0177 s tgt obtained by sampling from the softmax layer at each decoding step, and (b)\u0177 g tgt obtained by greedy decoding. The RL objective per sample is given by:",
"cite_spans": [
{
"start": 10,
"end": 30,
"text": "Paulus et al. (2018)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reinforcement Learning Stage",
"sec_num": "2.3"
},
{
"text": "L rl = r(\u0177 g tgt ) \u2212 r(\u0177 s tgt ) M j=1 log p(\u0177 s tgt,i |\u0177 s tgt,<j , x),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reinforcement Learning Stage",
"sec_num": "2.3"
},
{
"text": "(3) where r(\u2022) is the reward function. To fine-tune the model, we use the following hybrid training objective:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reinforcement Learning Stage",
"sec_num": "2.3"
},
{
"text": "\u03b3L rl + (1 \u2212 \u03b3)L",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reinforcement Learning Stage",
"sec_num": "2.3"
},
{
"text": "xls , where \u03b3 is a scaling factor.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reinforcement Learning Stage",
"sec_num": "2.3"
},
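{
"text": "The sketch below spells out the self-critical update in equation (3) together with the hybrid objective. The reward function, the decoding helpers and the nll helper are placeholder names, not the paper's released code.\ndef rl_finetune_step(model, x_src, y_src, y_tgt_pseudo, reward_fn, optimizer, gamma=0.998):\n    # Self-critical baseline: the greedy summary's reward acts as a baseline\n    # for the sampled summary's reward (Paulus et al., 2018).\n    y_sample, logp_sample = model.sample(x_src)  # logp_sample = sum_j log p(y^s_j | ...)\n    y_greedy = model.greedy_decode(x_src)\n    advantage = reward_fn(y_greedy, y_src) - reward_fn(y_sample, y_src)\n    l_rl = advantage * logp_sample               # equation (3)\n\n    # Hybrid objective: gamma * L_rl + (1 - gamma) * L_xls, where L_xls is the\n    # teacher-forced loss on the machine-generated pseudo reference.\n    l_xls = model.nll(x_src, y_tgt_pseudo, task='sum')\n    loss = gamma * l_rl + (1.0 - gamma) * l_xls\n    loss.backward()\n    optimizer.step()\n    optimizer.zero_grad()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reinforcement Learning Stage",
"sec_num": "2.3"
},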
{
"text": "We train a cross-lingual similarity model (XSIM) with the best performing model in Wieting et al. (2019b) . This model is trained using an MT parallel corpus. Using XSIM, we obtain sentence representations for both\u0177 tgt and y src and treat the cosine similarity between the two representations as the reward r(\u2022).",
"cite_spans": [
{
"start": 83,
"end": 105,
"text": "Wieting et al. (2019b)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reinforcement Learning Stage",
"sec_num": "2.3"
},
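{
"text": "A sketch of the reward computation, assuming a cross-lingual sentence encoder (the XSIM model) that maps a sentence in either language to a fixed-size vector; the encode interface is an assumption.\nimport torch\nimport torch.nn.functional as F\n\ndef xsim_reward(xsim_encoder, hyp_tgt, ref_src):\n    # Embed the generated target-language summary and the gold source-language\n    # summary with the same cross-lingual encoder, then use the cosine\n    # similarity between the two sentence vectors as the reward r(.).\n    with torch.no_grad():\n        h_hyp = xsim_encoder.encode(hyp_tgt)  # shape: (d,)\n        h_ref = xsim_encoder.encode(ref_src)  # shape: (d,)\n    return F.cosine_similarity(h_hyp.unsqueeze(0), h_ref.unsqueeze(0)).item()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reinforcement Learning Stage",
"sec_num": "2.3"
},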
{
"text": "3 Experimental Setup",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reinforcement Learning Stage",
"sec_num": "2.3"
},
{
"text": "We evaluate our models on English-Chinese and English-German article-summary datasets. The English-Chinese dataset is created by Zhu et al. (2019) , constructed using the CNN/DailyMail monolingual summarization corpus (Hermann et al., 2015) . The training, validation and test sets consist of about 364K, 3K and 3K samples, respectively. The English-German dataset is our contribution, constructed from the Gigaword dataset (Rush et al., 2015). We sample 2.48M training, 2K validation and 2K test samples from the dataset. Pseudoparallel corpora for both language pairs are constructed by translating the summaries to the target language (and filtered after back-translation; see \u00a72). This is done for training, validation as well as test sets. These two pseudo-parallel training sets are used for pre-training with L xls . Translated Chinese and German summaries of the test articles are then post-edited by human annotators to construct the test set for evaluating XLS. We refer the readers to (Zhu et al., 2019) for more details. For the English-Chinese dataset, we use word-based segmentation for the source (articles in English) and character-based segmentation for the target (summaries in Chinese) as in (Zhu et al., 2019) . For the English-German dataset, byte-pair encoding is used (Sennrich et al., 2016) with 60K merge operations. For machine translation and training the XSIM model, we sub-sample 5M sentences from the WMT2017 Chinese-English and WMT2014 German-English training dataset (Bojar et al., 2014 (Bojar et al., , 2017 .",
"cite_spans": [
{
"start": 129,
"end": 146,
"text": "Zhu et al. (2019)",
"ref_id": "BIBREF27"
},
{
"start": 218,
"end": 240,
"text": "(Hermann et al., 2015)",
"ref_id": "BIBREF6"
},
{
"start": 996,
"end": 1014,
"text": "(Zhu et al., 2019)",
"ref_id": "BIBREF27"
},
{
"start": 1211,
"end": 1229,
"text": "(Zhu et al., 2019)",
"ref_id": "BIBREF27"
},
{
"start": 1291,
"end": 1314,
"text": "(Sennrich et al., 2016)",
"ref_id": "BIBREF18"
},
{
"start": 1499,
"end": 1518,
"text": "(Bojar et al., 2014",
"ref_id": "BIBREF1"
},
{
"start": 1519,
"end": 1540,
"text": "(Bojar et al., , 2017",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.1"
},
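{
"text": "A sketch of the round-trip filtering used to build these pseudo-parallel corpora (see \u00a72). The translation functions are placeholders, the filtering threshold is illustrative (Zhu et al. (2019) describe the actual filtering), and the rouge_score package is an assumed stand-in for the ROUGE implementation.\nfrom rouge_score import rouge_scorer\n\ndef build_pseudo_parallel(pairs, translate_s2t, translate_t2s, threshold=0.45):\n    # pairs: (source article, source summary). Translate each source summary to\n    # the target language, back-translate it, and keep the example only if the\n    # round trip preserves enough content (ROUGE-L F1 against the original).\n    scorer = rouge_scorer.RougeScorer(['rougeL'], use_stemmer=True)\n    kept = []\n    for x_src, y_src in pairs:\n        y_tgt_pseudo = translate_s2t(y_src)\n        y_back = translate_t2s(y_tgt_pseudo)\n        if scorer.score(y_src, y_back)['rougeL'].fmeasure >= threshold:\n            kept.append((x_src, y_tgt_pseudo))\n    return kept",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.1"
},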
{
"text": "We use the Transformer-BASE model (Vaswani et al., 2017) as the underlying architecture for our model (E, D 1 , D 2 , extractive summarization model for distillation and baselines). We refer the reader to Vaswani et al. (2017) for hyperparameter details. In the input article, a special token SEP is added at the beginning of each sentence to mark sentence boundaries. For the CNN/DailyMail corpus, the monolingual extractive summarization used in the distillation objective has the same architecture as the encoder E and is trained the CNN/DailyMail corpus constructed by (Liu and Lapata, 2019) . To train the encoder with L dis , we take the final hidden representation of each SEP token and apply a 2-layer feed-forward network with ReLU activation in the middle layer and sigmoid at the final layer to get q i for each sentence i (see \u00a72.2).",
"cite_spans": [
{
"start": 34,
"end": 56,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF19"
},
{
"start": 205,
"end": 226,
"text": "Vaswani et al. (2017)",
"ref_id": "BIBREF19"
},
{
"start": 573,
"end": 595,
"text": "(Liu and Lapata, 2019)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "3.2"
},
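{
"text": "A sketch of this sentence-scoring head: it gathers the encoder state at each SEP position and applies a two-layer feed-forward network with a ReLU in the middle and a sigmoid at the output, as described above. Tensor shapes and the gathering logic are illustrative assumptions.\nimport torch.nn as nn\n\nclass SentenceScorer(nn.Module):\n    # Two-layer feed-forward head over the encoder state at each SEP token:\n    # Linear -> ReLU -> Linear -> sigmoid, giving p_i for every sentence i.\n    def __init__(self, d_model=512, d_hidden=512):\n        super().__init__()\n        self.ff = nn.Sequential(nn.Linear(d_model, d_hidden), nn.ReLU(),\n                                nn.Linear(d_hidden, 1), nn.Sigmoid())\n\n    def forward(self, enc_states, sep_positions):\n        # enc_states: (seq_len, d_model); sep_positions: indices of SEP tokens.\n        sep_states = enc_states[sep_positions]  # (num_sentences, d_model)\n        return self.ff(sep_states).squeeze(-1)  # p_i in (0, 1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "3.2"
},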
{
"text": "For the Gigaword dataset, because the inputs and outputs are typically short, we choose keywords rather than sentences as the prediction unit. Specifically, we first use TextRank (Mihalcea and Tarau, 2004) to extract all the keywords from the source document. Then, for each keyword i that appears in the target summary, the gold label q i in equation 1 is assigned to 1, and q i is assigned to 0 for keywords that do not appear in the target side.",
"cite_spans": [
{
"start": 179,
"end": 205,
"text": "(Mihalcea and Tarau, 2004)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "3.2"
},
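{
"text": "A sketch of this keyword-labeling step; extract_keywords_textrank stands in for whatever TextRank implementation (Mihalcea and Tarau, 2004) is used and is not a real library call.\ndef keyword_labels(source_doc, target_summary, extract_keywords_textrank):\n    # extract_keywords_textrank: placeholder for a TextRank keyword extractor.\n    # q_i = 1 if the i-th extracted keyword also appears in the target summary,\n    # otherwise q_i = 0 (the gold labels used in equation (1) for Gigaword).\n    keywords = extract_keywords_textrank(source_doc)\n    return [1.0 if kw.lower() in target_summary.lower() else 0.0 for kw in keywords]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "3.2"
},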
{
"text": "We share the parameters of the bottom four layers of the decoder in the multi-task setting. We use the TRIGRAM model in (Wieting et al., 2019b,a) to measure the cross-lingual sentence semantic similarities. As pointed out in \u00a72, after the pre-training stage, we only use D 1 for XLS. The final results are obtained using only E and D 1 . We use two metrics for evaluating the performance of models: ROUGE (1, 2 and L) (Lin, 2004) and XSIM (Wieting et al., 2019b) .",
"cite_spans": [
{
"start": 120,
"end": 145,
"text": "(Wieting et al., 2019b,a)",
"ref_id": null
},
{
"start": 418,
"end": 429,
"text": "(Lin, 2004)",
"ref_id": "BIBREF9"
},
{
"start": 439,
"end": 462,
"text": "(Wieting et al., 2019b)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "3.2"
},
{
"text": "Following Paulus et al. (2018) , we select \u03b3 in equation 3 to 0.998 for the Gigaword Corpus and \u03b3 = 0.9984 for the CNN/DailyMail dataset.",
"cite_spans": [
{
"start": 10,
"end": 30,
"text": "Paulus et al. (2018)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "3.2"
},
{
"text": "We compare our proposed method with the following baselines:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "3.3"
},
{
"text": "Pipeline Approaches We report results of summarize-then-translate (SUM-TRAN) and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "3.3"
},
{
"text": "English-Chinese English-German ROUGE-1 ROUGE-2 ROUGE-L XSIM ROUGE-1 ROUGE-2 ROUGE-L XSIM Pipeline-Based Methods TRAN-SUM (Zhu et al., 2019) 28.19 11.40 25.77 -----SUM-TRAN (Zhu et al., 2019) 32 (Neubig et al., 2019) ). XSIM is computed between the target language system outputs and the source language reference summaries.",
"cite_spans": [
{
"start": 121,
"end": 139,
"text": "(Zhu et al., 2019)",
"ref_id": "BIBREF27"
},
{
"start": 172,
"end": 190,
"text": "(Zhu et al., 2019)",
"ref_id": "BIBREF27"
},
{
"start": 194,
"end": 215,
"text": "(Neubig et al., 2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": null
},
{
"text": "translate-then-summarize (TRAN-SUM) pipelines. These results are taken from Zhu et al. (2019) .",
"cite_spans": [
{
"start": 76,
"end": 93,
"text": "Zhu et al. (2019)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": null
},
{
"text": "We pre-train E and D 1 with only L xls without any fine-tuning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MLE-XLS",
"sec_num": null
},
{
"text": "We pre-train E, D 1 and D 2 with L xls + L mt without using L dis . This is the best performing model in (Zhu et al., 2019) . We show their reported results as well as results from our re-implementation.",
"cite_spans": [
{
"start": 105,
"end": 123,
"text": "(Zhu et al., 2019)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "MLE-XLS+MT",
"sec_num": null
},
{
"text": "We pre-train the model using (2) without fine-tuning with RL. We also share the decoder layers and add task specific tags to the input as described in \u00a72.2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MLE-XLS+MT+DIS",
"sec_num": null
},
{
"text": "Using ROUGE score as a reward function has been shown to improve summarization quality for monolingual summarization models (Paulus et al., 2018) . In this baseline, we finetune the pre-trained model in the above baseline using ROUGE-L as a reward instead of the proposed XSIM. The ROUGE-L score is computed between the output of D 1 and the machine-generated summary\u1ef9 tgt .",
"cite_spans": [
{
"start": 124,
"end": 145,
"text": "(Paulus et al., 2018)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "RL-ROUGE",
"sec_num": null
},
{
"text": "RL-ROUGE+XSIM Here, we use the average of ROUGE score and XSIM score as a reward function to fine-tune the pre-trained model (MLE-XLS+MT+DIS).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RL-ROUGE",
"sec_num": null
},
{
"text": "The main results of our experiments are summarized in Table 1 . Pipeline approaches, as expected, show the weakest performance, lagging behind even the weakest end-to-end approach by more than 5 ROUGE-L points. TRAN-SUM performs even worse than SUM-TRAN, likely because the translation model is trained on sentences and not long articles. First translating the article with many sentences introduces way more errors than translating a short summary with fewer sentences would. Using just our pre-training method as described in 2.2 (MLE-XLS+MT+DIS), our proposed model outperforms the strongest baseline (MLE-XLS+MT) in both ROUGE-L (by 0.8) and XSIM (by 0.5).",
"cite_spans": [],
"ref_spans": [
{
"start": 54,
"end": 61,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "Applying reinforcement learning to fine-tune the model with both ROUGE (RL-ROUGE), XSIM (RL-XSIM) or their mean (RL-ROUGE+XSIM) as rewards results in further improvements. Our proposed method, RL-XSIM performs the best overall, indicating the importance of using cross-lingual similarity as a reward function. RL-ROUGE uses a machine-generated reference to compute the rewards since target language summaries are unavailable, which might be a reason for its worse performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "In this section, we conduct experiments on the CNN/DailyMail dataset to establish the importance of every part of the proposed method and gain further insights into our model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "4.1"
},
{
"text": "The results in table 1 already show that adding the knowl- Figure 3 : Reinforcement learning can make the model better at generating long summaries. We use the compare-mt tool (Neubig et al., 2019) to get these statistics. Table 3 : Effect of sharing decoder layers and adding task-specific tags edge distillation objective L dis to the pre-training leads to an improvement in performance. The intuition behind using L dis is to bias the model to (softly) select sentences in the input article that might be important for the summary. Here, we replace this soft selection with a hard selection. That is, using the monolingual extractive summarization model (as described in \u00a73.2), we extract top 5 sentences from the input article and use them as the input to the encoder instead. We compare this method with L dis as shown in Table 2 . With just MLE-XLS as the pre-training objective, EXTRACT shows improvement (albeit with lower overall numbers) in performance but leads to a decrease in performance of MLE-XLS+MT. On the other hand, using the distillation objective helps in both cases.",
"cite_spans": [
{
"start": 176,
"end": 197,
"text": "(Neubig et al., 2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 59,
"end": 67,
"text": "Figure 3",
"ref_id": null
},
{
"start": 223,
"end": 230,
"text": "Table 3",
"ref_id": null
},
{
"start": 827,
"end": 834,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Soft Distillation vs. Hard Extraction",
"sec_num": null
},
{
"text": "Method ROUGE-1 ROUGE-2 ROUGE-L MLE+XLS+MT",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Soft Distillation vs. Hard Extraction",
"sec_num": null
},
{
"text": "In Table 3 , we demonstrate that introducing simple enhancements like sharing the lower-layers of the decoder (share) and adding task-specific tags (tags) during multi-task pre-training also helps in improving the performance while at the same using fewer parameters and hence a smaller memory footprint.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Effect of the Sharing and Tagging Techniques",
"sec_num": null
},
{
"text": "Effect of Summary Lengths Next, we study how different baselines and our model performs with respect to generating summaries (in Chinese) of different lengths, in terms of number of characters. As shown in Figure 3 , after fine-tuning the model with RL, our proposed model becomes better at generating longer summaries than the one with only pre-training (referred to as MLE-XLS+MT+DIS in the figure) with RL-XSIM performing the best in most cases. We posit that this improvement is due to RL based fine-tuning reducing the problem of exposure bias introduced during teacher-forced pre-training, which especially helps longer generations.",
"cite_spans": [],
"ref_spans": [
{
"start": 206,
"end": 214,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Effect of the Sharing and Tagging Techniques",
"sec_num": null
},
{
"text": "Human Evaluation In addition to automatic evaluation, which can sometimes be misleading, we perform human evaluation of summaries generated by our models. We randomly sample 50 pairs of the model outputs from the test set and ask three human evaluators to compare the pre-trained supervised learning model and reinforcement learning models in terms of relevance and fluency. For each pair, the evaluators are asked to pick one out of: first model (MLE-XLS+MT+DIS; lose) , second model(RL models; win) or say that they prefer both or neither (tie). The results are summarized in table 4. We observe that the outputs of model trained with ROUGE-L rewards are more favored than the ones generated by only pre-trained model in terms of relevance but not fluency. This is likely because the RL-ROUGE model is trained using machinegenerated summaries as references which might lack fluency. Figure 4 displays one such example. On the other hand, cross-lingual semantic similarity as a reward results in generations which are more favored both in terms of relevance and fluency.",
"cite_spans": [],
"ref_spans": [
{
"start": 885,
"end": 893,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Effect of the Sharing and Tagging Techniques",
"sec_num": null
},
{
"text": "Most previous work on cross-lingual text summarization utilize either the summarize-then-translate or translate-then-summarize pipeline (Wan et al., 2010; Wan, 2011; Yao et al., 2015; Ouyang et al. ,",
"cite_spans": [
{
"start": 136,
"end": 154,
"text": "(Wan et al., 2010;",
"ref_id": "BIBREF21"
},
{
"start": 155,
"end": 165,
"text": "Wan, 2011;",
"ref_id": "BIBREF20"
},
{
"start": 166,
"end": 183,
"text": "Yao et al., 2015;",
"ref_id": "BIBREF25"
},
{
"start": 184,
"end": 197,
"text": "Ouyang et al.",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "A bill to raise the legal age to buy cigarettes was voted into law Wednesday by the City Council. New York is the largest US city to raise the purchase age above the federal limit of 18-years-old. The law is expected to go into effect early next year.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "18",
"sec_num": null
},
{
"text": "New York has become the largest purchase age in the United States. New York is not the first city to raise the legal drinking age.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sup",
"sec_num": null
},
{
"text": "New York has become the largest purchase age in the United States, and the legal age has increased from 18 to 21. The City Council approved a law on Wednesday to increase the age of tobacco purchases from 18 to 21. New York is not the first city to raise the legal drinking age.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RL-ROUGE",
"sec_num": null
},
{
"text": "New York has become the largest city in the United States for buying cigarettes. The City Council approved a law on Wednesday to increase the age of tobacco purchases from 18 to 21. New York is not the first city to raise the legal drinking age. Figure 4 : Example outputs. The bilingual semantic similarity rewards can make the output more fluent than using ROUGE-L as rewards. \"Sup\" refers to the MLE-XLS+MT+DIS baseline. 2019). These methods suffer from error propagation and we have demonstrated their sub-optimal performance in our experiments. Recently, there has been some work on training models for this task in an end-to-end fashion (Ayana et al., 2018; Duan et al., 2019; Zhu et al., 2019) , but these models are trained with cross-entropy using machinegenerated summaries as references which have already lost some information in the translation step.",
"cite_spans": [
{
"start": 643,
"end": 663,
"text": "(Ayana et al., 2018;",
"ref_id": "BIBREF0"
},
{
"start": 664,
"end": 682,
"text": "Duan et al., 2019;",
"ref_id": "BIBREF4"
},
{
"start": 683,
"end": 700,
"text": "Zhu et al., 2019)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [
{
"start": 246,
"end": 254,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "RL-XSIM 18 21",
"sec_num": null
},
{
"text": "Prior work in monolingual summarization have explored hybrid extractive and abstractive summarization objectives which inspires our distillation objective (Gehrmann et al., 2018; Hsu et al., 2018; Chen and Bansal, 2018) . This line of research mainly focus on either compressing sentences extracted by a pre-trained model or biasing the prediction towards certain words.",
"cite_spans": [
{
"start": 155,
"end": 178,
"text": "(Gehrmann et al., 2018;",
"ref_id": "BIBREF5"
},
{
"start": 179,
"end": 196,
"text": "Hsu et al., 2018;",
"ref_id": "BIBREF7"
},
{
"start": 197,
"end": 219,
"text": "Chen and Bansal, 2018)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "RL-XSIM 18 21",
"sec_num": null
},
{
"text": "Language generation models trained with crossentropy using teacher-forcing suffer from exposure bias and a mismatch between training and evaluation objective. To solve these issues, using reinforcement learning to fine-tune such models have been explored for monolingual summarization where ROUGE rewards is typically used (Paulus et al., 2018; Pasunuru and Bansal, 2018) . Other rewards such as BERT score have also been explored (Li et al., 2019) . Computing such rewards requires access to the gold summaries which are typically unavailable for cross-lingual summarization. This work is the first to explore using cross-lingual similarity as a reward to work around this issue.",
"cite_spans": [
{
"start": 323,
"end": 344,
"text": "(Paulus et al., 2018;",
"ref_id": "BIBREF16"
},
{
"start": 345,
"end": 371,
"text": "Pasunuru and Bansal, 2018)",
"ref_id": "BIBREF15"
},
{
"start": 431,
"end": 448,
"text": "(Li et al., 2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "RL-XSIM 18 21",
"sec_num": null
},
{
"text": "In this work, we propose to use reinforcement learning with a bilingual semantic similarity metric as rewards for cross-lingual document summarization. We demonstrate the effectiveness of the proposed approach in a resource-deficient setting, where target language gold summaries are not available. We also propose simple strategies to better initialize the model towards reinforcement learning by leveraging machine translation and monolingual summarization. In future work, we plan to explore methods for stabilizing reinforcement learning as well to extend our methods to other datasets and tasks, such as using the bilingual similarity metric as a reward to improve the quality of machine translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "https://github.com/zdou0830/ crosslingual_summarization_semantic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We also experimented with a common distillation objective based on minimizing KL divergence, 1 n n i=1 qi log pi, but it did not perform as well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We are grateful to anonymous reviewers for their helpful suggestions and Chunting Zhou, Shuyan",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
},
{
"text": "Zhou for proofreading the paper. We also thank Ruihan Zhai, Zhi-Hao Zhou for the help with human evaluation and Anurag Katakkar for postediting the dataset. This material is based upon work supported by NSF grants IIS1812327 and by Amazon MLRA award. We also thank Amazon for providing GPU credits.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "67",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Zero-shot crosslingual neural headline generation",
"authors": [
{
"first": "Shi-Qi",
"middle": [],
"last": "Ayana",
"suffix": ""
},
{
"first": "Yun",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Cheng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Zhi-Yuan",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Mao-Song",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2018,
"venue": "IEEE/ACM Transactions on Audio, Speech and Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ayana, Shi-qi Shen, Yun Chen, Cheng Yang, Zhi-yuan Liu, and Mao-song Sun. 2018. Zero-shot cross- lingual neural headline generation. IEEE/ACM Transactions on Audio, Speech and Language Pro- cessing.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Findings of the 2014 workshop on statistical machine translation",
"authors": [
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Buck",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Federmann",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Leveling",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Pecina",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
},
{
"first": "Herve",
"middle": [],
"last": "Saint-Amand",
"suffix": ""
}
],
"year": 2014,
"venue": "Proc. WMT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ond\u0159ej Bojar, Christian Buck, Christian Federmann, Barry Haddow, Philipp Koehn, Johannes Leveling, Christof Monz, Pavel Pecina, Matt Post, Herve Saint-Amand, et al. 2014. Findings of the 2014 workshop on statistical machine translation. In Proc. WMT.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Findings of the 2017 conference on machine translation",
"authors": [
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Rajen",
"middle": [],
"last": "Chatterjee",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Federmann",
"suffix": ""
},
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Shujian",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Huck",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Varvara",
"middle": [],
"last": "Logacheva",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. WMT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ond\u0159ej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Shujian Huang, Matthias Huck, Philipp Koehn, Qun Liu, Varvara Logacheva, et al. 2017. Findings of the 2017 con- ference on machine translation. In Proc. WMT.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Fast abstractive summarization with reinforce-selected sentence rewriting",
"authors": [
{
"first": "Yen-Chun",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yen-Chun Chen and Mohit Bansal. 2018. Fast abstrac- tive summarization with reinforce-selected sentence rewriting. In Proc. ACL.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Zero-shot crosslingual abstractive sentence summarization through teaching generation and attention",
"authors": [
{
"first": "Xiangyu",
"middle": [],
"last": "Duan",
"suffix": ""
},
{
"first": "Mingming",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Boxing",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Weihua",
"middle": [],
"last": "Luo",
"suffix": ""
}
],
"year": 2019,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiangyu Duan, Mingming Yin, Min Zhang, Boxing Chen, and Weihua Luo. 2019. Zero-shot cross- lingual abstractive sentence summarization through teaching generation and attention. In Proc. ACL.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Bottom-up abstractive summarization",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Gehrmann",
"suffix": ""
},
{
"first": "Yuntian",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Gehrmann, Yuntian Deng, and Alexander Rush. 2018. Bottom-up abstractive summarization. In Proc. EMNLP.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Teaching machines to read and comprehend",
"authors": [
{
"first": "Karl",
"middle": [],
"last": "Moritz Hermann",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Kocisky",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Lasse",
"middle": [],
"last": "Espeholt",
"suffix": ""
},
{
"first": "Will",
"middle": [],
"last": "Kay",
"suffix": ""
},
{
"first": "Mustafa",
"middle": [],
"last": "Suleyman",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. NeurIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karl Moritz Hermann, Tomas Kocisky, Edward Grefen- stette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Proc. NeurIPS.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A unified model for extractive and abstractive summarization using inconsistency loss",
"authors": [
{
"first": "Wan-Ting",
"middle": [],
"last": "Hsu",
"suffix": ""
},
{
"first": "Chieh-Kai",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Ming-Ying",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kerui",
"middle": [],
"last": "Min",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wan-Ting Hsu, Chieh-Kai Lin, Ming-Ying Lee, Kerui Min, Jing Tang, and Min Sun. 2018. A unified model for extractive and abstractive summarization using inconsistency loss. In Proc. ACL.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Deep reinforcement learning with distributional semantic rewards for abstractive summarization",
"authors": [
{
"first": "Siyao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Deren",
"middle": [],
"last": "Lei",
"suffix": ""
},
{
"first": "Pengda",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "William",
"middle": [
"Yang"
],
"last": "Wang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proc. EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Siyao Li, Deren Lei, Pengda Qin, and William Yang Wang. 2019. Deep reinforcement learning with dis- tributional semantic rewards for abstractive summa- rization. In Proc. EMNLP.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "ROUGE: A package for automatic evaluation of summaries",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2004,
"venue": "Text Summarization Branches Out",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin. 2004. ROUGE: A package for auto- matic evaluation of summaries. In Text Summariza- tion Branches Out.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Generative adversarial network for abstractive text summarization",
"authors": [
{
"first": "Linqing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yao",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Qiang",
"middle": [],
"last": "Qu",
"suffix": ""
},
{
"first": "Jia",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Hongyan",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Linqing Liu, Yao Lu, Min Yang, Qiang Qu, Jia Zhu, and Hongyan Li. 2018. Generative adversarial net- work for abstractive text summarization. In Proc. AAAI.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Text summarization with pretrained encoders",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2019,
"venue": "Proc. EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang Liu and Mirella Lapata. 2019. Text summariza- tion with pretrained encoders. In Proc. EMNLP.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Textrank: Bringing order into text",
"authors": [
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Tarau",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rada Mihalcea and Paul Tarau. 2004. Textrank: Bring- ing order into text. In Proc. EMNLP.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "compare-mt: A tool for holistic comparison of language generation systems",
"authors": [
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Zi-Yi",
"middle": [],
"last": "Dou",
"suffix": ""
},
{
"first": "Junjie",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Michel",
"suffix": ""
},
{
"first": "Danish",
"middle": [],
"last": "Pruthi",
"suffix": ""
},
{
"first": "Xinyi",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proc. NAACL Demo",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Graham Neubig, Zi-Yi Dou, Junjie Hu, Paul Michel, Danish Pruthi, and Xinyi Wang. 2019. compare-mt: A tool for holistic comparison of language genera- tion systems. In Proc. NAACL Demo.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A robust abstractive system for cross-lingual summarization",
"authors": [
{
"first": "Jessica",
"middle": [],
"last": "Ouyang",
"suffix": ""
},
{
"first": "Boya",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [],
"last": "Mckeown",
"suffix": ""
}
],
"year": 2019,
"venue": "Proc. NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jessica Ouyang, Boya Song, and Kathleen McKeown. 2019. A robust abstractive system for cross-lingual summarization. In Proc. NAACL.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Multireward reinforced summarization with saliency and entailment",
"authors": [
{
"first": "Ramakanth",
"middle": [],
"last": "Pasunuru",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ramakanth Pasunuru and Mohit Bansal. 2018. Multi- reward reinforced summarization with saliency and entailment. In Proc. NAACL.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A deep reinforced model for abstractive summarization",
"authors": [
{
"first": "Romain",
"middle": [],
"last": "Paulus",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Romain Paulus, Caiming Xiong, and Richard Socher. 2018. A deep reinforced model for abstractive sum- marization. In Proc. ICLR.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A neural attention model for abstractive sentence summarization",
"authors": [
{
"first": "Sumit",
"middle": [],
"last": "Alexander M Rush",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Chopra",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander M Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sen- tence summarization. In Proc. EMNLP.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proc. ACL.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. NeurIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proc. NeurIPS.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Using bilingual information for cross-language document summarization",
"authors": [
{
"first": "Xiaojun",
"middle": [],
"last": "Wan",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaojun Wan. 2011. Using bilingual information for cross-language document summarization. In Proc. ACL.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Cross-language document summarization based on machine translation quality prediction",
"authors": [
{
"first": "Xiaojun",
"middle": [],
"last": "Wan",
"suffix": ""
},
{
"first": "Huiying",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jianguo",
"middle": [],
"last": "Xiao",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaojun Wan, Huiying Li, and Jianguo Xiao. 2010. Cross-language document summarization based on machine translation quality prediction. In Proc. ACL.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Beyond bleu: Training neural machine translation with semantic similarity",
"authors": [
{
"first": "John",
"middle": [],
"last": "Wieting",
"suffix": ""
},
{
"first": "Taylor",
"middle": [],
"last": "Berg-Kirkpatrick",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2019,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Wieting, Taylor Berg-Kirkpatrick, Kevin Gimpel, and Graham Neubig. 2019a. Beyond bleu: Training neural machine translation with semantic similarity. In Proc. ACL.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Simple and effective paraphrastic similarity from parallel translations",
"authors": [
{
"first": "John",
"middle": [],
"last": "Wieting",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Taylor",
"middle": [],
"last": "Berg-Kirkpatrick",
"suffix": ""
}
],
"year": 2019,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Wieting, Kevin Gimpel, Graham Neubig, and Tay- lor Berg-Kirkpatrick. 2019b. Simple and effective paraphrastic similarity from parallel translations. In Proc. ACL.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Google's neural machine translation system",
"authors": [
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Quoc",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Norouzi",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Qin",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Macherey",
"suffix": ""
}
],
"year": 2016,
"venue": "Bridging the gap between human and machine translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1609.08144"
]
},
"num": null,
"urls": [],
"raw_text": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between hu- man and machine translation. arXiv preprint arXiv:1609.08144.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Phrase-based compressive cross-language summarization",
"authors": [
{
"first": "Jin-Ge",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Xiaojun",
"middle": [],
"last": "Wan",
"suffix": ""
},
{
"first": "Jianguo",
"middle": [],
"last": "Xiao",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jin-ge Yao, Xiaojun Wan, and Jianguo Xiao. 2015. Phrase-based compressive cross-language summa- rization. In Proc. EMNLP.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Bertscore: Evaluating text generation with bert",
"authors": [
{
"first": "Tianyi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Varsha",
"middle": [],
"last": "Kishore",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Q",
"middle": [],
"last": "Kilian",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Weinberger",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Artzi",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1904.09675"
]
},
"num": null,
"urls": [],
"raw_text": "Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. Bertscore: Eval- uating text generation with bert. arXiv preprint arXiv:1904.09675.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "NCLS: Neural cross-lingual summarization",
"authors": [
{
"first": "Junnan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Qian",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yining",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Jiajun",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2019,
"venue": "Shaonan Wang1, and Chengqing Zong",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Junnan Zhu, Qian Wang, Yining Wang, Yu Zhou, Jia- jun Zhang, Shaonan Wang1, and Chengqing Zong. 2019. NCLS: Neural cross-lingual summarization. In Proc. EMNLP.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Along with minimizing the XLS crossentropy loss L xls , we also apply reinforcement learning to optimize the model by directly comparing the outputs with gold references in the source language.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF1": {
"text": "Illustration of the supervised pre-training stage. The model is trained with cross-lingual summarization, machine translation and distillation objectives. The parameters of bottom layers of the decoders are shared across tasks.",
"type_str": "figure",
"uris": null,
"num": null
},
"TABREF1": {
"html": null,
"num": null,
"text": "Performance of different models. The highest scores are in bold and statistical significance compared with the best baseline is indicated with * (p <0.05, computed using compare-mt",
"type_str": "table",
"content": "<table/>"
},
"TABREF3": {
"html": null,
"num": null,
"text": "Effect of using hard (EXTRACT) vs soft (DIS) extraction of summary sentences from the input article",
"type_str": "table",
"content": "<table/>"
},
"TABREF6": {
"html": null,
"num": null,
"text": "Results showing preferences of human evaluators towards the summaries generated by the mentioned RL methods vs ones from the pre-trained model (MLE-",
"type_str": "table",
"content": "<table/>"
}
}
}
}