{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:06:49.047848Z" }, "title": "A Generative Approach to Titling and Clustering Wikipedia Sections", "authors": [ { "first": "Anjalie", "middle": [], "last": "Field", "suffix": "", "affiliation": { "laboratory": "", "institution": "Carnegie Mellon University", "location": {} }, "email": "anjalief@cs.cmu.edu" }, { "first": "Sascha", "middle": [], "last": "Rothe", "suffix": "", "affiliation": { "laboratory": "", "institution": "Carnegie Mellon University", "location": {} }, "email": "rothe@google.com" }, { "first": "Simon", "middle": [], "last": "Baumgartner", "suffix": "", "affiliation": { "laboratory": "", "institution": "Carnegie Mellon University", "location": {} }, "email": "simonba@google.com" }, { "first": "Cong", "middle": [], "last": "Yu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Carnegie Mellon University", "location": {} }, "email": "congyu@google.com" }, { "first": "Abe", "middle": [], "last": "Ittycheriah", "suffix": "", "affiliation": { "laboratory": "", "institution": "Carnegie Mellon University", "location": {} }, "email": "aittycheriah@google.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We evaluate the performance of transformer encoders with various decoders for information organization through a new task: generation of section headings for Wikipedia articles. Our analysis shows that decoders containing attention mechanisms over the encoder output achieve high-scoring results by generating extractive text. In contrast, a decoder without attention better facilitates semantic encoding and can be used to generate section embeddings. We additionally introduce a new loss function, which further encourages the decoder to generate high-quality embeddings. * Work done while the first author was an intern at Google Research.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "We evaluate the performance of transformer encoders with various decoders for information organization through a new task: generation of section headings for Wikipedia articles. Our analysis shows that decoders containing attention mechanisms over the encoder output achieve high-scoring results by generating extractive text. In contrast, a decoder without attention better facilitates semantic encoding and can be used to generate section embeddings. We additionally introduce a new loss function, which further encourages the decoder to generate high-quality embeddings. * Work done while the first author was an intern at Google Research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Automated information labeling and organization has become a desirable way to process the copious amounts of available text. We develop methods for producing text headings and section-level embeddings through a new task: generation of section titles for Wikipedia articles. This task is useful for improving Wikipedia, an active area of research due to the long tail of poor quality articles, including articles lacking section subdivisions or consistent headings (Lebret et al., 2016; Piccardi et al., 2018; . 
Additionally, the types of labels used to denote sections can be useful for organizing other unstructured collections of text.", "cite_spans": [ { "start": 464, "end": 485, "text": "(Lebret et al., 2016;", "ref_id": "BIBREF11" }, { "start": 486, "end": 508, "text": "Piccardi et al., 2018;", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We approach this task in two ways: first we train a text generation model for producing section titles, and second, we leverage our model architecture to extract section embeddings, which offer a useful mechanism for comparing and clustering sections with similar information (Banerjee et al., 2007; Hu et al., 2009; Reimers et al., 2019) . This approach provides a flexible framework for creating paragraph-level embeddings, in which the type of information encoded in the embedding can be controlled by changing the generation task.", "cite_spans": [ { "start": 276, "end": 299, "text": "(Banerjee et al., 2007;", "ref_id": "BIBREF0" }, { "start": 300, "end": 316, "text": "Hu et al., 2009;", "ref_id": "BIBREF6" }, { "start": 317, "end": 338, "text": "Reimers et al., 2019)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Section title generation is similar to existing tasks, such as generating titles for newspaper articles (Rush et al., 2015; Nallapati et al., 2016) . However, Wikipedia section titles contain a unique mix of short abstractive headings like \"History\" and longer extractive headings like song titles, where many of the words in the section title also appear in the section text. The variations in the type of headings makes this dataset useful for analyzing how models perform on different subsets of the data.", "cite_spans": [ { "start": 104, "end": 123, "text": "(Rush et al., 2015;", "ref_id": "BIBREF25" }, { "start": 124, "end": 147, "text": "Nallapati et al., 2016)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A common state-of-the-art model for many existing text generation tasks uses an encoderdecoder framework where the encoder is initialized with BERT and the decoder is also a transformer (Vaswani et al., 2017; Devlin et al., 2019; Rothe et al., 2019) . The entire output of the encoder is passed to the decoder, which allows the decoder to attend over the entire input sequence during each generation step.", "cite_spans": [ { "start": 186, "end": 208, "text": "(Vaswani et al., 2017;", "ref_id": "BIBREF28" }, { "start": 209, "end": 229, "text": "Devlin et al., 2019;", "ref_id": "BIBREF3" }, { "start": 230, "end": 249, "text": "Rothe et al., 2019)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In contrast, we explore using transformer encoders with RNN decoders and show that RNN decoders better generate short abstractive titles while transformer decoders perform better on longer extractive titles. 
Embeddings extracted from the RNN decoders also perform better in clustering evaluations, which suggests that the attention-based mechanisms in the transformer facilitate copying input text into the output, but the RNN architecture better facilitates encoding semantic meaning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We additionally introduce a new loss function for the RNN decoder that encourages the start and end states of the RNN to be similar. This loss function encourages the model to encode meaningful information into a single state, which further improves the quality of the generated section-level embeddings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We first describe our models (Section 2) and our data set (Section 3) and then present results, evaluating our models on a held-out test corpus (Section 5). Our main contributions include: (1) the introduction of a new short-text generation task that is useful for information labeling and organization;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(2) an analysis of text generation models for this task;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(3) the introduction of a novel loss function that results in high-quality section embeddings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our primary task is to generate section titles, and our secondary task is to generate section-level embeddings. All models use an encoder-decoder architecture, where the encoder is initialized with BERT (Devlin et al., 2019) . We use 4 decoder variants, including one trained with a novel loss function. TRANS This model contains a (randomly initialized) transformer decoder, with hyperparameters identical to the BERT-base model. The hidden states generated by the encoder for the entire input sequence are passed to the decoder, thus allowing the decoder to attend over the entire input sequence during each decoding step. This model serves as our primary baseline, as it is identical to the BERT2RND model in Rothe et al. (2019) . We use the same hyperparameters as Rothe et al. (2019) , which were selected after extensive tuning.", "cite_spans": [ { "start": 203, "end": 224, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF3" }, { "start": 712, "end": 731, "text": "Rothe et al. (2019)", "ref_id": "BIBREF23" }, { "start": 769, "end": 788, "text": "Rothe et al. (2019)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "2" }, { "text": "RNN Instead of a transformer decoder, we use an RNN, specifically a gated recurrent neural network (GRU) (Cho et al., 2014) , as the decoder. Unlike the transformer decoder, which computes attention over the full input sequence, we do not use any attention mechanisms over the input to the decoder. Instead, we only pass the last hidden layer for the first token (\"CLS\" token), forcing the model to encode all meaningful information about the input sequence into this single state. The RNN decoder, which consists of a single decoder layer, is substantially smaller than the transformer decoder used in the TRANS model. 
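The wiring of this decoder can be summarized with a short sketch. The following PyTorch-style code is illustrative only (module and variable names are our own, not from a released implementation); it shows a single-layer GRU whose initial state is the encoder's last-layer CLS vector, with no attention over the source sequence.

```python
import torch.nn as nn

class CLSConditionedGRUDecoder(nn.Module):
    """Single-layer GRU decoder conditioned only on the encoder CLS state (sketch)."""
    def __init__(self, vocab_size, hidden_size=768):  # 768 matches BERT-base
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.gru = nn.GRU(hidden_size, hidden_size, num_layers=1, batch_first=True)
        self.out = nn.Linear(hidden_size, vocab_size)

    def forward(self, cls_state, target_ids):
        # cls_state: [batch, hidden], the last encoder layer at the CLS position.
        # All information about the input must flow through this single vector.
        h0 = cls_state.unsqueeze(0)            # [1, batch, hidden] initial GRU state
        states, hT = self.gru(self.embed(target_ids), h0)
        logits = self.out(states)              # [batch, tgt_len, vocab]
        return logits, h0.squeeze(0), hT.squeeze(0)
```

Returning both the initial and final hidden states also makes it straightforward to add the state constraint introduced next.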
RNN+SC Our third model uses the same architecture as the RNN model, but we add an additional component to the loss function that encourages the start state and the end state of the decoder to be similar, which we call a state constraint (SC). The primary intuition behind this loss function is that it encourages the decoder to stay \"on topic\" while generating text, as it discourages the RNN from wandering too far away from where it started. It further encourages the start state to encode all information needed to generate the entire output se-quence, rather than allowing the start state to focus on information in the beginning of the sequence and the end state to encode information for the end of the sequence.", "cite_spans": [ { "start": 105, "end": 123, "text": "(Cho et al., 2014)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "2" }, { "text": "The general form for the state of an RNN decoder (Cho et al., 2014) is", "cite_spans": [ { "start": 49, "end": 67, "text": "(Cho et al., 2014)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h t = f (h t\u22121 , y t\u22121 )", "eq_num": "(1)" } ], "section": "Models", "sec_num": "2" }, { "text": "Here, f is a GRU, t \u2208 {1, . . . , T } is the target token position, and h 0 is initialized to the CLS token of the BERT source encoder. The formula for the state constraint function is given in Equation 2:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "d = h 0 ||h 0 || 2 \u2212 h T ||h T || 2 L SC = ||d|| 2", "eq_num": "(2)" } ], "section": "Models", "sec_num": "2" }, { "text": "The normalization terms force the loss term to focus on embedding direction rather than magnitude; they are necessary to account for the arbitrary magnitude of model states. During training, we multiply the state constraint loss, L SC , by a fixed scalar (\u03b1) and add it to the standard cross-entropy (CE) loss function. The final loss function is then given by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "2" }, { "text": "L = L CE + \u03b1L SC RNN+ATTN", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "2" }, { "text": "Our final model also uses a transformer encoder and an RNN decoder. However, unlike the previous model, we pass the entire last layer of the encoder to the decoder and add an attention mechanism over this input sequence (Luong et al., 2015) . This model and the TRANS model are attention-based decoders, while the RNN and the RNN+SC models do not use attention over the decoder input.", "cite_spans": [ { "start": 220, "end": 240, "text": "(Luong et al., 2015)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "2" }, { "text": "Our primary data set consists of articles from English Language Wikipedia collected on June 25, 2019. We filter out articles that contain the word \"redirect\" and omit any section whose title has fewer than 2 characters. 
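For reference, the combined objective L = L_CE + αL_SC described in Section 2 can be written as the following minimal sketch. It continues the hypothetical decoder sketch above (PyTorch-style, names are ours); averaging the per-example state-constraint term over the batch is our assumption, as the paper does not specify it.

```python
import torch.nn.functional as F

def state_constraint_loss(h_start, h_end):
    # Normalize both states so the penalty depends on direction, not magnitude,
    # then take the L2 norm of their difference (Equation 2).
    d = F.normalize(h_start, dim=-1) - F.normalize(h_end, dim=-1)
    return d.norm(p=2, dim=-1).mean()

def total_loss(logits, target_ids, h_start, h_end, alpha):
    # L = L_CE + alpha * L_SC, with alpha the fixed scalar weight on the
    # state-constraint term.
    ce = F.cross_entropy(logits.transpose(1, 2), target_ids)
    return ce + alpha * state_constraint_loss(h_start, h_end)
```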
We extracted sections and section titles from each article and randomly divided the data into train, test, and development sets, using an 80/10/10 split (11.43M/1.43M/1.44M articles).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3" }, { "text": "Wikipedia articles are often hierarchical, containing multiple subsections. However, we make no distinction between titles that are complete sections and titles that are subsections. This lack of distinction makes the generation task harder, as our models are not able to take advantage of hierarchical information and also allows our models and results to better generalize to other data sets that do not have this hierarchy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3" }, { "text": "More detailed statistics on the data set are shown in Table 1 . For reference, we also show statistics for the commonly-used Gigaword Corpus (Rush et al., 2015), which we also use to evaluate our models in \u00a75. The Gigaword corpus entails an abstractive short summary generation task: given the first sentence of a newspaper article, predict the article title. We use this task for comparison because it uses a well-studied data set that is more similar to the Wikipedia section heading generation task than other text generation tasks, such as summarization tasks, which typically involve much longer outputs (Narayan et al., 2018) . However, as shown in Table 1 : Overview of the Wikipedia section title data, as compared with the Gigaword corpus. \"Distinct titles\" refers to the total number of titles with duplicates removed. \"Unique titles\" refers to the number of titles that occur exactly 1 time. In general, the Wikipedia titles are shorter and more repetitive than Gigaword titles.", "cite_spans": [ { "start": 609, "end": 631, "text": "(Narayan et al., 2018)", "ref_id": "BIBREF18" } ], "ref_spans": [ { "start": 54, "end": 61, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 655, "end": 662, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Data", "sec_num": "3" }, { "text": "In the Wikipedia corpus, across 14.3M data points, there are only 6.5M distinct headings (45.25% of all titles). Approximately 6M headings (41.82%) occur only 1 time in the data, meaning the other 0.5M headings are reused multiple times across 8.3M articles to constitute the remainder of the corpus. The most common heading, \"History\", occurs 480K times in the data set, making up 3.35% of the total corpus. Other common headings include \"Career\" (181K), \"Biography\" (151K), \"Early Life\" (111K), \"Background\" (102K) and \"Plot\" (96K).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3" }, { "text": "In contrast, the titles in Gigaword are generally longer and more distinctive than the Wikipedia section titles, with 80.45% of all titles being unique. However, in the absence of generic abstract headings like \"History\", the Gigaword corpus tends to be more extractive, meaning there is high tokenoverlap between articles and their titles. The Wikipedia corpus is also much larger than Gigaword, which facilitates analyses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3" }, { "text": "For all encoders, we use the BERT-base uncased model. Thus, we lowercase all text and use wordpiece tokenization from the public BERT wordpiece vocabulary (Devlin et al., 2019) . 
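As an illustration of this preprocessing (not the authors' released pipeline), the same lowercasing and wordpiece segmentation can be reproduced with, for example, the Hugging Face tokenizer for the public BERT-base uncased vocabulary:

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')  # lowercases by default

def to_wordpieces(text):
    return tokenizer.tokenize(text)

to_wordpieces('Early Life')  # ['early', 'life']
```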
We use the same preprocessing pipeline, including word-piece tokenization, when computing target text length and extractive scores.", "cite_spans": [ { "start": 155, "end": 176, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4" }, { "text": "For all models, we limit the encoder input size to 128 tokens and the decoder output size to 32 tokens and use a batch size of 32. We generally use a learning rate of 0.05 with square root decay, 40K warm-up steps, and the Adam optimizer; however, for the RNN models with the Gigaword data, we use 100K warm-up steps, clip gradients to 20, and optimize with Adagrad, which we found to produce smoother training curves. For the state constraint models, we start by setting the scalar \u03b1 = 0, and linearly increase \u03b1 to 1, between 100k and 200k training steps. We train the RNN models for 2M steps using v100 GPUs, and we train the transformer models for 500K steps using TPUs. In practice, we find that the RNN performance stops improving within 1M steps and the transformer performance stops within 50K steps.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4" }, { "text": "Our main task is to generate a Wikipedia section title given the section text. Table 2 reports results using standard summarization metrics: Rouge-1, Rouge-L, and exact match. Rouge-1 measures the unigram overlap between the generated text and the reference text; Rouge-L measures the longest subsequence that occurs in both the generated text and the reference; exact match measures if the generated text exactly matches the reference. The RNN+ATTN model performs the best overall. The TRANS and the RNN+SC models perform approximately the same, and both outperform the regular RNN model. Because the Wikipedia dataset contains diverse types of headings, including short abstractive headings and long extractive headings, we subdivide our test data in order to better understand model performance. In Table 3 , we examine how well these models generate outputs of different lengths by dividing the test set according to the number of tokens in the target headings.", "cite_spans": [], "ref_spans": [ { "start": 79, "end": 86, "text": "Table 2", "ref_id": "TABREF2" }, { "start": 802, "end": 809, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Section Heading Generation", "sec_num": "5.1" }, { "text": "All of the RNN decoders outperform the transformer decoder for short headings containing 1-5 tokens, and the RNN+SC model performs the best overall. Over these short headings, the attention mechanism provides little advantage. However, the two attention-based decoders, TRANS and RNN+ATTN outperform the RNNs without attention for mid-range-length headings containing 5-10 tokens, which is consistent with prior work suggesting that attention improves the modeling of long-term dependencies (Vaswani et al., 2017) . Nevertheless, on headings with > 10 tokens, the Rouge-L scores for all decoders decline. Prior work has also examined the trend of extractiveness in text generation models, specifically observing that models achieve high performance when they can copy input tokens directly into the output, rather than having to encode semantic information and produce new tokens (Nallapati et al., 2016; Cheng and Lapata, 2016; See et al., 2017; Nallapati et al., 2017; Narayan et al., 2018; Grusky et al., 2018; Pasunuru and Bansal, 2018 ). 
Because we ultimately extract embeddings from our models, understanding to what extent they copy tokens or encode more abstract information offers insight into how useful we can expect embeddings to be. To examine this, we introduce a metric called extractive score, which measures what percentage of the output text can be directly copied from the input text:", "cite_spans": [ { "start": 491, "end": 513, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF28" }, { "start": 880, "end": 904, "text": "(Nallapati et al., 2016;", "ref_id": "BIBREF17" }, { "start": 905, "end": 928, "text": "Cheng and Lapata, 2016;", "ref_id": "BIBREF1" }, { "start": 929, "end": 946, "text": "See et al., 2017;", "ref_id": "BIBREF26" }, { "start": 947, "end": 970, "text": "Nallapati et al., 2017;", "ref_id": "BIBREF16" }, { "start": 971, "end": 992, "text": "Narayan et al., 2018;", "ref_id": "BIBREF18" }, { "start": 993, "end": 1013, "text": "Grusky et al., 2018;", "ref_id": "BIBREF5" }, { "start": 1014, "end": 1039, "text": "Pasunuru and Bansal, 2018", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Section Heading Generation", "sec_num": "5.1" }, { "text": "|Ttarget T input | |Ttarget|", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Section Heading Generation", "sec_num": "5.1" }, { "text": ", where T target and T input represent the tokens in the target text and the input text respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Section Heading Generation", "sec_num": "5.1" }, { "text": "Thus, for a section and title pair, an extractive score of 0 indicates that there is no token overlap between the title and the section text, while a score of 1 indicates that every token in the title is also in the section text. Because of the short length of our section titles, we focus on unigrams, rather than examining higher-order n-grams. When computing extractive scores, we use the same text preprocessing pipeline as used in our models, including wordpiece tokenization and lowercasing. In Figure 1 , we limit the test data to headings with 5-10 tokens and divide it into segments according to extractive score. The RNN and RNN+SC models outperform the attention-based models on data with low extractive scores (\u2264 0.5). The higher performance of the TRANS and the RNN+ATTN models as compared to the RNN and RNN+SC models over this data segment (Table 3) is almost entirely on headings where the extractive score is \u2265 0.9. The attention-based models are not better at producing long titles in general, but rather their ability to copy from the input text allows them to generate long titles when they are extractive.", "cite_spans": [], "ref_spans": [ { "start": 501, "end": 509, "text": "Figure 1", "ref_id": "FIGREF1" }, { "start": 855, "end": 864, "text": "(Table 3)", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Section Heading Generation", "sec_num": "5.1" }, { "text": "We can further examine this trend by computing correlations between Rouge-L and extractive score. Table 4 : Partial correlations between Rouge-L and extractive score, controlled for length. All values are statistically significant.", "cite_spans": [], "ref_spans": [ { "start": 98, "end": 105, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Section Heading Generation", "sec_num": "5.1" }, { "text": "However, as Table 3 shows, all decoders perform differently over texts of different lengths. 
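For reference, the extractive score defined above amounts to the following computation over wordpiece unigrams; the set-based reading of the token overlap and the helper name are our assumptions.

```python
def extractive_score(target_tokens, input_tokens):
    # |T_target ∩ T_input| / |T_target|, treating T as the set of unigram
    # wordpiece tokens: 0 means no overlap between title and section text,
    # 1 means every title token also appears in the section text.
    target = set(target_tokens)
    if not target:
        return 0.0
    return len(target & set(input_tokens)) / len(target)
```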
Thus, in order to isolate the effect of extractiveness, we compute partial correlations (Rummel, 1976) . The idea behind a partial correlation is to identify the relationship between two variables X and Y that is not explained by a confound Z. We first compute the residuals e X,i and e Y,i , and then compute the correlation between these residuals:", "cite_spans": [ { "start": 181, "end": 195, "text": "(Rummel, 1976)", "ref_id": "BIBREF24" } ], "ref_spans": [ { "start": 12, "end": 19, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Section Heading Generation", "sec_num": "5.1" }, { "text": "e X,i = x i \u2212 w * X , z i e Y,i = y i \u2212 w * Y , z i Partial Correlation = \u03c1 e X,i ,e Y,i", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Section Heading Generation", "sec_num": "5.1" }, { "text": "where w * X and w * Y are the coefficients learned by a linear regression between X and Z and between Y and Z. In our case, X = Rouge-L, Y = extractive score, and Z = target length. Table 4 reports results. For all models, the resulting correlations are positive, indicating that they generate extractive headings better than nonextractive headings. However, the correlations for the TRANS and RNN+ATTN models are highest. Overall, these results suggest that decoders with attention mechanisms achieve high performance on this task because they better copy tokens from the input into the output, rather than because they encode more semantics. Encoding semantic information is essential for generating section embeddings, which we extract and evaluate in Section 5.3. ", "cite_spans": [], "ref_spans": [ { "start": 182, "end": 189, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Section Heading Generation", "sec_num": "5.1" }, { "text": "In order to compare our models against published benchmarks and to generalize our observations about extractiveness, we conduct the same experiments over the Gigaword corpus as the Wikipedia corpus, using the established train, test, and dev splits (Rush et al., 2015) . Table 5 reports the results of our models as well as a state-of-the-art model for reference (Song et al., 2019) . Like TRANS, the MASS model from Song et al. (2019) uses a transformer encoderdecoder architecture but with generalizations that allow for additional pre-training. From our models, the transformer decoder performs the best overall. However, the attention-based decoders TRANS and RNN+ATTN also have the highest partial correlations, suggesting much of their performance stems from extractive titles. For all models the partial correlations between Rouge-L and extractive score are higher for the Gigaword corpus than for the Wikipedia corpus. This correlation is visually evident in Figure 2 , which we constructed the same way as Figure 1 . (Figure 1) . While the TRANS model performs well across all extractiveness levels, the RNNs with and without attention perform similarly for lower levels of extractiveness. However, the RNN+ATTN begins outperforming the RNNs without attention when the extractive score is \u2265 0.5, and especially when the extractive score is \u2265 0.9. ", "cite_spans": [ { "start": 249, "end": 268, "text": "(Rush et al., 2015)", "ref_id": "BIBREF25" }, { "start": 363, "end": 382, "text": "(Song et al., 2019)", "ref_id": "BIBREF27" }, { "start": 417, "end": 435, "text": "Song et al. 
(2019)", "ref_id": "BIBREF27" } ], "ref_spans": [ { "start": 271, "end": 278, "text": "Table 5", "ref_id": "TABREF6" }, { "start": 967, "end": 975, "text": "Figure 2", "ref_id": "FIGREF3" }, { "start": 1015, "end": 1023, "text": "Figure 1", "ref_id": "FIGREF1" }, { "start": 1026, "end": 1036, "text": "(Figure 1)", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Gigaword Results", "sec_num": "5.2" }, { "text": "While labeling sections can improve Wikipedia articles and identify the type of information contained in general paragraphs, embedding representations for paragraphs and documents can offer a more useful way to structure corpora, by facilitating information clustering and retrieval. Rather than creating generic all-purpose embeddings (Le and Mikolov, 2014) , our generative models facilitate creating embeddings that target specific information, in our case, the title of the section. We extract internal states from our models as section embeddings, and we evaluate them through a clustering task. Because many Wikipedia articles use the same generic headings, like \"History\" and \"Plot\", we can use these headings as gold cluster assignments by assuming that all sections with the same title constitute a cluster.", "cite_spans": [ { "start": 336, "end": 358, "text": "(Le and Mikolov, 2014)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Section-embedding Generation and Clustering", "sec_num": "5.3" }, { "text": "For all models, we use the final hidden layer for the first token in the input sequence (CLS token) as the embedding. In the case of the RNN decoder, this embedding is also the initial state of the RNN, and thus is the single state that the model is forced to encode the entire input sequence into. 1 We cluster these embeddings using k-means clustering, where we set the number of clusters to the true number of clusters in the gold cluster assignments. We discard any section titles that occur fewer than 100 times, ensuring that the minimum size of any cluster is 100, resulting in 467,286 data points and 755 clusters. The large number of data points makes this task particularly difficult. Table 6 reports results using standard metrics for evaluating a proposed cluster assignment against gold data (Hubert and Arabie, 1985; Rosenberg and Hirschberg, 2007) . Homogeneity assesses to what extent each cluster contains only members of the same class (e.g. does each cluster contain only sections with the same title?); completeness assesses to what extent members of the same class are in the same cluster (e.g. are sections with the same title in the same cluster?); V-measure is the harmonic mean between homogeneity and completeness; and adjusted Rand index (ARI) counts how many pairs of data points are assigned to the same or different clusters in the predicted and gold clusterings. 
On all metrics, the RNN+SC model performs the best.", "cite_spans": [ { "start": 299, "end": 300, "text": "1", "ref_id": null }, { "start": 805, "end": 830, "text": "(Hubert and Arabie, 1985;", "ref_id": "BIBREF7" }, { "start": 831, "end": 862, "text": "Rosenberg and Hirschberg, 2007)", "ref_id": "BIBREF22" } ], "ref_spans": [ { "start": 695, "end": 702, "text": "Table 6", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Section-embedding Generation and Clustering", "sec_num": "5.3" }, { "text": "To show how our embeddings, which are tailored to this task, differ from off-the-shelf embeddings, we report results using embeddings constructed from two popular methods for generating document embeddings: distributed representations using Doc2Vec (Le and Mikolov, 2014; Lau and Baldwin, 2016; Vu and Iyyer, 2019) and sparse embeddings using TF-IDF weighting (Banerjee et al., 2007) . We train a Doc2Vec model over the training set using a window size of 5 and embedding size of 768, to match the embedding size of our models, and then infer embeddings over the test set. For the TF-IDF vectors, we give this method an additional advantage by directly training the model over the test set with an embedding size of 1000. As expected, all of our models outperform these off-the-shelf models.", "cite_spans": [ { "start": 249, "end": 271, "text": "(Le and Mikolov, 2014;", "ref_id": "BIBREF10" }, { "start": 272, "end": 294, "text": "Lau and Baldwin, 2016;", "ref_id": "BIBREF9" }, { "start": 295, "end": 314, "text": "Vu and Iyyer, 2019)", "ref_id": "BIBREF29" }, { "start": 360, "end": 383, "text": "(Banerjee et al., 2007)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Section-embedding Generation and Clustering", "sec_num": "5.3" }, { "text": "Unlike off-the-shelf models, our customizable models encourage the embeddings to encode information specific to our prediction task. In this case, we train them to encode section title information. However, by training our models on a different prediction task, such as predicting the name of a newspaper outlet or a comment on a newspaper article, we can encourage the model to generate document embeddings that encode different information. Thus, our model architecture offers a way to generate high-quality document embeddings that encode information specific to the task at hand.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Section-embedding Generation and Clustering", "sec_num": "5.3" }, { "text": "While we introduce the task of Wikipedia section heading generation, the task of article headline generation using the Gigaword corpus has been well-studied, primarily using an encoder-decoder architecture with additional modules like attention or copy mechanisms (Rush et al., 2015; Nallapati et al., 2016) . further explore how to leverage the pretrained BERT model for abstractive summarization, primarily using the CNN/Daily Mail data set. Rothe et al. (2019) perform a comprehensive assessment of pretrained language models for text generation tasks, including the Gigaword task. Our TRANS model is identical to their BERT2RND model and achieves comparable results over the Gigaword corpus.", "cite_spans": [ { "start": 264, "end": 283, "text": "(Rush et al., 2015;", "ref_id": "BIBREF25" }, { "start": 284, "end": 307, "text": "Nallapati et al., 2016)", "ref_id": "BIBREF17" }, { "start": 444, "end": 463, "text": "Rothe et al. 
(2019)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "The high level of extraction in existing text generation tasks has motivated the use of mechanisms that explicitly copy input text into the output (See et al., 2017) or the introduction of new data sets (Narayan et al., 2018; Grusky et al., 2018) . Furthermore, models trained for extractive summarization often outperform abstractive models on abstractive data sets (Cheng and Lapata, 2016; Nallapati et al., 2016 Nallapati et al., , 2017 . Our work extends these results by showing that even abstractive models are implicitly learning extraction, as they perform better on extractive text. Our metric for measuring extractiveness is similar to the 'novel n-gram percentage' proposed by See et al. (2017) ; however, we use the same input pipeline for computing this metric as for training our models, and we correlate extractive score with performance, rather than just using it as an extrinsic measure of abstraction (Pasunuru and Bansal, 2018) .", "cite_spans": [ { "start": 147, "end": 165, "text": "(See et al., 2017)", "ref_id": "BIBREF26" }, { "start": 203, "end": 225, "text": "(Narayan et al., 2018;", "ref_id": "BIBREF18" }, { "start": 226, "end": 246, "text": "Grusky et al., 2018)", "ref_id": "BIBREF5" }, { "start": 367, "end": 391, "text": "(Cheng and Lapata, 2016;", "ref_id": "BIBREF1" }, { "start": 392, "end": 414, "text": "Nallapati et al., 2016", "ref_id": "BIBREF17" }, { "start": 415, "end": 439, "text": "Nallapati et al., , 2017", "ref_id": "BIBREF16" }, { "start": 688, "end": 705, "text": "See et al. (2017)", "ref_id": "BIBREF26" }, { "start": 919, "end": 946, "text": "(Pasunuru and Bansal, 2018)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "In our Wikipedia section heading generation task, the prevalence of generic headings makes the task more abstractive than datasets like Gigaword (Rush et al., 2015), or even other short-text generation tasks, like email subject prediction (Zhang and Tetreault, 2019) , which makes it a useful dataset for analyzing model performance. It is also extrinsically useful -most automated methods for improving Wikipedia focus on creating new content, such as through multi-document summarization or generating text from structured data (Lebret et al., 2016) . However, less than 1% of all English Wikipedia articles are considered to be of quality class good, suggesting there is a need for improving existing articles. Piccardi et al. (2018) show that many low quality articles consist of 0-1 sections and present a method for recommending new sections for an author to add to the article. Our approach offers a way to label existing paragraphs as distinct sections.", "cite_spans": [ { "start": 239, "end": 266, "text": "(Zhang and Tetreault, 2019)", "ref_id": "BIBREF31" }, { "start": 530, "end": 551, "text": "(Lebret et al., 2016)", "ref_id": "BIBREF11" }, { "start": 714, "end": 736, "text": "Piccardi et al. (2018)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "Our approach also results in document embeddings, which we show can be used to cluster sections. 
Document embeddings are useful for a variety of tasks including news clustering (Banerjee et al., 2007; Hu et al., 2009) , argument clustering (Reimers et al., 2019) , and as features for downstream tasks like text classification (Lau and Baldwin, 2016; Liu and Lapata, 2018) . While TF-IDF vectors have historically been a popular construction method (Banerjee et al., 2007) , more recent methods have focused on distributive representations, particularly Doc2Vec, a generalization of the Word2Vec algorithm (Le and Mikolov, 2014; Lau and Baldwin, 2016; Vu and Iyyer, 2019) .", "cite_spans": [ { "start": 177, "end": 200, "text": "(Banerjee et al., 2007;", "ref_id": "BIBREF0" }, { "start": 201, "end": 217, "text": "Hu et al., 2009)", "ref_id": "BIBREF6" }, { "start": 240, "end": 262, "text": "(Reimers et al., 2019)", "ref_id": null }, { "start": 327, "end": 350, "text": "(Lau and Baldwin, 2016;", "ref_id": "BIBREF9" }, { "start": 351, "end": 372, "text": "Liu and Lapata, 2018)", "ref_id": "BIBREF14" }, { "start": 449, "end": 472, "text": "(Banerjee et al., 2007)", "ref_id": "BIBREF0" }, { "start": 606, "end": 628, "text": "(Le and Mikolov, 2014;", "ref_id": "BIBREF10" }, { "start": 629, "end": 651, "text": "Lau and Baldwin, 2016;", "ref_id": "BIBREF9" }, { "start": 652, "end": 671, "text": "Vu and Iyyer, 2019)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "Finally, the growing popularity of pretrained language models like BERT has led to numerous investigations on what these models learn Goldberg, 2019; Jawahar et al., 2019) . Most investigations involve using targeted probing tasks. While our work shares similar goals, in that we investigate what type of information these models learn, we focus on data subsets and performance analysis.", "cite_spans": [ { "start": 134, "end": 149, "text": "Goldberg, 2019;", "ref_id": "BIBREF4" }, { "start": 150, "end": 171, "text": "Jawahar et al., 2019)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "Our work offers several avenues for future exploration. We focus only on English Wikipedia. However, there are numerous language editions of Wikipedia, many of which have far fewer articles than the English edition and could benefit from tools for text generation. 2 Additionally, while we discard the hierarchical nature of Wikipedia sections, this information could offer a way to improve model performance (potentially at the cost of generalizability to other data sets). Furthermore, while we evaluate the performance of our generated section embeddings for clustering, more work is needed to assess their usefulness on other tasks, such as retrieving relevant sections from a query, measuring section similarities, or as features for text classification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future Work", "sec_num": "7" }, { "text": "Overall, our work introduces the task of generating section titles for text. We also introduce the RNN+SC model and demonstrate how RNN decoders can be utilized for short text generation and improved section embeddings. 
Specifically, our method for generating text embeddings, which involves leveraging internal states of models trained for generation, allows the embeddings to contain targeted information that maximizes their usefulness for specific tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "8" }, { "text": "For TRANS and RNN+ATTN, preliminary experiments showed that using this hidden state as the embedding achieved strictly better performance than other pooling possibilities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://en.wikipedia.org/wiki/List_ of_Wikipedias", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We would like to thank anonymous reviewers, Vidhisha Balakrishna, Keith Hall, Shan Jiang, Kevin Lin, Riley Matthews, and Yulia Tsvetkov for their helpful feedback and advice.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": "9" } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Clustering short texts using Wikipedia", "authors": [ { "first": "Somnath", "middle": [], "last": "Banerjee", "suffix": "" }, { "first": "Krishnan", "middle": [], "last": "Ramanathan", "suffix": "" }, { "first": "Ajay", "middle": [], "last": "Gupta", "suffix": "" } ], "year": 2007, "venue": "Proc. of SIGIR", "volume": "", "issue": "", "pages": "787--788", "other_ids": { "DOI": [ "10.1145/1277741.1277909" ] }, "num": null, "urls": [], "raw_text": "Somnath Banerjee, Krishnan Ramanathan, and Ajay Gupta. 2007. Clustering short texts using Wikipedia. In Proc. of SIGIR, pages 787-788.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Neural summarization by extracting sentences and words", "authors": [ { "first": "Jianpeng", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2016, "venue": "Proc. of ACL", "volume": "", "issue": "", "pages": "484--494", "other_ids": { "DOI": [ "10.18653/v1/P16-1046" ] }, "num": null, "urls": [], "raw_text": "Jianpeng Cheng and Mirella Lapata. 2016. Neural sum- marization by extracting sentences and words. In Proc. of ACL, pages 484-494.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Learning phrase representations using RNN encoder-decoder for statistical machine translation", "authors": [ { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Bart", "middle": [], "last": "Van Merri\u00ebnboer", "suffix": "" }, { "first": "Caglar", "middle": [], "last": "Gulcehre", "suffix": "" }, { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Fethi", "middle": [], "last": "Bougares", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "Proc. of EMNLP", "volume": "", "issue": "", "pages": "1724--1734", "other_ids": { "DOI": [ "10.3115/v1/D14-1179" ] }, "num": null, "urls": [], "raw_text": "Kyunghyun Cho, Bart Van Merri\u00ebnboer, Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proc. 
of EMNLP, pages 1724-1734.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proc. of NAACL", "volume": "", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understand- ing. In Proc. of NAACL, pages 4171-4186.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Assessing BERT's syntactic abilities", "authors": [ { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1901.05287" ] }, "num": null, "urls": [], "raw_text": "Yoav Goldberg. 2019. Assessing BERT's syntactic abilities. arXiv preprint arXiv:1901.05287.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Newsroom: A dataset of 1.3 million summaries with diverse extractive strategies", "authors": [ { "first": "Max", "middle": [], "last": "Grusky", "suffix": "" }, { "first": "Mor", "middle": [], "last": "Naaman", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Artzi", "suffix": "" } ], "year": 2018, "venue": "ACL", "volume": "", "issue": "", "pages": "708--719", "other_ids": { "DOI": [ "10.18653/v1/N18-1065" ] }, "num": null, "urls": [], "raw_text": "Max Grusky, Mor Naaman, and Yoav Artzi. 2018. Newsroom: A dataset of 1.3 million summaries with diverse extractive strategies. In ACL, pages 708- 719.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Exploiting Wikipedia as external knowledge for document clustering", "authors": [ { "first": "Xiaohua", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Xiaodan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Caimei", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Eun", "middle": [ "K" ], "last": "Park", "suffix": "" }, { "first": "Xiaohua", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2009, "venue": "Proc. of SIGKDD", "volume": "", "issue": "", "pages": "389--396", "other_ids": { "DOI": [ "10.1145/1557019.1557066" ] }, "num": null, "urls": [], "raw_text": "Xiaohua Hu, Xiaodan Zhang, Caimei Lu, Eun K Park, and Xiaohua Zhou. 2009. Exploiting Wikipedia as external knowledge for document clustering. In Proc. of SIGKDD, pages 389-396.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Comparing partitions", "authors": [ { "first": "Lawrence", "middle": [], "last": "Hubert", "suffix": "" }, { "first": "Phipps", "middle": [], "last": "Arabie", "suffix": "" } ], "year": 1985, "venue": "Journal of classification", "volume": "2", "issue": "1", "pages": "193--218", "other_ids": { "DOI": [ "10.1007/BF01908075" ] }, "num": null, "urls": [], "raw_text": "Lawrence Hubert and Phipps Arabie. 1985. Compar- ing partitions. 
Journal of classification, 2(1):193- 218.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "What does BERT learn about the structure of language?", "authors": [ { "first": "Ganesh", "middle": [], "last": "Jawahar", "suffix": "" }, { "first": "Beno\u00eet", "middle": [], "last": "Sagot", "suffix": "" }, { "first": "Djam\u00e9", "middle": [], "last": "Seddah", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Unicomb", "suffix": "" }, { "first": "Gerardo", "middle": [], "last": "I\u00f1iguez", "suffix": "" }, { "first": "M\u00e1rton", "middle": [], "last": "Karsai", "suffix": "" }, { "first": "Yannick", "middle": [], "last": "L\u00e9o", "suffix": "" }, { "first": "M\u00e1rton", "middle": [], "last": "Karsai", "suffix": "" }, { "first": "Carlos", "middle": [], "last": "Sarraute", "suffix": "" }, { "first": "\u00c9ric", "middle": [], "last": "Fleury", "suffix": "" } ], "year": 2019, "venue": "Proc. of ACL", "volume": "", "issue": "", "pages": "3651--3657", "other_ids": { "DOI": [ "10.18653/v1/P19-1356" ] }, "num": null, "urls": [], "raw_text": "Ganesh Jawahar, Beno\u00eet Sagot, Djam\u00e9 Seddah, Samuel Unicomb, Gerardo I\u00f1iguez, M\u00e1rton Karsai, Yannick L\u00e9o, M\u00e1rton Karsai, Carlos Sarraute,\u00c9ric Fleury, et al. 2019. What does BERT learn about the struc- ture of language? In Proc. of ACL, pages 3651- 3657.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "An empirical evaluation of doc2vec with practical insights into document embedding generation", "authors": [ { "first": "Han", "middle": [], "last": "Jey", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Lau", "suffix": "" }, { "first": "", "middle": [], "last": "Baldwin", "suffix": "" } ], "year": 2016, "venue": "Proc. of ACL Workshop on Representation Learning for NLP", "volume": "", "issue": "", "pages": "78--86", "other_ids": { "DOI": [ "10.18653/v1/W16-1609" ] }, "num": null, "urls": [], "raw_text": "Jey Han Lau and Timothy Baldwin. 2016. An em- pirical evaluation of doc2vec with practical insights into document embedding generation. In Proc. of ACL Workshop on Representation Learning for NLP, pages 78-86.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Distributed representations of sentences and documents", "authors": [ { "first": "Quoc", "middle": [], "last": "Le", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2014, "venue": "Proc. of ICML", "volume": "", "issue": "", "pages": "1188--1196", "other_ids": { "DOI": [ "https://dl.acm.org/doi/10.5555/3044805.3045025" ] }, "num": null, "urls": [], "raw_text": "Quoc Le and Tomas Mikolov. 2014. Distributed repre- sentations of sentences and documents. In Proc. of ICML, pages II1188-II1196.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Neural text generation from structured data with application to the biography domain", "authors": [ { "first": "R\u00e9mi", "middle": [], "last": "Lebret", "suffix": "" }, { "first": "David", "middle": [], "last": "Grangier", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" } ], "year": 2016, "venue": "Proc. of EMNLP", "volume": "", "issue": "", "pages": "1203--1213", "other_ids": { "DOI": [ "10.18653/v1/D16-1128" ] }, "num": null, "urls": [], "raw_text": "R\u00e9mi Lebret, David Grangier, and Michael Auli. 2016. Neural text generation from structured data with application to the biography domain. In Proc. 
of EMNLP, pages 1203-1213.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Linguistic knowledge and transferability of contextual representations", "authors": [ { "first": "F", "middle": [], "last": "Nelson", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yonatan", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "", "middle": [], "last": "Belinkov", "suffix": "" }, { "first": "E", "middle": [], "last": "Matthew", "suffix": "" }, { "first": "Noah A", "middle": [], "last": "Peters", "suffix": "" }, { "first": "", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2019, "venue": "Proc. of NAACL", "volume": "", "issue": "", "pages": "1073--1094", "other_ids": { "DOI": [ "10.18653/v1/N19-1112" ] }, "num": null, "urls": [], "raw_text": "Nelson F Liu, Matt Gardner, Yonatan Belinkov, Matthew E Peters, and Noah A Smith. 2019. Lin- guistic knowledge and transferability of contextual representations. In Proc. of NAACL, pages 1073- 1094.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Generating Wikipedia by summarizing long sequences", "authors": [ { "first": "J", "middle": [], "last": "Peter", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Etienne", "middle": [], "last": "Saleh", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Pot", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Goodrich", "suffix": "" }, { "first": "Lukasz", "middle": [], "last": "Sepassi", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "", "middle": [], "last": "Shazeer", "suffix": "" } ], "year": 2018, "venue": "Proc. of ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter J Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. 2018. Generating Wikipedia by summariz- ing long sequences. In Proc. of ICLR.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Learning structured text representations", "authors": [ { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2018, "venue": "TACL", "volume": "6", "issue": "", "pages": "63--75", "other_ids": { "DOI": [ "10.1162/tacl_a_00005" ] }, "num": null, "urls": [], "raw_text": "Yang Liu and Mirella Lapata. 2018. Learning struc- tured text representations. TACL, 6:63-75.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Effective approaches to attention-based neural machine translation", "authors": [ { "first": "Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Pham", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "Proc. of EMNLP", "volume": "", "issue": "", "pages": "1412--1421", "other_ids": { "DOI": [ "10.18653/v1/D15-1166" ] }, "num": null, "urls": [], "raw_text": "Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention-based neu- ral machine translation. In Proc. 
of EMNLP, pages 1412-1421.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Summarunner: A recurrent neural network based sequence model for extractive summarization of documents", "authors": [ { "first": "Ramesh", "middle": [], "last": "Nallapati", "suffix": "" }, { "first": "Feifei", "middle": [], "last": "Zhai", "suffix": "" }, { "first": "Bowen", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2017, "venue": "Proc. of AAAI", "volume": "", "issue": "", "pages": "3075--3081", "other_ids": { "DOI": [ "https://dl.acm.org/doi/10.5555/3298483.3298681" ] }, "num": null, "urls": [], "raw_text": "Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017. Summarunner: A recurrent neural network based se- quence model for extractive summarization of docu- ments. In Proc. of AAAI, pages 3075-3081.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Abstractive text summarization using sequence-to-sequence rnns and beyond", "authors": [ { "first": "Ramesh", "middle": [], "last": "Nallapati", "suffix": "" }, { "first": "Bowen", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Caglar", "middle": [], "last": "Cicero Dos Santos", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Gulcehre", "suffix": "" }, { "first": "", "middle": [], "last": "Xiang", "suffix": "" } ], "year": 2016, "venue": "Proc. of CoNLL", "volume": "", "issue": "", "pages": "280--290", "other_ids": { "DOI": [ "10.18653/v1/K16-1028" ] }, "num": null, "urls": [], "raw_text": "Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, Caglar Gulcehre, and Bing Xiang. 2016. Abstrac- tive text summarization using sequence-to-sequence rnns and beyond. In Proc. of CoNLL, pages 280- 290.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Dont give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization", "authors": [ { "first": "Shashi", "middle": [], "last": "Narayan", "suffix": "" }, { "first": "B", "middle": [], "last": "Shay", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Cohen", "suffix": "" }, { "first": "", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2018, "venue": "Proc. of EMNLP", "volume": "", "issue": "", "pages": "1797--1807", "other_ids": { "DOI": [ "10.18653/v1/D18-1206" ] }, "num": null, "urls": [], "raw_text": "Shashi Narayan, Shay B Cohen, and Mirella Lapata. 2018. Dont give me the details, just the summary! topic-aware convolutional neural networks for ex- treme summarization. In Proc. of EMNLP, pages 1797-1807.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Multireward reinforced summarization with saliency and entailment", "authors": [ { "first": "Ramakanth", "middle": [], "last": "Pasunuru", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" } ], "year": 2018, "venue": "Proc. of NAACL", "volume": "", "issue": "", "pages": "646--653", "other_ids": { "DOI": [ "10.18653/v1/N18-2102" ] }, "num": null, "urls": [], "raw_text": "Ramakanth Pasunuru and Mohit Bansal. 2018. Multi- reward reinforced summarization with saliency and entailment. In Proc. 
of NAACL, pages 646-653.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Structuring wikipedia articles with section recommendations", "authors": [ { "first": "Tiziano", "middle": [], "last": "Piccardi", "suffix": "" }, { "first": "Michele", "middle": [], "last": "Catasta", "suffix": "" }, { "first": "Leila", "middle": [], "last": "Zia", "suffix": "" }, { "first": "Robert", "middle": [], "last": "West", "suffix": "" } ], "year": 2018, "venue": "Proc. of SIGIR", "volume": "", "issue": "", "pages": "665--674", "other_ids": { "DOI": [ "10.1145/3209978.3209984" ] }, "num": null, "urls": [], "raw_text": "Tiziano Piccardi, Michele Catasta, Leila Zia, and Robert West. 2018. Structuring wikipedia articles with section recommendations. In Proc. of SIGIR, pages 665-674.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Johannes Daxenberger, Christian Stab, and Iryna Gurevych. 2019. Classification and clustering of arguments with contextualized word embeddings", "authors": [ { "first": "Nils", "middle": [], "last": "Reimers", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Schiller", "suffix": "" }, { "first": "Tilman", "middle": [], "last": "Beck", "suffix": "" } ], "year": null, "venue": "Proc. of ACL", "volume": "", "issue": "", "pages": "567--578", "other_ids": { "DOI": [ "10.18653/v1/P19-1054" ] }, "num": null, "urls": [], "raw_text": "Nils Reimers, Benjamin Schiller, Tilman Beck, Jo- hannes Daxenberger, Christian Stab, and Iryna Gurevych. 2019. Classification and clustering of ar- guments with contextualized word embeddings. In Proc. of ACL, pages 567-578.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Vmeasure: A conditional entropy-based external cluster evaluation measure", "authors": [ { "first": "Andrew", "middle": [], "last": "Rosenberg", "suffix": "" }, { "first": "Julia", "middle": [], "last": "Hirschberg", "suffix": "" } ], "year": 2007, "venue": "Proc. of EMNLP-CoNLL", "volume": "", "issue": "", "pages": "410--420", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew Rosenberg and Julia Hirschberg. 2007. V- measure: A conditional entropy-based external clus- ter evaluation measure. In Proc. of EMNLP-CoNLL, pages 410-420.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Leveraging pre-trained checkpoints for sequence generation tasks", "authors": [ { "first": "Sascha", "middle": [], "last": "Rothe", "suffix": "" }, { "first": "Shashi", "middle": [], "last": "Narayan", "suffix": "" }, { "first": "Aliaksei", "middle": [], "last": "Severyn", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.12461" ] }, "num": null, "urls": [], "raw_text": "Sascha Rothe, Shashi Narayan, and Aliaksei Sev- eryn. 2019. Leveraging pre-trained checkpoints for sequence generation tasks. arXiv preprint arXiv:1907.12461.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Understanding correlation", "authors": [ { "first": "J", "middle": [], "last": "Rudolph", "suffix": "" }, { "first": "", "middle": [], "last": "Rummel", "suffix": "" } ], "year": 1976, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rudolph J Rummel. 1976. Understanding correlation. 
Honolulu: Department of Political Science, Univer- sity of Hawaii.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "A neural attention model for abstractive sentence summarization", "authors": [ { "first": "Alexander", "middle": [ "M" ], "last": "Rush", "suffix": "" }, { "first": "Sumit", "middle": [], "last": "Chopra", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2015, "venue": "Proc. of EMNLP", "volume": "", "issue": "", "pages": "379--389", "other_ids": { "DOI": [ "10.18653/v1/D15-1044" ] }, "num": null, "urls": [], "raw_text": "Alexander M Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sen- tence summarization. In Proc. of EMNLP, pages 379-389.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Get to the point: Summarization with pointer-generator networks", "authors": [ { "first": "Abigail", "middle": [], "last": "See", "suffix": "" }, { "first": "Peter", "middle": [ "J" ], "last": "Liu", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2017, "venue": "Proc. of ACL", "volume": "", "issue": "", "pages": "1073--1083", "other_ids": { "DOI": [ "10.18653/v1/P17-1099" ] }, "num": null, "urls": [], "raw_text": "Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointer- generator networks. In Proc. of ACL, pages 1073- 1083.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "MASS: Masked sequence to sequence pre-training for language generation", "authors": [ { "first": "Kaitao", "middle": [], "last": "Song", "suffix": "" }, { "first": "Xu", "middle": [], "last": "Tan", "suffix": "" }, { "first": "Tao", "middle": [], "last": "Qin", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Tie-Yan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "Proc. of ICML", "volume": "", "issue": "", "pages": "5926--5936", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie- Yan Liu. 2019. MASS: Masked sequence to se- quence pre-training for language generation. In Proc. of ICML, pages 5926-5936.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Proc. of NeurIPS", "volume": "", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proc. 
of NeurIPS, pages 5998-6008.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Encouraging paragraph embeddings to remember sentence identity improves classification", "authors": [ { "first": "Tu", "middle": [], "last": "Vu", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" } ], "year": 2019, "venue": "Proc. of ACL", "volume": "", "issue": "", "pages": "6331--6338", "other_ids": { "DOI": [ "10.18653/v1/P19-1638" ] }, "num": null, "urls": [], "raw_text": "Tu Vu and Mohit Iyyer. 2019. Encouraging paragraph embeddings to remember sentence identity improves classification. In Proc. of ACL, pages 6331-6338.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Pretraining-based natural language generation for text summarization", "authors": [ { "first": "Haoyu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jingjing", "middle": [], "last": "Cai", "suffix": "" }, { "first": "Jianjun", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Ji", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2019, "venue": "Proc. of CoNLL", "volume": "", "issue": "", "pages": "789--797", "other_ids": {}, "num": null, "urls": [], "raw_text": "Haoyu Zhang, Jingjing Cai, Jianjun Xu, and Ji Wang. 2019. Pretraining-based natural language genera- tion for text summarization. In Proc. of CoNLL, pages 789-797.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "This email could save your life: Introducing the task of email subject line generation", "authors": [ { "first": "Rui", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Joel", "middle": [], "last": "Tetreault", "suffix": "" } ], "year": 2019, "venue": "Proc. of ACL", "volume": "", "issue": "", "pages": "446--456", "other_ids": { "DOI": [ "10.18653/v1/P19-1043" ] }, "num": null, "urls": [], "raw_text": "Rui Zhang and Joel Tetreault. 2019. This email could save your life: Introducing the task of email subject line generation. Proc. of ACL, pages 446-456.", "links": null } }, "ref_entries": { "FIGREF1": { "uris": null, "num": null, "text": "Rouge-L scores for each model over test data of length 5-10 tokens (300K test samples), segmented according to extractive score.", "type_str": "figure" }, "FIGREF3": { "uris": null, "num": null, "text": "Rouge-L scores for each model over the Gigaword test data of length 5-10 tokens, segmented according to extractive score. Each data segment contains at least 35 samples.", "type_str": "figure" }, "FIGREF4": { "uris": null, "num": null, "text": "mirrors the trend in the Wikipedia data", "type_str": "figure" }, "TABREF0": { "html": null, "text": ", there are notable differences between these data sets.", "num": null, "content": "
                      Wikipedia   Gigaword
Total size            14.3M       4.4M
Train size            11.43M      4.2M
Test size             1.43M       1.9K
Dev size              1.44M       210K
Distinct titles       45.25%      80.45%
Unique titles         41.82%      70.28%
Most common title     3.35%       0.17%
Avg. words per title  2.65        8.64
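As an illustrative aside (not part of the table or the paper's released code), statistics of this kind can be computed from a flat list of section titles. The sketch below assumes "Distinct titles" is the number of distinct title strings divided by the number of sections, "Unique titles" is the share of sections whose title occurs exactly once, and "Most common title" is the share of sections carrying the single most frequent title; the paper may define these counts differently.

# Hypothetical sketch: title statistics from a list of section titles.
# The definitions of "distinct", "unique", and "most common" are assumptions.
from collections import Counter

def title_statistics(titles):
    counts = Counter(titles)
    n = len(titles)
    return {
        "total_size": n,
        "distinct_titles_pct": 100.0 * len(counts) / n,
        "unique_titles_pct": 100.0 * sum(1 for c in counts.values() if c == 1) / n,
        "most_common_title_pct": 100.0 * counts.most_common(1)[0][1] / n,
        "avg_words_per_title": sum(len(t.split()) for t in titles) / n,
    }

print(title_statistics(["history", "history", "early life", "track listing"]))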
", "type_str": "table" }, "TABREF2": { "html": null, "text": "Results on Wikipedia section heading generation over the full test set.", "num": null, "content": "
# Tokens    1-5    5-10   10-15  15+
Data Size   1M     300K   56K    9K
TRANS       52.5   53.8   36.3   20.8
RNN         54.0   39.6   35.1   25.3
RNN+SC      55.8   44.2   37.7   24.0
RNN+ATTN    54.4   55.7   47.8   32.9
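Length-bucketed scores of this kind (and the Rouge-L figures referenced in the figure captions) can be obtained by grouping test pairs by the token length of the reference title and averaging Rouge-L within each bucket. The sketch below is a minimal illustration using the open-source rouge-score package, not the paper's evaluation code; the bucket boundaries follow the column headers above and the aggregation details are assumptions.

# Hypothetical sketch: mean Rouge-L F1 per reference-length bucket,
# mirroring the 1-5 / 5-10 / 10-15 / 15+ columns above.
from rouge_score import rouge_scorer

def rouge_l_by_length(references, predictions):
    scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
    buckets = {"1-5": [], "5-10": [], "10-15": [], "15+": []}
    for ref, pred in zip(references, predictions):
        n = len(ref.split())
        key = "1-5" if n <= 5 else "5-10" if n <= 10 else "10-15" if n <= 15 else "15+"
        buckets[key].append(scorer.score(ref, pred)["rougeL"].fmeasure)
    # Report the mean score (as a percentage) for each non-empty bucket.
    return {k: 100.0 * sum(v) / len(v) for k, v in buckets.items() if v}

print(rouge_l_by_length(["early life and career"], ["early life"]))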
", "type_str": "table" }, "TABREF3": { "html": null, "text": "", "num": null, "content": "", "type_str": "table" }, "TABREF6": { "html": null, "text": "", "num": null, "content": "
", "type_str": "table" }, "TABREF8": { "html": null, "text": "Results on Wikipedia section clustering. The RNN+SC model performs the best on all metrics.", "num": null, "content": "
", "type_str": "table" } } } }