|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T07:24:37.949234Z" |
|
}, |
|
"title": "Large-scale text pre-training helps with dialogue act recognition, but not without fine-tuning", |
|
"authors": [ |
|
{ |
|
"first": "Bill", |
|
"middle": [], |
|
"last": "Noble", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Centre for Linguistic Theory and Studies in Probability", |
|
"institution": "University of Gothenburg", |
|
"location": {} |
|
}, |
|
"email": "bill.noble@gu.se" |
|
}, |
|
{ |
|
"first": "Vladislav", |
|
"middle": [], |
|
"last": "Maraev", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Centre for Linguistic Theory and Studies in Probability", |
|
"institution": "University of Gothenburg", |
|
"location": {} |
|
}, |
|
"email": "vladislav.maraev@gu.se" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We use dialogue act recognition (DAR) to investigate how well BERT represents utterances in dialogue, and how fine-tuning and large-scale pre-training contribute to its performance. We find that while both the standard BERT pre-training and pretraining on dialogue-like data are useful, task-specific finetuning is essential for good performance.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We use dialogue act recognition (DAR) to investigate how well BERT represents utterances in dialogue, and how fine-tuning and large-scale pre-training contribute to its performance. We find that while both the standard BERT pre-training and pretraining on dialogue-like data are useful, task-specific finetuning is essential for good performance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Large-scale neural language models trained on massive corpora of text data have achieved state-ofthe-art results on a variety of traditional NLP tasks. Given that dialogue, especially spoken dialogue, is radically different from the kind of data these language models are pre-trained on, it is uncertain whether they would be useful for dialogue-oriented tasks. In the example from the Switchboard corpus, shown in Table 1 , it is evident that the structure of dialogue is quite different from that of written text. Not only is the internal structure of contributions different-with features such as disfluencies, repair, incomplete sentences, and various vocal soundsbut the sequential structure of the discourse is different as well.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 415, |
|
"end": 422, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this paper, we investigate how well one such large-scale language model, BERT (Devlin et al., 2019) , represents utterances in dialogue. We use dialogue act recognition (DAR) as a proxy task, since both the internal content and the sequential structure of utterances has bearing on this task", |
|
"cite_spans": [ |
|
{ |
|
"start": 81, |
|
"end": 102, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We have two main contributions. First we find that while standard BERT pre-training is useful, the model performs poorly without fine-tuning ( \u00a73.1). Second, we find that further pre-training with data from the target domain shows promise for dialogue, but the results are mixed when pre-training with a larger corpus of dialogical data from outside the target domain ( \u00a73.2). Example from the SWDA corpus (sw2827).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
|
{ |
|
"text": "1 Background", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The concept of a dialogue act is based on that of speech acts (Austin and Urmson, 2009) . Breaking with classical semantic theory, speech act theory considers not only the propositional content of an utterance but also the actions, such as promising or apologizing, it carries out. Dialogue acts extend the concept of the speech act, with a focus on the interactional nature of most speech.", |
|
"cite_spans": [ |
|
{ |
|
"start": 62, |
|
"end": 87, |
|
"text": "(Austin and Urmson, 2009)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dialogue Act Recognition", |
|
"sec_num": "1.1" |
|
}, |
|
{ |
|
"text": "DAR is the task of labeling utterances with the dialogue act they perform from a given set of dialogue act tags. As with other sequence labeling tasks in NLP, some notion of context is helpful in DAR. One of the first performant machine learning models for DAR was a Hidden Markov Model that used various lexical and prosodic features as input (Stolcke et al., 2000) . Most successful neural approaches also model some notion of context (e.g., Kalchbrenner and Blunsom, 2013; Tran et al., 2017a; Bothe et al., 2018b,a; Zhao and Kawahara, 2018) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 344, |
|
"end": 366, |
|
"text": "(Stolcke et al., 2000)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 444, |
|
"end": 475, |
|
"text": "Kalchbrenner and Blunsom, 2013;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 476, |
|
"end": 495, |
|
"text": "Tran et al., 2017a;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 496, |
|
"end": 518, |
|
"text": "Bothe et al., 2018b,a;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 519, |
|
"end": 543, |
|
"text": "Zhao and Kawahara, 2018)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dialogue Act Recognition", |
|
"sec_num": "1.1" |
|
}, |
|
{ |
|
"text": "Transfer learning techniques allow a model trained on one task-often unsupervised-to be applied to another. Since annotating natural language data is expensive, there is a lot of interest in transfer learning for natural language processing. Word vectors (e.g., Mikolov et al., 2013; Pennington et al., 2014) are a ubiquitous example of transfer learning in NLP. We note, however, that pre-trained word vectors are not always useful when applied to dialogue (Cerisara et al., 2017) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 262, |
|
"end": 283, |
|
"text": "Mikolov et al., 2013;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 284, |
|
"end": 308, |
|
"text": "Pennington et al., 2014)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 458, |
|
"end": 481, |
|
"text": "(Cerisara et al., 2017)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transfer learning for NLP", |
|
"sec_num": "1.2" |
|
}, |
|
{ |
|
"text": "BERT, a multi-layer transformer model (Devlin et al., 2019) , is pre-trained on two unsupervised tasks: masked token prediction and next sentence prediction. In masked token prediction, some percentage of words are randomly replaced with a mask token. The model is trained to predict the identity of the original token based on the context sentence. In next sentence prediction, the model is given two sentences and trained to predict whether the second sentence follows the first in the original text or if it was randomly chosen from elsewhere in the corpus. After pre-training, BERT can be applied to a supervised task by adding additional un-trained layers that take the hidden state of one or more of BERT's layers as input.", |
|
"cite_spans": [ |
|
{ |
|
"start": 38, |
|
"end": 59, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transfer learning for NLP", |
|
"sec_num": "1.2" |
|
}, |
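{
"text": "To make the two pre-training objectives concrete, here is a minimal sketch (not the authors' implementation) of how a single training example combining masked token prediction and next sentence prediction can be constructed. The 15% masking rate and the [CLS]/[SEP]/[MASK] conventions follow Devlin et al. (2019); the function and variable names are invented for illustration, and BERT's 80/10/10 mask/random/keep refinement is omitted.

import random

MASK, CLS, SEP = '[MASK]', '[CLS]', '[SEP]'

def make_mlm_nsp_example(sent_a, sent_b, corpus_sentences, rng, mask_prob=0.15):
    # Next sentence prediction: keep the true follow-up half of the time,
    # otherwise substitute a random distractor drawn from the corpus.
    if rng.random() < 0.5:
        second, is_next = sent_b, 1
    else:
        second, is_next = rng.choice(corpus_sentences), 0

    tokens = [CLS] + sent_a + [SEP] + second + [SEP]

    # Masked token prediction: replace roughly mask_prob of the tokens with
    # [MASK] and remember the originals as prediction targets.
    inputs, mlm_targets = [], []
    for tok in tokens:
        if tok not in (CLS, SEP) and rng.random() < mask_prob:
            inputs.append(MASK)
            mlm_targets.append(tok)
        else:
            inputs.append(tok)
            mlm_targets.append(None)
    return inputs, mlm_targets, is_next

# Toy usage with whitespace-tokenized utterances.
rng = random.Random(0)
corpus = [['yeah', 'right'], ['so', 'how', 'do', 'you', 'like', 'it']]
example = make_mlm_nsp_example(['i', 'think', 'so'], ['yeah', 'right'], corpus, rng)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transfer learning for NLP",
"sec_num": "1.2"
},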
|
{ |
|
"text": "There is some previous work applying BERT to dialogue. Bao et al. (2020) and Chen et al. (2019) both use BERT for dialogue generation tasks. Similarly, Vig and Ramea (2019) find BERT useful for selecting a response from a list of candidate responses in a dialogue. Mehri et al. (2019) evaluate BERT in various dialogue tasks including DAR, and find that a model incorporating BERT outperforms a baseline model. Finally, Chakravarty et al. (2019) use BERT for dialogue act classification for a proprietary domain and achieves promising results, and Ribeiro et al. (2019) surpass the previous state-of-the-art on generic dialogue act recognition for Switchboard and MRDA corpora. This paper aims to supplement the findings of previous work by investigating how much of BERT's success for dialogue tasks is due to its extensive pre-training and how much is due to task-specific fine-tuning.", |
|
"cite_spans": [ |
|
{ |
|
"start": 55, |
|
"end": 72, |
|
"text": "Bao et al. (2020)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 77, |
|
"end": 95, |
|
"text": "Chen et al. (2019)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 265, |
|
"end": 284, |
|
"text": "Mehri et al. (2019)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 548, |
|
"end": 569, |
|
"text": "Ribeiro et al. (2019)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transfer learning for NLP", |
|
"sec_num": "1.2" |
|
}, |
|
{ |
|
"text": "Fine-tuning vs. further in-domain pre-training We experiment with the following two transfer learning strategies (Sun et al., 2019) : further pretraining, in which the model is trained in an un-supervised way, similar to its initial training scheme, but on data that is in-domain for the target task; and single-task fine-tuning, in which the Whether or not the encoder model has undergone further in-domain pre-training, there remains a choice of whether to fine-tune during task training, or simply extract features from the encoder model without training it (i.e., freezing). Freezing the encoder model is more efficient, since the gradient of the loss function need only be computed for the task-specific layers. However, fine-tuning can lead to better performance since the encoding itself is adapted to the target task and domain. Peters et al. (2019) investigate when it is best to fine-tune BERT for sentence classification tasks and find that when the target task is very similar to the pre-training task, fine-tuning provides less of a performance boost. We note that there is some conceptual relationship between DAR and next sentence prediction, since the dialogue act constrains (or at least is predictive of) the dialogue act that follows it. That said, the discourse strucutre of the encyclopedia and book data that makes up BERT's pre-training corpus is probably quite different from that of natural dialogue.", |
|
"cite_spans": [ |
|
{ |
|
"start": 113, |
|
"end": 131, |
|
"text": "(Sun et al., 2019)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 837, |
|
"end": 857, |
|
"text": "Peters et al. (2019)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transfer learning for NLP", |
|
"sec_num": "1.2" |
|
}, |
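{
"text": "A minimal sketch of the two task-training regimes (illustrative, not the authors' code): freezing excludes the encoder's parameters from optimization, while fine-tuning optimizes them together with the task-specific layers. The checkpoint name is the standard Hugging Face identifier; the 42-tag output size and the learning rate are placeholders.

import torch
from transformers import BertModel

encoder = BertModel.from_pretrained('bert-base-uncased')
task_head = torch.nn.Linear(encoder.config.hidden_size, 42)  # e.g. 42 SWDA tags

freeze_encoder = True  # the feature-extraction (frozen) condition

if freeze_encoder:
    for p in encoder.parameters():
        p.requires_grad = False               # no gradients flow into BERT
    trainable = list(task_head.parameters())  # only the task head is trained
else:
    trainable = list(encoder.parameters()) + list(task_head.parameters())

optimizer = torch.optim.Adam(trainable, lr=2e-5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transfer learning for NLP",
"sec_num": "1.2"
},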
|
{ |
|
"text": "We perform experiments on the Switchboard Dialogue Act Corpus (SWDA), which is a subset of the larger Switchboard corpus, and the dialogue acttagged portion of the AMI Meeting Corpus (AMI-DA). SWDA is tagged with a set of 220 dialogue act tags which, following Jurafsky et al. (1997) , we cluster into a smaller set of 42 tags. AMI uses a smaller tagset of 16 dialogue acts (Carletta, 2007) . See Table 2 for details.", |
|
"cite_spans": [ |
|
{ |
|
"start": 261, |
|
"end": 283, |
|
"text": "Jurafsky et al. (1997)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 374, |
|
"end": 390, |
|
"text": "(Carletta, 2007)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 397, |
|
"end": 404, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Preprocessing We make an effort to normalize transcription conventions across SWDA and AMI.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We remove disfluency annotations and slashes from the end of utterances in SWDA. In both corpora, acronyms are tokenized as individual letters. All utterances are lower-cased.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "2" |
|
}, |
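{
"text": "An illustrative sketch of this normalization step; the exact markup patterns handled below (curly-brace annotations, the utterance-final slash) are assumptions, since the paper does not spell out the rules.

import re

def normalize_swda(utterance):
    # Remove curly-brace annotation markers such as '{F ... }' (keeping the words),
    # strip the slash that ends an SWDA utterance, collapse whitespace, lower-case.
    text = re.sub(r'\{[A-Z]\s*', '', utterance)
    text = text.replace('}', '')
    text = re.sub(r'\s*/\s*$', '', text)
    text = re.sub(r'\s+', ' ', text)
    return text.lower().strip()

def spell_out_acronym(token):
    # Tokenize an acronym such as 'TV' as individual letters: ['t', 'v'].
    if len(token) > 1 and token.isalpha() and token.isupper():
        return list(token.lower())
    return [token.lower()]

print(normalize_swda('{F Uh, } I watch it every day /'))  # -> 'uh, i watch it every day'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "2"
},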
|
{ |
|
"text": "Utterances are tokenized with BERT's word piece tokenizer with a vocabulary of 30,000. To this vocabulary we added five speaker tokens and prepend each utterance with a speaker token that uniquely identifies the corresponding speaker within that dialogue.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "2" |
|
}, |
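{
"text": "A sketch of the speaker-token setup using the Hugging Face tokenizer API; the speaker token strings themselves are invented here, since the paper does not give them.

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')  # 30k word-piece vocab

# Five speaker tokens added to the vocabulary (names are illustrative).
speaker_tokens = ['[SPK1]', '[SPK2]', '[SPK3]', '[SPK4]', '[SPK5]']
tokenizer.add_special_tokens({'additional_special_tokens': speaker_tokens})
# The encoder's embedding matrix must then be resized, e.g.
# model.resize_token_embeddings(len(tokenizer)).

def encode_utterance(speaker_index, utterance):
    # Prepend the token identifying this speaker within the dialogue.
    return tokenizer.encode(speaker_tokens[speaker_index] + ' ' + utterance.lower())

ids = encode_utterance(0, 'okay so what do you think')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "2"
},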
|
{ |
|
"text": "We also experiment with three unlabeled dialogue corpora, which we use to provide further pretraining for the BERT encoder.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pre-training corpora", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "The first two corpora are constructed from the same source as the dialogue act corpora. We use the SWDA portion of the un-labeled Switchboard corpus (SWBD) and the entire AMI corpus (including the 32 dialogues with no human-annotated DA tags that are not included in the DAR training set). In both cases, we exclude dialogues that are reserved for DAR testing.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pre-training corpora", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "We also experiment with a much larger a corpus (350M tokens) constructed from OpenSubtitles (Lison and Tiedemann, 2016). Because utterances are not labeled with speaker, we randomly assigned a speaker token to each utterance to maintain the format of the other dialogue corpora.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pre-training corpora", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "The pre-training corpora were prepared for the combined masked language modeling and next sentence (utterance) prediction task, as described by Devlin et al. (2019) . For the smaller SWBD and AMI corpora, we generate and train on multiple epochs of data. Since there is randomness in the data preparation (e.g., which distractor sentences are chosen and which tokens are masked), we generate each training epoch separately. 1", |
|
"cite_spans": [ |
|
{ |
|
"start": 144, |
|
"end": 164, |
|
"text": "Devlin et al. (2019)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pre-training corpora", |
|
"sec_num": "2.1" |
|
}, |
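{
"text": "The per-epoch regeneration can be sketched as follows (illustrative); build_example stands for any example-construction routine such as the MLM+NSP sketch in section 1.2, and dialogues is assumed to be a list of tokenized utterance sequences.

import random

def generate_epoch(dialogues, build_example, seed):
    # A fresh seed per epoch means different tokens are masked and different
    # distractor utterances are drawn each time the data is regenerated.
    rng = random.Random(seed)
    all_utterances = [u for d in dialogues for u in d]
    return [build_example(first, second, all_utterances, rng)
            for d in dialogues
            for first, second in zip(d, d[1:])]

# e.g. ten independently generated epochs for the smaller SWBD and AMI corpora:
# epochs = [generate_epoch(dialogues, make_mlm_nsp_example, seed=e) for e in range(10)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-training corpora",
"sec_num": "2.1"
},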
|
{ |
|
"text": "We use a simple neural architecture with two components: an encoder that vectorizes utterances (BERT), and single-layer RNN sequence model that takes the utterance representations as input. 2 At each time step, the RNN takes the encoded utterance as input and its hidden state is passed to a linear layer with softmax over dialogue act tags. 3 Conceptually, the encoded utterance represents the context-agnostic features of the utterance, and the hidden state of the RNN represents the full discourse context.", |
|
"cite_spans": [ |
|
{ |
|
"start": 190, |
|
"end": 191, |
|
"text": "2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 342, |
|
"end": 343, |
|
"text": "3", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "For the BERT utterance encoder, we use the BERT BASE model with hidden size of 768 and 12 transformer layers and self-attention heads (Devlin et al., 2019, \u00a73.1) . In our implementation, we use the un-cased model provided by Wolf et al. (2020) . The RNN has a hidden layer size of 100.", |
|
"cite_spans": [ |
|
{ |
|
"start": 134, |
|
"end": 161, |
|
"text": "(Devlin et al., 2019, \u00a73.1)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 225, |
|
"end": 243, |
|
"text": "Wolf et al. (2020)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "3" |
|
}, |
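{
"text": "A sketch of this architecture (an illustration under stated assumptions, not the authors' released code); in particular, the paper does not say which BERT hidden state serves as the utterance vector, so the [CLS] state is assumed here.

import torch
from torch import nn
from transformers import BertModel

class DARTagger(nn.Module):
    # BERT utterance encoder + single-layer RNN over the dialogue + per-utterance tag scores.
    def __init__(self, n_tags, rnn_hidden=100):
        super().__init__()
        self.encoder = BertModel.from_pretrained('bert-base-uncased')  # BERT BASE, uncased
        self.rnn = nn.RNN(self.encoder.config.hidden_size, rnn_hidden, batch_first=True)
        self.out = nn.Linear(rnn_hidden, n_tags)

    def forward(self, utterances):
        # utterances: a list of 1-D token-id tensors, one per utterance in the dialogue.
        vecs = []
        for ids in utterances:
            hidden = self.encoder(ids.unsqueeze(0)).last_hidden_state  # (1, len, 768)
            vecs.append(hidden[:, 0])                                  # assumed [CLS] vector
        dialogue = torch.stack(vecs, dim=1)       # (1, n_utterances, 768)
        context, _ = self.rnn(dialogue)           # discourse context at each step
        return torch.log_softmax(self.out(context), dim=-1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},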
|
{ |
|
"text": "First, we analyze how pre-training affects BERT's performance as an utterance encoder. To do so, we consider the performance of DAR models with three different utterance encoders:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pre-training vs. fine-tuning", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u2022 BERT-FT -pre-trained + DAR fine-tuning \u2022 BERT-FZ -pre-trained, frozen during DAR \u2022 BERT-RI -random init. + DAR fine-tuning BERT-FT is more accurate than BERT-RI by several percentage points on both DA corpora, suggesting that BERT's extensive pre-training does provide some useful information for DAR (Table 3). This performance boost is much more pronounced in the macro-averaged F1 score, 4 which is explained by the fact that at the tag level, pretraining has a larger impact on less frequent tags (see Figure 1 in the supplementary materials) .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 508, |
|
"end": 548, |
|
"text": "Figure 1 in the supplementary materials)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Pre-training vs. fine-tuning", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The BERT-FZ performs very poorly compared to either BERT-FT or BERT-RI, however. It is heavily biased towards the most frequent tags, which explains its especially poor macro-F1 score (Table 3 ). In SWDA, for example, the model with a frozen encoder predicts one of the two most common tags (Statement-non-opinion or Acknowledge) 86% of the time, whereas those two tags account for only 51% of the ground truth tags. BERT-FT is much less biased; it predicts the two most common tags only 59% of the time.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 184, |
|
"end": 192, |
|
"text": "(Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Pre-training vs. fine-tuning", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Next, we assess the effect of additional dialogue pre-training on BERT's performance as an utter-ance encoder. 5 Sun et al. (2019) has reported that performing additional pre-training on unlabeled in-domain data improves performance on classification tasks. We want to see if BERT can benefit from pre-training on dialogue data, including from data outside the immediate target domain.", |
|
"cite_spans": [ |
|
{ |
|
"start": 111, |
|
"end": 112, |
|
"text": "5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 113, |
|
"end": 130, |
|
"text": "Sun et al. (2019)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Impact of dialogue pre-training", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "For each of the target corpora (SWDA and AMI-DA), we compare four different pre-training conditions: The in-domain corpus (ID), consisting of the AMI pre-training corpus for the AMI-DA model and the SWBD pre-traning corpus for the SWDA model; the cross-domain corpus (CC), consisting of both the AMI and SWBD pre-training corpora; and finally the OpenSubtitles corpus (OS). As before, we experiment with both frozen and fine-tuned models at the task training stage.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Impact of dialogue pre-training", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "We performed 10 epochs of pre-training on the in-domain models and 5 epochs of pre-training on the cross-domain models so that the total amount of training data was comparable. The OpenSubtitles models were trained for only one epoch but with much more total training time.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Impact of dialogue pre-training", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "In the fine-tuned condition, additional pretraining offers a modest boost in overall accuracy and a substantial boost to the macro-F1 scores, with the cross-domain corpus providing the largest boost. In the frozen condition, only the very large OpenSubtitles corpus is helpful, suggesting that when adapting BERT to dialogue, the size of the corpus is more important than its quality or fidelity to the target domain. Still, pre-training provides nowhere near the performance improvement achieved by fine-tuning on the target task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Impact of dialogue pre-training", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "A key aspiration of transfer learning is to expose the model to phenomena that are too infrequent to learn from labeled training data alone. We show some evidence of that here. Pre-trained BERT-FT performs better on infrequent dialogue acts than BERT-RI, suggesting it draws on the extensive pre-training to represent infrequent features of those utterances. Indeed, a simple lexical probe supports this explanation: in utterances where the pre-trained model is correct and the randomly initialized model is not, the rarest word is 1.9 times rarer on average than is typical of corpus as a whole. Table 3 : Comparison of macro-F1 and accuracy with further in-domain (ID), cross-domain corpus (CC), and OpenSubtitles (OS) dialogue pre-training, for the frozen (FZ) and fine-tuned (FT) conditions. BERT-RI uses a randomly initialized utterance encoder with no pre-training but with fine-tuning.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 597, |
|
"end": 604, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In spite of that, the representations learned through pre-training are simply not performant without task-specific fine-tuning, suggesting that they are fundamentally lacking in information that is important for the dialogue context. We should note that this is in stark contrast to many other non-dialogical semantic tasks, where frozen BERT performs on par or better than the fine-tuned model (Peters et al., 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 395, |
|
"end": 416, |
|
"text": "(Peters et al., 2019)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "By performing additional pre-training on a large dialogue-like corpus (OpenSubtitles), we were able to raise the performance of the frozen encoder by a small amount. This deserves further investigation. Bao et al. (2020) find that further pre-training BERT on a large-scale Reddit and Twitter corpus is helpful for response selection, but given the unimpressive results with subtitles, it remains an open question how well the text chat and social media domains transfer to natural dialogue.", |
|
"cite_spans": [ |
|
{ |
|
"start": 203, |
|
"end": 220, |
|
"text": "Bao et al. (2020)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "There is also abundant room to investigate how speech-related information, such as laughter, prosody, and disfluencies can be incorporated into a DAR model that uses pre-trained features. Stolcke et al. (2000) showed, for example, that dialogue acts can have specific prosodic manifestations that can be used to improve dialogue act classification. Incorporating such information is crucial if models pre-trained on large-scale text corpora are to be adapted for use in dialogue applications.", |
|
"cite_spans": [ |
|
{ |
|
"start": 188, |
|
"end": 209, |
|
"text": "Stolcke et al. (2000)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "For details, see the finetuning example from Hugging Face.2 We have experimented with LSTM as the sequence model, but the accuracy was not significantly different compared to RNN. It can be explained by the absence of longer distance dependencies on this level of our model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Other work has shown that DAR benefits from more sophisticated decoding, such as conditional random field(Chen et al., 2018) and uncertainty propagation (Tran et al., 2017b).4 We report both accuracy (which is equal to microaveraged or class-weighted F1) and macro-F1, which is the unweighted average of the F1 scores of each class.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
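{
"text": "For concreteness, the two scores can be computed as follows (a scikit-learn sketch, not the authors' evaluation code); for single-label classification, micro-averaged F1 coincides with accuracy, while macro-F1 averages per-class F1 scores without weighting by class frequency.

from sklearn.metrics import accuracy_score, f1_score

y_true = ['sd', 'b', 'sd', 'sv', 'b', 'x']  # toy gold dialogue act tags
y_pred = ['sd', 'b', 'sv', 'sd', 'b', 'x']  # toy predictions

accuracy = accuracy_score(y_true, y_pred)              # = 4/6, equals micro-averaged F1
micro_f1 = f1_score(y_true, y_pred, average='micro')   # same value as accuracy
macro_f1 = f1_score(y_true, y_pred, average='macro')   # unweighted mean of per-class F1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},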
|
{ |
|
"text": "In-domain pre-training is sometimes referred to as finetuning, but we reserve that term for task-specific training on labeled data.6 Kozareva and Ravi (2019)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "A c k n o w le d g e ( B a c k c h a n n e l) S ta te m e n to p in io n S e g m e n t ( m u lt iu tt e r a n c e ) A b a n ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "annex", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "How to Do Things with Words: The William James Lectures Delivered at Harvard University in 1955", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Austin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [ |
|
"O" |
|
], |
|
"last": "Urmson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John L. Austin and James O. Urmson. 2009. How to Do Things with Words: The William James Lec- tures Delivered at Harvard University in 1955, 2. ed., [repr.] edition. Harvard Univ. Press, Cambridge, Mass. OCLC: 935786421.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "PLATO: Pre-trained Dialogue Generation Model with Discrete Latent Variable", |
|
"authors": [ |
|
{ |
|
"first": "Siqi", |
|
"middle": [], |
|
"last": "Bao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Huang", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fan", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hua", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Haifeng", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "85--96", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.9" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Siqi Bao, Huang He, Fan Wang, Hua Wu, and Haifeng Wang. 2020. PLATO: Pre-trained Dialogue Genera- tion Model with Discrete Latent Variable. In Pro- ceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 85-96, Online. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Conversational analysis using utterance-level attention-based bidirectional recurrent neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Chandrakant", |
|
"middle": [], |
|
"last": "Bothe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sven", |
|
"middle": [], |
|
"last": "Magg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cornelius", |
|
"middle": [], |
|
"last": "Weber", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stefan", |
|
"middle": [], |
|
"last": "Wermter", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proc", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "996--1000", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chandrakant Bothe, Sven Magg, Cornelius Weber, and Stefan Wermter. 2018a. Conversational analysis us- ing utterance-level attention-based bidirectional re- current neural networks. Proc. Interspeech 2018, pages 996-1000.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "A Context-based Approach for Dialogue Act Recognition using Simple Recurrent Neural Networks", |
|
"authors": [ |
|
{ |
|
"first": "Chandrakant", |
|
"middle": [], |
|
"last": "Bothe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cornelius", |
|
"middle": [], |
|
"last": "Weber", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sven", |
|
"middle": [], |
|
"last": "Magg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stefan", |
|
"middle": [], |
|
"last": "Wermter", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chandrakant Bothe, Cornelius Weber, Sven Magg, and Stefan Wermter. 2018b. A Context-based Approach for Dialogue Act Recognition using Simple Re- current Neural Networks. In Proceedings of the Eleventh International Conference on Language Re- sources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Unleashing the killer corpus: Experiences in creating the multi-everything AMI Meeting Corpus. Language Resources and Evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Jean", |
|
"middle": [], |
|
"last": "Carletta", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "", |
|
"volume": "41", |
|
"issue": "", |
|
"pages": "181--190", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1007/s10579-007-9040-x" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jean Carletta. 2007. Unleashing the killer corpus: Experiences in creating the multi-everything AMI Meeting Corpus. Language Resources and Evalu- ation, 41(2):181-190.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "On the effects of using word2vec representations in neural networks for dialogue act recognition", |
|
"authors": [ |
|
{ |
|
"first": "Christophe", |
|
"middle": [], |
|
"last": "Cerisara", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pavel", |
|
"middle": [], |
|
"last": "Kr\u00e1l", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ladislav", |
|
"middle": [], |
|
"last": "Lenc", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Computer Speech & Language", |
|
"volume": "47", |
|
"issue": "", |
|
"pages": "175--193", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1016/j.csl.2017.07.009" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christophe Cerisara, Pavel Kr\u00e1l, and Ladislav Lenc. 2017. On the effects of using word2vec representa- tions in neural networks for dialogue act recognition. Computer Speech & Language, 47:175-193.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Dialog acts classification for question-answer corpora", |
|
"authors": [ |
|
{ |
|
"first": "Saurabh", |
|
"middle": [], |
|
"last": "Chakravarty", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raja", |
|
"middle": [], |
|
"last": "Venkata Satya Phanindra Chava", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edward A", |
|
"middle": [], |
|
"last": "Fox", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "ASAIL@ ICAIL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Saurabh Chakravarty, Raja Venkata Satya Phanindra Chava, and Edward A Fox. 2019. Dialog acts clas- sification for question-answer corpora. In ASAIL@ ICAIL.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Semantically Conditioned Dialog Response Generation via Hierarchical Disentangled Self-Attention", |
|
"authors": [ |
|
{ |
|
"first": "Wenhu", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianshu", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pengda", |
|
"middle": [], |
|
"last": "Qin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xifeng", |
|
"middle": [], |
|
"last": "Yan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [ |
|
"Yang" |
|
], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3696--3709", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-1360" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wenhu Chen, Jianshu Chen, Pengda Qin, Xifeng Yan, and William Yang Wang. 2019. Semantically Con- ditioned Dialog Response Generation via Hierarchi- cal Disentangled Self-Attention. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 3696-3709, Florence, Italy. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Dialogue Act Recognition via CRF-Attentive Structured Network", |
|
"authors": [ |
|
{ |
|
"first": "Zheqian", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rongqin", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhou", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Deng", |
|
"middle": [], |
|
"last": "Cai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaofei", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, SIGIR '18", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "225--234", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/3209978.3209997" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zheqian Chen, Rongqin Yang, Zhou Zhao, Deng Cai, and Xiaofei He. 2018. Dialogue Act Recognition via CRF-Attentive Structured Network. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, SIGIR '18, pages 225-234, New York, NY, USA. Association for Computing Machinery.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1423" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Un- derstanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Switchboard SWBD-DAMSL Shallow-Discourse-Function Annotation Coders Manual", |
|
"authors": [ |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Jurafsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Liz", |
|
"middle": [], |
|
"last": "Shriberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Debra", |
|
"middle": [], |
|
"last": "Biasca", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel Jurafsky, Liz Shriberg, and Debra Biasca. 1997. Switchboard SWBD-DAMSL Shallow-Discourse- Function Annotation Coders Manual.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Recurrent Convolutional Neural Networks for Discourse Compositionality", |
|
"authors": [ |
|
{ |
|
"first": "Nal", |
|
"middle": [], |
|
"last": "Kalchbrenner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Phil", |
|
"middle": [], |
|
"last": "Blunsom", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the Workshop on Continuous Vector Space Models and Their Compositionality", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "119--126", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent Convolutional Neural Networks for Discourse Com- positionality. In Proceedings of the Workshop on Continuous Vector Space Models and Their Compo- sitionality, pages 119-126.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "ProSeqo: Projection Sequence Networks for On-Device Text Classification", |
|
"authors": [ |
|
{ |
|
"first": "Zornitsa", |
|
"middle": [], |
|
"last": "Kozareva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sujith", |
|
"middle": [], |
|
"last": "Ravi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3894--3903", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1402" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zornitsa Kozareva and Sujith Ravi. 2019. ProSeqo: Projection Sequence Networks for On-Device Text Classification. In Proceedings of the 2019 Confer- ence on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 3894-3903, Hong Kong, China. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "OpenSubti-tles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles", |
|
"authors": [ |
|
{ |
|
"first": "Pierre", |
|
"middle": [], |
|
"last": "Lison", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jorg", |
|
"middle": [], |
|
"last": "Tiedemann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 10th International Conference on Language Resources and Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pierre Lison and Jorg Tiedemann. 2016. OpenSubti- tles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles. In Proceedings of the 10th International Conference on Language Resources and Evaluation, page 7.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Pretraining Methods for Dialog Context Representation Learning", |
|
"authors": [ |
|
{ |
|
"first": "Shikib", |
|
"middle": [], |
|
"last": "Mehri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Evgeniia", |
|
"middle": [], |
|
"last": "Razumovskaia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tiancheng", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maxine", |
|
"middle": [], |
|
"last": "Eskenazi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3836--3845", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-1373" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shikib Mehri, Evgeniia Razumovskaia, Tiancheng Zhao, and Maxine Eskenazi. 2019. Pretraining Methods for Dialog Context Representation Learn- ing. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3836-3845, Florence, Italy. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Distributed Representations of Words and Phrases and their Compositionality. NIPS Proceedings", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed Representa- tions of Words and Phrases and their Compositional- ity. NIPS Proceedings, page 9.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Glove: Global Vectors for Word Representation", |
|
"authors": [ |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Pennington", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1532--1543", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/D14-1162" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global Vectors for Word Representation. In Proceedings of the 2014 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "To Tune or Not to Tune? Adapting Pretrained Representations to Diverse Tasks", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Peters", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Ruder", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "7--14", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W19-4302" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew E. Peters, Sebastian Ruder, and Noah A. Smith. 2019. To Tune or Not to Tune? Adapt- ing Pretrained Representations to Diverse Tasks. In Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019), pages 7-14, Florence, Italy. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Deep dialog act recognition using multiple token, segment, and context information representations", |
|
"authors": [ |
|
{ |
|
"first": "Eug\u00e9nio", |
|
"middle": [], |
|
"last": "Ribeiro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ricardo", |
|
"middle": [], |
|
"last": "Ribeiro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Martins De Matos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Journal of Artificial Intelligence Research", |
|
"volume": "66", |
|
"issue": "", |
|
"pages": "861--899", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eug\u00e9nio Ribeiro, Ricardo Ribeiro, and David Martins de Matos. 2019. Deep dialog act recognition using multiple token, segment, and context information representations. Journal of Artificial Intelligence Re- search, 66:861-899.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Dialogue Act Modeling for Automatic Tagging and Recognition of Conversational Speech", |
|
"authors": [ |
|
{ |
|
"first": "Andreas", |
|
"middle": [], |
|
"last": "Stolcke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Klaus", |
|
"middle": [], |
|
"last": "Ries", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [], |
|
"last": "Coccaro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Elizabeth", |
|
"middle": [], |
|
"last": "Shriberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rebecca", |
|
"middle": [], |
|
"last": "Bates", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Jurafsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Taylor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rachel", |
|
"middle": [], |
|
"last": "Martin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carol", |
|
"middle": [], |
|
"last": "Van Ess-Dykema", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marie", |
|
"middle": [], |
|
"last": "Meteer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Computational Linguistics", |
|
"volume": "26", |
|
"issue": "3", |
|
"pages": "339--373", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/089120100561737" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andreas Stolcke, Klaus Ries, Noah Coccaro, Eliza- beth Shriberg, Rebecca Bates, Daniel Jurafsky, Paul Taylor, Rachel Martin, Carol Van Ess-Dykema, and Marie Meteer. 2000. Dialogue Act Modeling for Au- tomatic Tagging and Recognition of Conversational Speech. Computational Linguistics, 26(3):339-373.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "How to Fine-Tune BERT for Text Classification?", |
|
"authors": [ |
|
{ |
|
"first": "Chi", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xipeng", |
|
"middle": [], |
|
"last": "Qiu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yige", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xuanjing", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Chinese Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "194--206", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1007/978-3-030-32381-3_16" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chi Sun, Xipeng Qiu, Yige Xu, and Xuanjing Huang. 2019. How to Fine-Tune BERT for Text Classi- fication? In Chinese Computational Linguistics, Lecture Notes in Computer Science, pages 194-206, Cham. Springer International Publishing.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "A Hierarchical Neural Model for Learning Sequences of Dialogue Acts", |
|
"authors": [ |
|
{ |
|
"first": "Ingrid", |
|
"middle": [], |
|
"last": "Quan Hung Tran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gholamreza", |
|
"middle": [], |
|
"last": "Zukerman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Haffari", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "428--437", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/E17-1041" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Quan Hung Tran, Ingrid Zukerman, and Gholamreza Haffari. 2017a. A Hierarchical Neural Model for Learning Sequences of Dialogue Acts. In Proceed- ings of the 15th Conference of the European Chap- ter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 428-437, Valencia, Spain. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Preserving Distributional Information in Dialogue Act Classification", |
|
"authors": [ |
|
{ |
|
"first": "Ingrid", |
|
"middle": [], |
|
"last": "Quan Hung Tran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gholamreza", |
|
"middle": [], |
|
"last": "Zukerman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Haffari", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2151--2156", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D17-1229" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Quan Hung Tran, Ingrid Zukerman, and Gholamreza Haffari. 2017b. Preserving Distributional Informa- tion in Dialogue Act Classification. In Proceed- ings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2151-2156, Copenhagen, Denmark. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Comparison of Transfer-Learning Approaches for Response Selection in Multi-Turn Conversations", |
|
"authors": [ |
|
{ |
|
"first": "Jesse", |
|
"middle": [], |
|
"last": "Vig", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kalai", |
|
"middle": [], |
|
"last": "Ramea", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Workshop on Dialog System Technology Challenges", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jesse Vig and Kalai Ramea. 2019. Comparison of Transfer-Learning Approaches for Response Selec- tion in Multi-Turn Conversations. In Proceedings of the Workshop on Dialog System Technology Chal- lenges, page 7, Honolulu, Hawaii.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Transformers: State-of-the-Art Natural Language Processing", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Wolf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lysandre", |
|
"middle": [], |
|
"last": "Debut", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Victor", |
|
"middle": [], |
|
"last": "Sanh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julien", |
|
"middle": [], |
|
"last": "Chaumond", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Clement", |
|
"middle": [], |
|
"last": "Delangue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anthony", |
|
"middle": [], |
|
"last": "Moi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pierric", |
|
"middle": [], |
|
"last": "Cistac", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Rault", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Remi", |
|
"middle": [], |
|
"last": "Louf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Morgan", |
|
"middle": [], |
|
"last": "Funtowicz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joe", |
|
"middle": [], |
|
"last": "Davison", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Shleifer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Clara", |
|
"middle": [], |
|
"last": "Patrick Von Platen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yacine", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julien", |
|
"middle": [], |
|
"last": "Jernite", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Canwen", |
|
"middle": [], |
|
"last": "Plu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Teven", |
|
"middle": [ |
|
"Le" |
|
], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sylvain", |
|
"middle": [], |
|
"last": "Scao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mariama", |
|
"middle": [], |
|
"last": "Gugger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Drame", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "38--45", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.emnlp-demos.6" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, Remi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Trans- formers: State-of-the-Art Natural Language Process- ing. In Proceedings of the 2020 Conference on Em- pirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "A unified neural architecture for joint dialog act segmentation and recognition in spoken dialog system", |
|
"authors": [ |
|
{ |
|
"first": "Tianyu", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tatsuya", |
|
"middle": [], |
|
"last": "Kawahara", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 19th Annual SIGdial Meeting on Discourse and Dialogue", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "201--208", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tianyu Zhao and Tatsuya Kawahara. 2018. A unified neural architecture for joint dialog act segmentation and recognition in spoken dialog system. In Pro- ceedings of the 19th Annual SIGdial Meeting on Dis- course and Dialogue, pages 201-208.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF1": { |
|
"html": null, |
|
"content": "<table/>", |
|
"num": null, |
|
"type_str": "table", |
|
"text": "" |
|
}, |
|
"TABREF3": { |
|
"html": null, |
|
"content": "<table><tr><td>: Comparison between Switchboard and the</td></tr><tr><td>AMI Meeting Corpus</td></tr><tr><td>model's encoder layers are optimized during train-</td></tr><tr><td>ing for the target task.</td></tr></table>", |
|
"num": null, |
|
"type_str": "table", |
|
"text": "" |
|
}, |
|
"TABREF4": { |
|
"html": null, |
|
"content": "<table><tr><td/><td colspan=\"2\">SWDA</td><td colspan=\"2\">AMI-DA</td></tr><tr><td/><td>F1</td><td>acc.</td><td>F1</td><td>acc.</td></tr><tr><td>BERT-FZ</td><td colspan=\"4\">7.75 55.61 14.86 48.34</td></tr><tr><td>BERT+ID-FZ</td><td colspan=\"4\">6.46 52.30 14.48 48.18</td></tr><tr><td>BERT+CC-FZ</td><td colspan=\"4\">5.76 51.14 11.34 40.48</td></tr><tr><td>BERT+OS-FZ</td><td colspan=\"4\">9.60 57.67 17.03 51.03</td></tr><tr><td>BERT-RI</td><td colspan=\"4\">32.18 73.80 34.88 60.89</td></tr><tr><td>Majority class</td><td colspan=\"2\">0.78 33.56</td><td colspan=\"2\">1.88 28.27</td></tr><tr><td>SotA</td><td/><td>-83.1 6</td><td>-</td><td>-</td></tr></table>", |
|
"num": null, |
|
"type_str": "table", |
|
"text": "BERT-FT 36.75 76.60 43.42 64.93 BERT+ID-FT 43.63 77.01 46.70 68.88 BERT+CC-FT 47.78 77.35 48.86 68.79 BERT+OS-FT 41.42 76.95 48.65 68.07" |
|
} |
|
} |
|
} |
|
} |